What are the advantages of coupling/de-coupling HTML generation to event handling?

I'm currently deciding between two different approaches for generating HTML with event handlers. I know both approaches are viable (I've seen both used before), but I'm unclear on their respective advantages, and since I'll be stuck with whichever one I pick I was hoping someone might be able to clarify those advantages for me.
Here's a simplistic example to explain the idea (although my real-world cases will of course be more complex). Let's say we want a button that alerts "Hello Bob". The first approach is to use our view logic to generate the HTML, and then rely on a separate, high-level event handler:
$('body').append('<button class="nameAlert" data-name="Bob">Click me</button>');
$('body').on('click', '.nameAlert', function(e) {
    alert('Hello ' + $(e.target).data('name'));
});
The second approach is to build an element object, bind a handler to it, and then append it to the DOM, all at once:
var $button = $('<button>Click me</button>');
var name = "Bob";
$button.on('click', function(e) {
    alert('Hello ' + name);
});
$('body').append($button);
There are some obvious advantages to the latter approach (e.g. no writing data attributes, all the logic is in one place), but I'm really curious about the non-obvious advantages (e.g. the first version will perform better if we wind up with a lot of these buttons on the page). I'm especially interested in maintainability (that should be a programmer's first priority, right?).
Thanks in advance for any help.

I think what you are asking about is event delegation. Event delegation allows you to bind the event handler on a parent DOM element and handle events that occur within it. There are a number of benefits to this pattern, not the least of which is less repetitive code:
Efficiency. The browser does not have to attach event handlers to multiple DOM elements.
Memory leaks. It helps avoid memory leaks caused by elements that have been removed from the DOM but still have a JavaScript object (the handler) referring to them. Since your parent element typically stays in the DOM as its children change, this doesn't happen.
Live binding. Event delegation will fire on any descendant node that matches the delegation selector, which means elements can be added after the fact and still work.
I'm sure there are other benefits but those are some of the primary reasons for choosing event delegation.
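To make the live-binding point concrete, here is a hand-rolled stand-in for a DOM container (the `Container` class and its `dispatch` method are invented for illustration; in the browser, jQuery's delegated `.on('click', '.nameAlert', …)` does this matching for you):

```javascript
// Minimal sketch of delegation: ONE handler registered on a "parent"
// matches against targets at dispatch time, so children created after
// the handler was bound still trigger it.
class Container {
  constructor() {
    this.delegates = [];
    this.children = [];
  }
  on(type, match, handler) {
    this.delegates.push({ type, match, handler });
  }
  append(child) {
    this.children.push(child);
  }
  dispatch(type, target) {
    for (const d of this.delegates) {
      if (d.type === type && d.match(target)) d.handler({ target });
    }
  }
}

const body = new Container();
const seen = [];

// Bind a single delegated handler before any buttons exist.
body.on('click',
  (el) => el.className === 'nameAlert',
  (e) => seen.push('Hello ' + e.target.name));

// A button added later is still handled -- no rebinding needed.
body.append({ className: 'nameAlert', name: 'Bob' });
body.dispatch('click', body.children[0]);
// seen is now ['Hello Bob']
```

The key point is that the child never holds a handler of its own: matching happens per event, which is exactly why late-added elements "just work".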

Related

LitElement lifecycle: before first render

Is there a way to execute a method exactly after the component has its properties available but before the first render?
I mean something between the class constructor() and firstUpdated().
It sounds trivial; maybe in fact I'm missing something trivial...
The element's constructor is called when the element is created, either through the HTML parser, or for example through document.createElement
The next callback is connectedCallback which is called when the DOM node is connected to the document. At this point, you have access to the element's light DOM. Make sure to call super.connectedCallback() before doing your own work, as the LitElement instance has some work to do here.
The next callback is shouldUpdate, an optional predicate that tells LitElement whether to run its render cycle. It is useful if, for example, you have a single observed data property and destructure deep properties of it in render. I've found it's best to treat this one strictly as a predicate, and not to add all sorts of lifecycle logic inside it.
After that, update and render are called, then updated and firstUpdated. It's generally considered bad practice to perform side effects in render, and the occasions where you really need to override update are rare.
In your case, it sounds very much like you should do your work in connectedCallback, unless you rely on LitElement's rendered shadow DOM, in which case you might consider running your code in firstUpdated and then calling this.requestUpdate() to force a second update (or changing some observed property in firstUpdated).
More info: https://lit-element.polymer-project.org/guide/lifecycle

Polymer 2.0: Event listeners where is the ideal place to add it?

I've seen people adding the event listener in the "ready" function and others in "connectedCallback". My question is: what are the pros and cons of each place? With connectedCallback we are responsible for removing it; with ready, it will stay there, and I'm unsure whether that is a problem.
Should I do this:
connectedCallback() {
    super.connectedCallback();
    // Store the bound function; removeEventListener needs the
    // exact same reference to actually remove the listener.
    this._boundClick = this.myFunction.bind(this);
    this.addEventListener('click', this._boundClick);
}
disconnectedCallback() {
    super.disconnectedCallback();
    this.removeEventListener('click', this._boundClick);
}
Or this:
ready() {
    super.ready();
    this.addEventListener('click', this.myFunction.bind(this));
}
Up until Polymer 1.x, the ready callback in the life cycle of an element was called once
the element had registered its shadow DOM,
any <content> elements were distributed,
and then, after ready, attached was fired.
So you could use ready as a one-time callback after everything was indeed ready.
With Polymer 2.0 onwards, there have been contractual changes to how callbacks are fired, and
The ready callback is no longer guaranteed to execute after the new <slot> elements are distributed, meaning there is no surety that ready itself will wait for content / light DOM distribution.
attached is now the new connectedCallback and is essentially useful for element-level DOM manipulations such as setting attributes, appending children, etc. This is a lifecycle change that happens after the slot nodes are distributed and the element itself is attached in the DOM hierarchy, but not necessarily after a paint.
So, for any event that does not rely on any ::slotted content, use the ready callback;
for anything that requires knowledge of all distributed content along with the shadow DOM, use connectedCallback.
However, when possible, use Polymer's afterNextRender helper within your element's callback to add event listeners.
These are what I could think of. All this and much more, if it helps, here.
I haven't yet read about having to remove an event listener from a lifecycle callback, or anything as such. Only in cases where the element itself may be connected and disconnected dynamically, and, with that in mind, you are adding an event listener on a global / native element within your element's lifecycle callbacks (like attaching an event listener on window inside your custom element's ready or connectedCallback), does Polymer advise you to remove the event listener on disconnect.
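One pitfall with the add-then-remove pattern in the question is worth spelling out: `fn.bind(this)` returns a new function object on every call, so passing a fresh `.bind(this)` result to `removeEventListener` removes nothing. A small sketch using the standard `EventTarget` API (available in modern browsers and recent Node versions; the event names here are arbitrary):

```javascript
// Each call to bind() creates a distinct function object, so the
// listener must be stored if you intend to remove it later.
const target = new EventTarget();
let calls = 0;
const obj = {
  handleClick() { calls++; }
};

// WRONG: the function passed to removeEventListener is a *different*
// bound copy, so the original listener stays attached.
target.addEventListener('ping', obj.handleClick.bind(obj));
target.removeEventListener('ping', obj.handleClick.bind(obj)); // no-op
target.dispatchEvent(new Event('ping'));
// calls is now 1 -- the listener was never removed.

// RIGHT: keep a reference to the one bound function.
const bound = obj.handleClick.bind(obj);
target.addEventListener('pong', bound);
target.removeEventListener('pong', bound); // actually removes it
target.dispatchEvent(new Event('pong'));
// calls is still 1 -- the 'pong' listener was removed before dispatch.
```

This is why the disconnectedCallback variant in the question only works if the bound function is stashed on the instance when it is added.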

Avoiding custom events in Polymer

I was drawn to the idea of a "VanillaJS" approach mentioned by http://customelements.io/ for https://github.com/webcomponents/element-boilerplate . This drew me to http://www.polymer-project.org/platform/custom-elements.html (since I would like to do everything I can in JavaScript rather than use <link/> to do imports).
However, it seems that, contrary to the spirit of VanillaJS and polyfills, they seem to require their own custom "WebComponentsReady" event. (To me a polyfill is something which completely follows a standard or proposed standard, and lets you merely remove a script tag or load when it is fully supported and not need to change any other code.)
(Mozilla's x-tag (which is also not clear on whether it can be used purely as a polyfill or whether one requires the xtag global) uses a "DOMComponentsLoaded" event which I am not clear is standard.)
Neither event is mentioned at http://www.w3.org/TR/custom-elements/
Any way to work with these to use a standard event or avoid these otherwise without polling?
UPDATE
I see from http://lists.w3.org/Archives/Public/public-webapps/2013JulSep/0697.html that the "WebComponentsReady" event was discussed as being a candidate for specification but had not (at least at that time) been added, so I guess this event may be the safest bet since Mozilla's x-tag does use it internally immediately before firing its own "DOMComponentsLoaded" event and it was at least discussed as a possible candidate for standardization. :)
Neither event is part of the standards. Both Polymer's WebComponentsReady and x-tag's DOMComponentsLoaded are fired for convenience. This is one of those things you'd have to do yourself without a sugaring library. I suspect x-tag's event hooks into the WebComponentsReady event because it uses the same set of polyfills as Polymer.
BTW, one reason to use an HTML Import (<link rel="import">) to load components is that you get a `load` event. That could be one signal the component is ready.

Setting the granularity of the HTML5 audio event 'timeupdate'

I'm trying to create a simple looping feature using HTML5 audio and have a very primitive solution as follows:
$(audio).bind('timeupdate', function() {
    if (audio.currentTime >= 26) {
        var blah = audio.currentTime;
        audio.currentTime = 23;
        console.log(blah);
    }
});
This works fine, but the only glitch here is that the timeupdate event doesn't trigger very consistently. For instance, the console.log above returned:
26.14031982421875
26.229642868041992
26.13462257385254
26.21796226501465
...etc. (You get the idea: inconsistent times.)
Obviously this won't work for an application where timing is important (music applications). So the obvious solution to me would be to increase the granularity with which the timeupdate event is triggered. I haven't been able to find any API docs, but would love to know if there is a way to do this.
It should be noted that you can read the currentTime property of a video much more often. If time is really critical, and you can live with a little bit of uncertainty, you can try this instead.
It is, however, a lot less elegant than using the tag's own timeupdate event.
setInterval(function() {
    console.log(audio.currentTime); // will get you a lot more updates
}, 30);
In this case, you'll have to manage the interval yourself, make sure that you clear it before nulling the audio element. But it is a possibility.
I'm using this approach on a video element, but it should apply here, too.
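As a sketch of how the asker's looping feature might sit on top of this interval approach (the function names `makeLoopCheck` and `startLoopPoller` are invented for illustration; anything with a `currentTime` property works, including an `<audio>` element):

```javascript
// Returns a checker that jumps media.currentTime back to loopStart
// once playback passes loopEnd.
function makeLoopCheck(loopStart, loopEnd) {
  return function (media) {
    if (media.currentTime >= loopEnd) {
      media.currentTime = loopStart;
    }
  };
}

// Poll far more often than 'timeupdate' fires. Returns a stop function
// so the interval can be cleared before the element is discarded.
function startLoopPoller(media, check, intervalMs) {
  var id = setInterval(function () { check(media); }, intervalMs || 30);
  return function stop() { clearInterval(id); };
}

// The check itself is synchronous, so it is easy to exercise without
// a real media element:
var check = makeLoopCheck(23, 26);
var fakeAudio = { currentTime: 26.14 };
check(fakeAudio);
// fakeAudio.currentTime is now 23
```

Separating the check from the polling keeps the timing-sensitive logic testable, and the returned stop function is the cleanup the answer above warns about.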
I'm sorry, but that's the way it works. From the html5 specs:
Every 15 to 250ms, or whenever the MediaController's media controller position changes, whichever happens least often, the user agent must queue a task to fire a simple event named timeupdate at the MediaController.
Also,
The event thus is not to be fired faster than about 66Hz or slower than 4Hz (assuming the event handlers don't take longer than 250ms to run). User agents are encouraged to vary the frequency of the event based on the system load and the average cost of processing the event each time, so that the UI updates are not any more frequent than the user agent can comfortably handle while decoding the video.
If you read through the specification, you get the idea that the timeupdate event is a "best effort" kind of event. It will fire when it can, as long as it does not affect performance too much.
You could filter the events, discarding some from time to time to smooth the arrival times, but I'm afraid it's not possible to do the opposite.

How to extend a native mootools method

Is it possible to extend the addEvent function in MooTools to do something and also call the normal addEvent method? Or if someone has a better way to do what I need, I'm all ears.
I have different 'click' handlers depending on which page of the site I'm on. Also, there might be more than one on each page. I want every click on the page to execute a piece of code, besides doing whatever that click listener already does. Adding those two lines to each of the handlers would be a PITA to say the least, so I thought about overriding addEvent so that every time I add a 'click' listener it creates a new function that executes my code and then calls the original function.
Any idea how I could do it?
Whereas this is not impossible, changing MooTools' internal APIs is a questionable practice. Unless you are well versed with MooTools, follow dev direction on GitHub, and know your change won't break future compatibility, I would recommend against it.
The way I see it, you have two routes:
make a new Element method via implement that does your logic, e.g. Element.addMyEvent, which does your thing and then calls the normal element.addEvent. This is preferable and has no real adverse effects (see above).
change the prototype directly. This means you don't get to refactor any code and it will just work, but others who work with your code may have difficulty following it, as well as tracing/troubleshooting: somebody who knows MooTools and the standard addEvent behaviour won't even think to check the prototypes when they hit problems.
MooTools 2.0, when it comes, will likely INVALIDATE method 2 above if MooTools moves away from Element.prototype modification in favour of a wrapper (for compatibility with other frameworks). Go back to method 1 :)
I think solution 1 is better and more obvious.
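Method 1 boils down to adding a new wrapper method rather than touching the original. In MooTools you would do this via Element.implement; as a framework-neutral sketch of the same pattern (the Widget class and every name in it are invented for illustration):

```javascript
// A stand-in "element" class with an addEvent method, playing the role
// MooTools' Element plays.
function Widget() {
  this.handlers = [];
}
Widget.prototype.addEvent = function (type, fn) {
  this.handlers.push({ type: type, fn: fn });
};

var log = [];

// Method 1: a NEW method that runs the shared logic, then delegates to
// the untouched addEvent. The original API is never modified, so code
// that calls plain addEvent behaves exactly as before.
Widget.prototype.addMyEvent = function (type, fn) {
  log.push('added ' + type); // the shared per-listener logic goes here
  return this.addEvent(type, fn);
};

var w = new Widget();
w.addMyEvent('click', function () {});
// log is ['added click'] and w.handlers holds one entry
```

Anyone reading the code sees an unfamiliar method name and goes looking for its definition, which is exactly the traceability advantage over silently rewiring the prototype.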
as for 2: http://jsfiddle.net/dimitar/aTukP/
(function() {
    // setup a proxy via the Element prototype
    var oldProto = Element.prototype.addEvent;
    // you really need [Element, Document, Window] but this is fine
    Element.prototype.addEvent = function(type, fn, internal) {
        console.log("added " + type, this); // add new logic here; 'this' == element
        oldProto.apply(this, arguments);
    };
})();
document.id("foo").addEvent("click", function(e) {
    e.stop();
    console.log("clicked");
    console.log(e);
});
It is that simple. Keep in mind the proxy should also go on Document and Window, not just Element. Also, this won't change the Events class mixin; for that, you need to refactor Events.addEvent instead.