Polymer - cloneNode including __data

I am using the library dragula for doing some drag & drop stuff.
Dragula internally uses cloneNode(true) to create a copy of the dragged element that will be appended to the body to show the preview image while dragging.
Unfortunately, if a Polymer element is dragged, the bound data is not cloned. As a consequence, the contents of the dragged element (e.g. <div>[[someString]]</div>) are empty.
Is there a solution for this?
I actually do not need the data to be bound for my element; it is just a "read-only" element that displays some data that does not change after being initialized. Is there maybe a way to somehow "resolve" the bindings into plain HTML so the data no longer needs to be bound?
Thank you already!

Found a solution myself. You have to override the cloneNode method inside the Polymer element class:
cloneNode(deep) {
  // Clone the DOM as usual, then copy the declared Polymer properties
  // onto the clone so its bindings render the same data.
  let cloned = super.cloneNode(deep);
  for (let prop in MyClass.properties) {
    cloned[prop] = this[prop];
  }
  return cloned;
}
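If you do not want to hard-code the class name, a minimal sketch of a more generic variant (assuming a class-based Polymer 2/3 element, where the static properties getter is reachable via this.constructor) could look like this:
cloneNode(deep) {
  let cloned = super.cloneNode(deep);
  // assumption: this.constructor.properties lists the element's declared properties
  const props = this.constructor.properties || {};
  for (let prop in props) {
    cloned[prop] = this[prop];
  }
  return cloned;
}
The clone is still a live element of the same class, so assigning the properties re-runs the bindings once and the cloned preview shows the same text as the original.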

Related

How should I access generated children of a custom HTML component in an idiomatic React way?

I am attempting to create a search bar using a custom HTML component for predictive text input. The way this component is built, it generates several plain HTML children that I need to act on to get full features. Specifically, I need to execute a blur action on one of the generated elements when the user presses escape or enter.
I got it to work using a ref on the custom component and calling getElementsByClassName on the ref, but using getElementsByClassName does not seem like the best solution. It pierces through the virtual DOM and has odd side effects when testing.
This is a snippet of the component being rendered:
<predictive-input id='header-search-bar-input' type='search'
  value={this.state.keywords}
  ref={(ref: any) => this.predictiveInput = ref}
  onKeyDown={(e: React.KeyboardEvent<any>) => this.handleKeyDown(e)}>
</predictive-input>
and the keyDown handler:
private handleKeyDown(e: React.KeyboardEvent<any>) {
  // must access the underlying input element of the kat-predictive-input
  let input: HTMLElement = this.predictiveInput.getElementsByClassName('header-row-text value')[0] as HTMLElement;
  if (e.key === 'Escape') {
    // blur the predictive input when the user presses escape
    input.blur();
  } else if (e.key === 'Enter') {
    // commit the search when user presses enter
    input.blur();
    // handles action of making actual search, using search bar contents
    this.commitSearch();
  }
}
The element renders two children, one for the bar itself and one for the predictive dropdown. The classes of the underlying input in the first child are 'header-row-text' and 'value', so the element is correctly selected, but I am worried that this violates proper React style.
I am using React 16.2, so only callback refs are available. I would rather avoid upgrading, but if a 16.3+ solution is compelling enough, I could consider it.
If you don't have any control over the input then this is the best approach in my opinion.
It's not ideal, but as you're stuck with a 3rd party component you can only choose from the methods that are available to you. In this case, your only real options are to find the element based on its class, or its position in the hierarchy. Both might change if the package is updated, but if I had to choose which would be more stable, I'd go for className.
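If you want the lookup to read a bit tighter, you can scope a standard querySelector call to the same ref instead of using getElementsByClassName. This is only a sketch, under the assumption that the generated input keeps the 'header-row-text' and 'value' classes:
private handleKeyDown(e: React.KeyboardEvent<any>) {
  // scoped query on the callback ref; still class-based, just a single compound selector
  const input = this.predictiveInput.querySelector('.header-row-text.value') as HTMLElement | null;
  if (!input) {
    return;
  }
  if (e.key === 'Escape' || e.key === 'Enter') {
    input.blur();
  }
  if (e.key === 'Enter') {
    this.commitSearch();
  }
}
Either way you stay coupled to the third-party component's internal markup; the difference is only how the lookup reads, and that querySelector returns null rather than an empty collection if the structure changes.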

Automatically change the type of the elements in an array

I wrote a class for my project like this, using TypeScript and React.
class myImage extends Image {
  oriHeight: number;
}
After I uploaded two images I have an array named 'results' which is full of objects with type myImage.
[myImage, myImage]
When I expand it in the browser console, I can see the oriHeight data of each element.
Then I tried to use the results.map() method to traverse all the elements in that array.
results.map((result: myImage) => {
  console.log(result);
  var tmp = result.oriHeight;
  console.log(tmp);
})
However, the output of result is no longer shown as an object but as an img tag (because Image is an HTMLElement), which makes the data of result unreadable, and the output of every tmp is undefined.
I am confused about that. Why does the myImage object become an img tag when I traverse it? I hope someone could help me with that. Really appreciate it.
I bet your data is actually fine. When you console.log an HTML element, the Chrome console displays it as an HTML tag instead of as the JavaScript object.
Update: It's generally a bad practice to add your own properties to DOM elements, because they're harder to debug and you risk them being overwritten by future browser properties. Instead, you could create a JavaScript object that contains both the image and your custom property. Here's an example interface definition:
interface MyImage {
  imageEl: HTMLImageElement;
  oriHeight: number;
}
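For illustration, a minimal sketch of how such wrapper objects could be built and read (the loadedImages array and the use of naturalHeight for the "original height" are assumptions standing in for your upload code):
// assumption: however your upload code ends up with HTMLImageElement instances
const loadedImages: HTMLImageElement[] = [];

const results: MyImage[] = loadedImages.map(img => ({
  imageEl: img,
  oriHeight: img.naturalHeight // assumption: "original height" means the intrinsic height
}));

results.forEach(result => {
  // console.dir prints the element's JavaScript properties instead of its HTML tag
  console.dir(result.imageEl);
  console.log(result.oriHeight);
});
The same console.dir trick also works on your original code: logging an element with console.dir instead of console.log makes Chrome show it as an object, so you can verify the data really is there.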

Polymer: register a behaviour at runtime

I need to set up the behaviour of a Polymer web component at runtime. I tried to change the "behaviors" array by pushing the new behaviour, but it didn't work. Is there a proper way to do it?
I'm trying to create a table web component with a pager at the bottom. It should be extensible, allowing the loading of data from a JavaScript array, a RESTful service or a custom source. Thus, I decided to create a behaviour for each one of these sources and change it when the source changes. Is this a correct way to design it?
Here, as an example, is the source code of the behaviour that loads data from an array. It has the following function:
itemsLoad: function(page, itemsPerPage, callback) {...
which is called from the web component to load the data of a specific page. My idea is that each behaviour, based on the type of data source (e.g. CSV, JSON, etc.), will implement this method in a different way. Then the behaviour will be registered at run-time, because it is at run-time that the developer knows which source to use.
I don't think you will be able to change behaviours at run-time, because they are mixed into the element prototype.
What you can do is create a separate element for each of your cases (CSV, JSON, etc.) and create nodes dynamically as required. You could then place that element inside your grid:
<table-component>
<json-data-source></json-data-source>
</table-component>
The <table-component> would look for a child element which implements itemsLoad to get the data.
EDIT
To work with child nodes you would use Polymer's DOM API. For example you could listen to added child nodes and select one that implements the itemsLoad method.
Polymer({
  is: 'table-component',
  attached: function() {
    // watch for added child nodes and pick the first one that implements itemsLoad
    Polymer.dom(this).observeNodes(function(info) {
      var newNodes = info.addedNodes;
      for (var i = 0; i < newNodes.length; i++) {
        var dataSource = newNodes[i];
        if (dataSource.itemsLoad && typeof dataSource.itemsLoad === 'function') {
          this.loadItems(dataSource);
          break;
        }
      }
    });
  },
  loadItems: function(dataSource) {
    // assumes itemsLoad returns a Promise
    dataSource.itemsLoad().then(...);
  }
});
You could also replace Polymer.dom(this).observeNodes with a simple iteration over Polymer.dom(this).children, whichever works best for you.
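To make the contract concrete, here is a minimal sketch of what one such data-source element could look like (the url property and the use of fetch are assumptions; the only thing <table-component> relies on is that itemsLoad exists and returns a Promise, matching the .then call above):
Polymer({
  is: 'json-data-source',
  properties: {
    // assumption: the location of the JSON is configured via an attribute
    url: String
  },
  itemsLoad: function(page, itemsPerPage) {
    // assumption: the endpoint understands simple paging query parameters
    return fetch(this.url + '?page=' + page + '&size=' + itemsPerPage)
      .then(function(response) { return response.json(); });
  }
});
A csv-data-source or array-data-source would implement the same itemsLoad signature against its own source, which is what lets you swap the data source at run-time by swapping the child element.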

How to access more than 2 DOM elements "The AngularJS way"?

I'm starting to learn AngularJS better, and I've noticed that AngularJS places a strong emphasis on separating the view from the controller and on encapsulation. One example of this is people telling me DOM manipulation should go in directives. I kinda have the hang of it now, and of how link functions that inject the current element allow for great behavior functionality, but this doesn't explain a problem I always encounter.
Example:
I have a sidebar I want to open by clicking a button. There is no way to do this in the button's directive link function without using a hard-coded JavaScript/jQuery selector to grab the sidebar, something I've seen frowned upon in AngularJS (hard-coding DOM selectors) since it breaks separation of concerns. I guess one way of getting around this is making each element I wish to manipulate an attribute directive and, in its link function, saving a reference to its element in a dom-factory, so that whenever a directive needs to access an element other than itself, it can call the dom-factory, which returns the element even though it knows nothing about where it came from. But is this the "Angular way"?
I say this because in my current project I'm using hard-coded selectors, which are already a pain to maintain because I'm constantly changing my CSS. There must be a better way to access multiple DOM elements. Any ideas?
There are a number of ways to approach this.
One approach is to create a sidebar directive that responds to "well-defined" broadcast messages to open/close the sidebar.
.directive("sidebar", function(){
return {
templateUrl: "sidebar.template.html",
link: function(scope, element){
scope.$root.$on("openSidebar", function(){
// whatever you do to actually show the sidebar DOM content
// e.x. element.show();
});
}
}
});
Then, a button could invoke a function in some controller to open a sidebar:
$scope.openSidebar = function(){
  $scope.$root.$emit("openSidebar");
}
Another approach is to use a $sidebar service - this is somewhat similar to how $modal works in angularui-bootstrap, but could be more simplified.
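For that second approach, a minimal sketch of such a service (the name $sidebar and its API are assumptions; it simply hides the event plumbing behind a service):
.factory("$sidebar", function($rootScope){
  // controllers and directives inject $sidebar instead of knowing the event names
  return {
    open: function(){ $rootScope.$emit("openSidebar"); },
    close: function(){ $rootScope.$emit("closeSidebar"); }
  };
});
A controller would then just call $sidebar.open(), and the sidebar directive keeps listening on the root scope as shown above.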
Well, if you have a directive on a button and the element you need is outside the directive, you could pass the class of the element you need to toggle as an attribute:
<button my-directive data-toggle-class="sidebar">open</button>
Then in your directive
App.directive('myDirective', function() {
  return {
    restrict: 'A',
    link: function(scope, element, attrs) {
      // toggle on click; jqLite cannot look up elements by selector,
      // so fall back to document.querySelector (with full jQuery loaded,
      // angular.element('.' + attrs.toggleClass) would also work directly)
      element.on('click', function() {
        angular.element(document.querySelector('.' + attrs.toggleClass)).toggleClass('active');
      });
    }
  };
});
Unfortunately, you won't always have the link element argument match up with what you need to manipulate. There are many "angular ways" to solve this, though.
You could even do something like:
<div ng-init="isOpen = false" class="sidebar" ng-class="{'active': isOpen}" ng-click="isOpen = !isOpen">
  ...
</div>
The best way for directives to communicate with each other is through events, which also keeps with the separation of concerns. Your button could $broadcast on the $rootScope so that all scopes hear it. You would emit an event such as sidebar.open, and the sidebar directive would listen for that event and act upon it.
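A minimal sketch of that event-based wiring (the directive names, the sidebar.open event name, and the active class are assumptions):
.directive("sidebarToggle", function($rootScope){
  return {
    restrict: "A",
    link: function(scope, element){
      // broadcast from the root scope so every scope, including the sidebar's, hears it
      element.on("click", function(){
        $rootScope.$broadcast("sidebar.open");
      });
    }
  };
})
.directive("sidebar", function(){
  return {
    restrict: "E",
    link: function(scope, element){
      scope.$on("sidebar.open", function(){
        element.addClass("active");
      });
    }
  };
});
Neither directive knows a selector for the other; the event name is the only shared contract.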

mootools - using addEvent to element not working properly?

bangin' my head against this and it's starting to hurt.
I'm having trouble with adding an event to an element.
I'm able to add the event, and then call it immediately with element.fireEvent('click'), but once the element is attached to the DOM, it does not react to the click.
example code:
var el = new Element('strong').setStyle('cursor','pointer');
el.addEvent('click',function () { alert('hi!'); });
el.replaces(old_element); // you can assume old_element exists
el.fireEvent('click'); // alert fires
However, once I attach this to the DOM, the element does not react to the click. Styles stick (cursor is pointer when I mouse over), but no event fires. I tried mouseover as well, to no avail.
Any clues here? Am I missing something basic? I am doing this all over the place, but in this one instance it doesn't work.
EDIT----------------
OK, here's some more code. Unfortunately I can't expose the real code, as it's for a project that is still under tight wraps.
Basically, the nodes all get picked up as "replaceable", then the JSON found in the rel="" attribute sets the stage for what each one should be replaced by. In this particular instance, the replaced element is a user name that should pop up some info when clicked.
Again, if I fire the event directly after attaching it, all is good, but the element does not react to the click once it's attached.
HTML-----------
<p>Example: <span class='_mootpl_' rel="{'text':'foo','tag':'strong','event':'click','action':'MyAction','params':{'var1': 'val1','var2': 'val2'}}"></span></p>
JAVASCRIPT-----
assumptions:
1. below two functions are part of a larger class
2. ROOTELEMENT is set at initialize()
3. MyAction is defined before any parsing takes place (and is properly handled on the .fireEvent() test)
parseTemplate: function() {
  this.ROOTELEMENT.getElements('span._mootpl_').each(function(el) {
    var _c = JSON.decode(el.get('rel'));
    var new_el = this.get_replace_element(_c); // sets up the base element
    if (_c.hasOwnProperty('event')) {
      new_el = this.attach_event(new_el, _c);
    }
  }, this); // bind the iterator so this.get_replace_element / this.attach_event resolve
},
attach_event: function(el, _c) {
  el.store(_c.event+'-action', _c.action);
  el.store('params', _c.params);
  el.addEvent(_c.event, function() {
    eval(this.retrieve('click-action') + '(this);');
  }).setStyle('cursor','pointer');
  return el;
},
Works just fine. Test case: http://jsfiddle.net/2GX66/
Debugging this is not easy when you lack content / DOM.
First: do you use event delegation, or have event handlers on a parent / the parent element that call event.stop()?
If so, replace that with event.preventDefault().
Second thing to do: do not replace an element, but put it somewhere else in the DOM - like document.body's first node - and see if it works there.
If it does work elsewhere, see #1.
Though I realise you said "example code", you should write this as:
new Element('strong', {
  styles: {
    cursor: "pointer"
  },
  events: {
    click: function(event) {
      console.log("hi");
    }
  }
}).replaces(old_element);
There's no point in doing 3 separate statements and saving a reference if you are not going to reuse it. You really ought to show the ACTUAL code if you need advice, though. In this snippet you don't even set content text, so the element won't show if it's inline. Could it be a styling issue - what is the display on the element, inline? inline-block?
Can you assign it a class that changes it on a :hover pseudo and see it do it? Mind you, you say the cursor sticks, which means you can mouse over it - hence CSS works. This also eliminates the possibility of any element shims above it / transparent elements that could prevent the event from bubbling.
Finally, assign it an id in the making and assign the event to a parent element via:
parentEl.addEvent("click:relay(strong#idhere)", fn);
and see if it works that way (you need Element.delegate from MooTools More).
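For example, a minimal sketch of that delegation approach (the wrapper id and the strong element's id are made-up placeholders):
// delegate the click from a stable ancestor; this survives the <strong> being replaced later
document.id('wrapper').addEvent('click:relay(strong#username)', function(event, target) {
  // target is the matched <strong>, not the ancestor the listener is attached to
  alert('clicked ' + target.get('text'));
});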
Good luck - gotta love the weird problems, makes our job worth doing. It wouldn't be the worst thing to post a URL or a JSFiddle, too...