Slick (MooTools) selector engine documentation

The MooTools Slick selector engine documentation seems kind of sparse / unfriendly.
http://mootools.net/docs/core/Slick/Slick
An example:
Normally I can reach the last child of an element with:
$('wrapper').getLast().setStyle('background-color','green');
How do I use the new Slick engine to achieve the same?
And where is the documentation?
Should I just learn CSS3 selectors?
In their example they use $$('p.foo !^') to get the last child of p.foo, whatever that means. (Do they mean the last instance of p.foo in the $$ array, or the last child of the last element?)
Here I tried to fiddle a bit; the last two don't work:
http://jsfiddle.net/XLVr6/1/

The example below will select the last child of the element with id="wrapper". It will only return one element.
$$('#wrapper !^').setStyle('background-color','red');
Or, a better way, since only one element is needed and it is faster:
document.getElement('#wrapper !^').setStyle('background-color','red');
However, if it's written like this, it selects the last child of every p element on the page with class="wrapper":
$$('p.wrapper !^').setStyle('background-color','red');
Another way to do it is like this, although the next example is faster:
$('someId').getLast().setStyle('background-color','red');
As pointed out by Dimitar this is a better (faster) way to do it:
document.getElement('#someId :last-child')
As for your fiddle, the two last selectors should be written like this:
$$('#wrapper :last-child').setStyle('background-color','red');
$$('#wrapper !^').setStyle('background-color','red');
Please note the space between "wrapper" and ":last-child"; that is because we are selecting the last child from among the child elements of "wrapper".
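Putting it together, a minimal sketch (assuming MooTools Core with Slick is loaded and #wrapper contains a few plain p elements):
window.addEvent('domready', function() {
    // Slick's '!^' combinator: the last child of #wrapper (a single element)
    document.getElement('#wrapper !^').setStyle('background-color', 'green');
    // the same element via the standard CSS3 pseudo-class
    document.getElement('#wrapper :last-child').setStyle('background-color', 'green');
    // the pre-Slick way from the question
    $('wrapper').getLast().setStyle('background-color', 'green');
});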

Related

XPath select second element of many unless only one exists

I have a webpage with three <input> elements that all have the same name attribute. Ideally, I would like to select the second of these elements except sometimes there is only one element on the page and I want to instead select that element.
Ideally I would like something like (pseudo-code since max doesn't exist)
(//input[@name='myname'])[max(1, last()-1)]
I thought that maybe I could do something like the following except it yields all three elements
(//input[@name='myname'])[last()-1 or 1]
What is the best way to accomplish this using XPath?
Maybe grab both and then take only the last one.
If there are two or more, it gets the second; if there's only one, it grabs that one.
((//input[@name='myname'])[position()=1 or position()=2])[last()]
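For completeness, a small sketch of running that expression from plain JavaScript in the browser (the focus() call is just an example action):
var xpath = "((//input[@name='myname'])[position()=1 or position()=2])[last()]";
var result = document.evaluate(xpath, document, null, XPathResult.FIRST_ORDERED_NODE_TYPE, null);
var input = result.singleNodeValue; // the second <input> if several exist, otherwise the only one
if (input) {
    input.focus();
}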

How to select a child node by conditionally excluding a parent's previous sibling

I have a question regarding using (what are to me) some complex XPath queries in Selenium IDE (though they do apply to XPath in general).
Firstly, here is my scenario.
I'm writing some automated tests for a feature of a website I am working on that only certain items for sale on the website have. I'm trying to engineer the test in such a way that changes in data will not break it. Here is an abstraction of what I'm testing:
Given a set of search results, certain products within the results will have a feature (let's call the feature attributes), I want to click on the first result (which may change in the future) that has a single price and attributes.
I am using Selenium IDE 2.5.0 / FF 28.
Here is a JsFiddle I created that simulates the markup / DOM structure I have to work with (the markup cannot be changed): http://jsfiddle.net/xDaevax/3qUHB/6/
Here is my XPath query:
//div[contains(@class, 'primary')]//div[contains(@class, 'results')]//div[@class='price-range']/span[not(contains(@class, 'seperate'))]/../../..//a[@class='detail-link']
Essentially, the problem is this: all three results have the same wrapping markup and CSS class information, but they differ in the price-range element, because the second result (the one I'm after) does not have the "separate" or "minimum" CSS class elements.
I have made it this far with the XPath selector, but am stuck. I assume that when I traverse back up the DOM with the "/../..", I am losing the conditional XPath clause I previously used.
I apologize for the vagueness of the details, but due to contractual restrictions, I'm being as generic as possible.
Any suggestions on how to achieve the selection I want would be greatly appreciated. Please let me know if I need to clarify any of the requirements or steps I have tried.
Edit:
Here is a succinct description of the desired outcome.
In the markup example given, I want to select and click the link in the middle result element only. This is because the middle element has the desired "attributes"; once the link is clicked, it will take you to the product page, which has additional things that need to be tested. That being said, the data could change: today it is the second element in the list, but maybe tomorrow it is the 7th element of 16 total elements.
My current logic for the XPath (though my solution does not work) is as follows: the element I am interested in is distinguishable from the other results because of two things: 1) it has a detail hyperlink (that will later be clicked) and 2) it does not have a range of prices (unlike the first result). Because the first result also has a hyperlink, the only difference between the two is that the first result has a minimum and separator markup element, while the second does not (my target link will always have a single price and not a range). Based on this information, I tried to write XPath that will select the first hyperlink that is not contained within an element that has a price range.
This expression will select all three div elements:
//div[contains(@class, 'primary')]
//div[contains(@class, 'results')]
//div[@class='price-range']
If I understood your requirements correctly, the price-range div must have a sibling that is an <a href> element, so we can filter out the last div by adding that restriction in a predicate: [../a[@href]]. So this expression selects only the first two divs:
//div[contains(@class, 'primary')]
//div[contains(@class, 'results')]
//div[@class='price-range']
[../a[@href]]
Now you can add one more predicate to remove the items that don't have a single price. You chose the separate class as the criterion, so we can change that last predicate and add another restriction to it: [../a[@href] and not(span[contains(@class,'separate')])]. Now your expression selects the div that you want:
//div[contains(@class, 'primary')]
//div[contains(@class, 'results')]
//div[@class='price-range']
[../a[@href] and not(span[contains(@class,'separate')])]
This is a location path, which creates a context. From this context, you can navigate anywhere you want. You can get the sibling <a href> by adding a new step with its relative path: ../a. So, finally, this expression selects the link at the same level as your div:
//div[contains(@class, 'primary')]
//div[contains(@class, 'results')]
//div[@class='price-range']
[../a[@href] and not(span[contains(@class,'separate')])]
/../a
Or in one line:
//div[contains(@class, 'primary')]//div[contains(@class, 'results')]//div[@class='price-range'][../a[@href] and not(span[contains(@class,'separate')])]/../a
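Outside Selenium IDE, a rough sketch of evaluating that final expression from plain JavaScript and clicking the resulting detail link (assuming the fiddle's markup) might look like:
var xpath = "//div[contains(@class, 'primary')]//div[contains(@class, 'results')]" +
    "//div[@class='price-range'][../a[@href] and not(span[contains(@class,'separate')])]/../a";
var link = document.evaluate(xpath, document, null, XPathResult.FIRST_ORDERED_NODE_TYPE, null).singleNodeValue;
if (link) {
    link.click(); // follow the detail link of the matching result
}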

Benefits of "class" vs "id" attributes for a single-use HTML element

I'm doing a Code Academy course and they ask me to use left and right column classes as opposed to IDs. I'm not sure why...
It seems to me that I'm only going to have one div that is the left column, and one div that is the right column... so why would I use a class instead of an ID for this?
They probably want you to refer to the element in order to move it to the left somehow. It is better to use a class because it is possible that at some point you'll want to move another element to the left. If you use an id instead of a class, you may need to repeat the same CSS rule for two different elements (with different IDs). Code repetition is considered bad practice and should be avoided, if possible, at the design level (so there is no need to rewrite anything later), hence the suggestion to use a class instead of an id.
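For illustration, a tiny hypothetical example of the repetition IDs can force compared with a class (the names are made up):
/* with IDs: the same declarations have to be written twice */
#left-column { float: left; width: 50%; }
#sidebar     { float: left; width: 50%; }
/* with a class: one rule, reused by any element that needs it */
.pull-left   { float: left; width: 50%; }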
IDs perform well because they are unique per page, but Code Academy's markup might end up reusing the same ID in places. They might also want to avoid IDs because the application and its structure are dynamic, so the page skeleton cannot be predicted. I think it comes down to their application logic.

HTML scoped IDs workaround

I need to find a way to implement the concept of "scoped IDs" in HTML/XML.
I know that the id attribute of an element must hold a unique value for the entire document, but I'm wondering if there's a workaround ('hack', 'cheat', whatever) that I can do to create scoped IDs. That is, for any particular sectioning/containing element, IDs would be unique, but outside of the container, those IDs would be hidden and couldn't be referenced. With nested sections, inner sections will still be able to access their parent section's element's IDs but not the other way around.
I thought about using <iframe>s, but those are just icky.
Maybe there's a solution using JavaScript/jQuery?
Not possible.
This is exactly what classes are for though. Give unique IDs to each "section" or container element and then use classes for the common descendant elements you wanted to use recurring IDs for, then target them with #unique-container .common-element selectors.
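A small sketch of that pattern (the container IDs and class name below are made up):
<div id="intro-section">
  <h2 class="title">Intro</h2>
</div>
<div id="details-section">
  <h2 class="title">Details</h2>
</div>
// a "scoped" lookup with jQuery: only the title inside #details-section is touched
jQuery('#details-section .title').addClass('highlight');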
I find it hard to imagine a situation in which you would want to do what you described anyway. You are basically just asking if you can use IDs as classes, but that's why classes exist in the first place.
I suppose you could make some kind of pseudo-scoped ID by adding custom HTML5 data attributes to the elements and processing them / doing whatever you want to do with them in JavaScript, but again, without any context as to why you want to do this, it's hard to really recommend anything here.
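A rough sketch of that data-attribute idea (the attribute name data-local-id and the helper function are hypothetical), again with jQuery:
// look up a "scoped id" inside one container only
function scopedById(containerSelector, localId) {
    return jQuery(containerSelector).find('[data-local-id="' + localId + '"]').first();
}
// e.g. with <p data-local-id="summary"> inside both #chapter-1 and #chapter-2:
scopedById('#chapter-2', 'summary').text('only chapter 2 is touched');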

Why do browsers match CSS selectors from right to left?

CSS Selectors are matched by browser engines from right to left. So they first find the children and then check their parents to see if they match the rest of the parts of the rule.
Why is this?
Is it just because the spec says?
Does it affect the eventual layout if it was evaluated from left to right?
To me the simplest way to do it would be to use the selectors with the fewest elements first. So IDs first (as they should only return one element), then maybe classes, or an element type with the fewest nodes; for example, there may only be one span on the page, so go directly to that node with any rule that references a span.
Here are some links backing up my claims
http://code.google.com/speed/page-speed/docs/rendering.html
https://developer.mozilla.org/en/Writing_Efficient_CSS
It sounds like it is done this way to avoid having to look at all the children of a parent (which could be many) rather than all the ancestors of a child (each element has exactly one parent). Even if the DOM is deep, RTL matching only looks at one node per level rather than several. See also: Is it easier/faster to evaluate CSS selectors LTR or RTL?
Keep in mind that when a browser is doing selector matching it has one element (the one it's trying to determine style for) and all your rules and their selectors and it needs to find which rules match the element. This is different from the usual jQuery thing, say, where you only have one selector and you need to find all the elements that match that selector.
If you only had one selector and only one element to compare against that selector, then left-to-right makes more sense in some cases. But that's decidedly not the browser's situation. The browser is trying to render Gmail or whatever and has the one <span> it's trying to style and the 10,000+ rules Gmail puts in its stylesheet (I'm not making that number up).
In particular, in the situation the browser is looking at, most of the selectors it's considering don't match the element in question. So the problem becomes one of deciding that a selector doesn't match as fast as possible; if that requires a bit of extra work in the cases that do match, you still win thanks to all the work you save in the cases that don't match.
If you start by just matching the rightmost part of the selector against your element, then chances are it won't match and you're done. If it does match, you have to do more work, but only proportional to your tree depth, which is not that big in most cases.
On the other hand, if you start by matching the leftmost part of the selector... what do you match it against? You have to start walking the DOM, looking for nodes that might match it. Just discovering that there's nothing matching that leftmost part might take a while.
So browsers match from the right; it gives an obvious starting point and lets you get rid of most of the candidate selectors very quickly. You can see some data at http://groups.google.com/group/mozilla.dev.tech.layout/browse_thread/thread/b185e455a0b3562a/7db34de545c17665 (though the notation is confusing), but the upshot is that for Gmail in particular two years ago, for 70% of the (rule, element) pairs you could decide that the rule does not match after just examining the tag/class/id parts of the rightmost selector for the rule. The corresponding number for Mozilla's pageload performance test suite was 72%. So it's really worth trying to get rid of those 2/3 of all rules as fast as you can and then only worry about matching the remaining 1/3.
Note also that there are other optimizations browsers already do to avoid even trying to match rules that definitely won't match. For example, if the rightmost selector has an id and that id doesn't match the element's id, then there will be no attempt to match that selector against that element at all in Gecko: the set of "selectors with IDs" that are attempted comes from a hashtable lookup on the element's ID. So this is 70% of the rules which have a pretty good chance of matching that still don't match after considering just the tag/class/id of the rightmost selector.
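To make the direction concrete, here is a much simplified sketch in plain JavaScript (not how any real engine is implemented) of matching one element against a selector such as '#menu ul a', right to left, handling descendant combinators only:
// parts are the compound selectors, rightmost first: '#menu ul a' -> ['a', 'ul', '#menu']
function matchesRightToLeft(element, parts) {
    // the rightmost part must match the element itself; bail out fast if it doesn't
    if (!element.matches(parts[0])) {
        return false;
    }
    // each remaining part only has to match some ancestor, walking up the tree
    var ancestor = element.parentElement;
    for (var i = 1; i < parts.length; i++) {
        while (ancestor && !ancestor.matches(parts[i])) {
            ancestor = ancestor.parentElement;
        }
        if (!ancestor) {
            return false; // ran out of ancestors: this rule does not apply
        }
        ancestor = ancestor.parentElement;
    }
    return true;
}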
Right-to-left parsing, also called bottom-up parsing, is actually more efficient for the browser.
Consider the following:
#menu ul li a { color: #00f; }
The browser first checks for a, then li, then ul, and then #menu.
This is because, as the browser is scanning the page, it only needs to look at the current element/node and all the previous nodes/elements it has already scanned.
The thing to note is that the browser starts processing the moment it gets a complete tag/node and doesn't have to wait for the whole page, except when it finds a script, in which case it temporarily pauses, completes execution of the script, and then goes forward.
If it did it the other way round, it would be inefficient: the browser would find the element it was scanning on the first check, but would then be forced to continue looking through the document for all the additional selectors. For this the browser would need the entire HTML and might have to scan the whole page before it starts CSS painting.
This is contrary to how most JS libraries parse the DOM. There the DOM is already constructed, and the library doesn't need to scan the entire page; it just finds the first element and then goes on matching the others inside it.
It allows for cascading from the more specific to the less specific. It also allows a short circuit in application. If the more specific rule applies in all aspects that the parent rule applies to, all parent rules are ignored. If there are other bits in the parent, they are applied.
If you went the other way around, you would format according to the parent and then overwrite it every time the child has something different. In the long run, this is a lot more work than ignoring items in rules that are already taken care of.