xpath find link containing HTML in page

This is not the same question as xpath find specific link in page. I've got a link whose content is foo <em class="bar">baz</em>. and need to find that link by its full content, including the closing dot.

Note: I'm following up on OP's comment
A (visually) simpler variation of OP's own answer could be:
//a[. = "foo baz."][em[#class = "bar"] = "baz"]
or even:
//a[.="foo baz." and em[#class="bar"]="baz"]
(assuming you want to select the <a> node, and not the child <em>)
Regarding OP's question:
why the [em[]= doesn't need the dot?
Inside a predicate, testing = against a string on the right compares the string-value of the node(s) on the left, here the <em> element's string-value, i.e. what string() would return for it.
XPath 1.0 specification document has an example of this:
chapter[title="Introduction"] selects the chapter children of the context node that have one or more title children with string-value equal to "Introduction"
Later, the same spec says on boolean tests:
If one object to be compared is a node-set and the other is a string, then the comparison will be true if and only if there is a node in the node-set such that the result of performing the comparison on the string-value of the node and the other string is true.
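That rule is easy to see with an XPath 1.0 engine. Here is a minimal sketch using lxml (my assumption; any XPath 1.0 implementation behaves the same way), built on the spec's chapter/title example:
from lxml import etree

book = etree.fromstring(
    "<book>"
    "<chapter><title>Introduction</title></chapter>"
    "<chapter><title>Details</title></chapter>"
    "</book>"
)

# chapter[title="Introduction"] keeps a chapter if at least one <title>
# child has the string-value "Introduction"
chapters = book.xpath('chapter[title="Introduction"]')
print(len(chapters))            # 1
print(chapters[0][0].text)      # Introduction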
In OP's answer, //a[string() = 'foo baz.']/em[@class='bar' and .='baz'], the . is needed since the test on 'baz' is made on the context node.
Note that my answer is somewhat naive and assumes there's only 1 <em> child of <a>, because [em[@class="bar"]="baz"] is looking for one em[@class="bar"] matching the string-value condition, not checking that it's the only or first one.
Consider this input (a second <em class="bar"> child, but empty):
<a>foo <em class="bar">baz</em><em class="bar"></em>.</a>
and this test using Scrapy selectors
>>> import scrapy
>>> s = scrapy.Selector(text="""<a>foo <em class="bar">baz</em><em class="bar"></em>.</a>""")
>>> s.xpath('//a[.="foo baz." and em[@class="bar"]="baz"]').extract_first()
u'<a>foo <em class="bar">baz</em><em class="bar"></em>.</a>'
>>>
The XPath matches but you may not want this.

In my understanding, XPath can't see the raw HTML markup; it works on the abstracted layer of the HTML document. Trying to incorporate as much of the information the HTML markup contains as possible into an XPath expression would yield something like this:
//a[
    node()[1][self::text() and .='foo ']
    /following-sibling::node()[1][self::em[@class='bar' and .='baz']]
    /following-sibling::node()[1][self::text() and .='.']
]
A brief explanation of the predicates being used:
node()[1][self::text() and .='foo '] : the first child node is a text node whose value equals "foo "
/following-sibling::node()[1][self::em[@class='bar' and .='baz']] : followed directly by an <em> whose class attribute equals "bar" and whose value equals "baz"
/following-sibling::node()[1][self::text() and .='.'] : followed directly by a text node whose value equals "."
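As a hedged check of the expression above, it can be run through lxml (assumed available); the two inputs are the reconstructed samples from this page:
from lxml import html

strict_xpath = (
    "//a[node()[1][self::text() and .='foo ']"
    "/following-sibling::node()[1][self::em[@class='bar' and .='baz']]"
    "/following-sibling::node()[1][self::text() and .='.']]"
)

# The exact child sequence text / em / text matches:
ok = html.fromstring('<div><a>foo <em class="bar">baz</em>.</a></div>')
print(len(ok.xpath(strict_xpath)))        # 1

# The variant with a second, empty <em class="bar"> breaks the sibling chain:
extra = html.fromstring('<div><a>foo <em class="bar">baz</em><em class="bar"></em>.</a></div>')
print(len(extra.xpath(strict_xpath)))     # 0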

This is not 100% exact because there can be other HTML tags that we have stripped by calling string(), but for my purposes this looks good enough:
//a[string() = 'foo baz.']/em[@class='bar' and .='baz']

Related

How to write a common XPath for the same text displayed in different HTML tags?

I want to write a common XPath for the result displayed for my searched text 'Automation Server'.
The same text is displayed in td HTML tags as well as in div HTML tags as shown below, and I wrote the XPath below based on my understanding from going through different articles:
displayed_text = //td[contains(text(),'Automation Server') or div[contains(text(),' Automation Server ')]
<td role="cell" mat-cell="" class="mat-cell cdk-cell cdk-column-siteName mat-column-siteName ng-star-inserted">Automation Server</td>
<div class="change-list-value ng-star-inserted"> Automation Server </div>
The operator you are looking for in XPath is |. It is a union operator and will return both sets of elements.
The XPath you are looking for is
//td[contains(text(),'Automation Server')] | //div[contains(text(),'Automation Server')]
This XPath,
//*[self::td or self::div][text()[normalize-space()='Automation Server']]
will select all td or div elements with an immediate text node whose normalized string value equals 'Automation Server'.
Cautions regarding other answers here
| is not logical-OR or "OR-like".
It is a union operator over node sets (XPath 1.0) or sequences (XPath 2.0+), not boolean values.
See: Logical OR in XPath? Why isn't | working?
contains(text(), "string") only tests the first text node child.
See: Why is contains(text(), "string" ) not working in XPath?
A few alternatives to JeffC's answer, using properties common to both elements:
1. Use * as a wildcard for any element:
//*[contains(@class,'ng-star-inserted') and normalize-space(text())='Automation Server']
2. Additionally, use the local-name() function to narrow down the names of the elements:
//*[local-name()[.='td' or .='div']][contains(@class,'ng-star-inserted') and normalize-space(text())='Automation Server']
The normalize-space() function can be used to clean up the surrounding white space, so the = operator can be used.
You could use the following XPath to test the local-name() of the element in a predicate and whether its text() contains the phrase:
//*[(local-name() = "td" or local-name() = "div") and contains(text(), "Automation Server")]
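For illustration, here is a small sketch comparing the union form and the combined-predicate form with lxml (an assumption; any XPath 1.0 engine should agree). The markup is a trimmed-down version of the sample in the question:
from lxml import html

doc = html.fromstring(
    '<div id="wrapper">'
    '<table><tr><td class="mat-cell ng-star-inserted">Automation Server</td></tr></table>'
    '<div class="change-list-value ng-star-inserted"> Automation Server </div>'
    '</div>'
)

union = doc.xpath(
    "//td[contains(text(),'Automation Server')]"
    " | //div[contains(text(),'Automation Server')]"
)
combined = doc.xpath(
    "//*[self::td or self::div][text()[normalize-space()='Automation Server']]"
)

print(len(union))     # 2: the <td> and the inner <div>
print(len(combined))  # 2: the same two elements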

Why is XPath contains(text(),'substring') not working as expected?

Let's say I have a piece of HTML like this:
<a>Ask Question<other/>more text</a>
I can match this piece of XPath:
//a[text() = 'Ask Question']
Or...
//a[text() = 'more text']
Or I can use dot to match the whole thing:
//a[. = 'Ask Questionmore text']
This post describes the difference between . (dot) and text(), but in short the former returns a single node, whereas the latter returns a list of nodes. But this is where it gets a bit weird to me, because while text() can be used to match either of the nodes on the list, this is not the case when it comes to the XPath function contains(). If I do this:
//a[contains(text(), 'Ask Question')]
...I get the following error:
Error: Required cardinality of first argument of contains() is one or zero
How can it be that text() works when using a full match (equals), but doesn't work on partial matches (contains)?
For this markup,
<a>Ask Question<other/>more text</a>
notice that the a element has a text node child ("Ask Question"), an empty element child (other), and a second text node child ("more text").
Here's how to reason through what's happening when evaluating //a[contains(text(),'Ask Question')] against that markup:
1. contains(x,y) expects x to be a string, but text() matches two text nodes.
2. In XPath 1.0, the rule for converting multiple nodes to a string is this:
A node-set is converted to a string by returning the string-value of the node in the node-set that is first in document order. If the node-set is empty, an empty string is returned. [Emphasis added]
3. In XPath 2.0+, it is an error to provide a sequence of text nodes to a function expecting a string, so contains(text(),'substr') will cause an error when there is more than one matching text node.
In your case...
XPath 1.0 would treat contains(text(),'Ask Question') as
contains('Ask Question','Ask Question')
which is true. On the other hand, be sure to notice that contains(text(),'more text') will evaluate to false in XPath 1.0. Without knowing the (1)-(3) above, this can be counter-intuitive.
XPath 2.0 would treat it as an error.
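A quick way to see the XPath 1.0 side of this is lxml, whose XPath engine is 1.0 (using lxml here is my choice, not the question's):
from lxml import etree

a = etree.fromstring("<a>Ask Question<other/>more text</a>")

# Only the first text node ("Ask Question") is used by contains():
print(len(a.xpath("//a[contains(text(), 'Ask Question')]")))  # 1
print(len(a.xpath("//a[contains(text(), 'more text')]")))     # 0  (counter-intuitive)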
Better alternatives
If the goal is to find all a elements whose string value contains the substring, "Ask Question":
//a[contains(.,'Ask Question')]
This is the most common requirement.
If the goal is to find all a elements with an immediate text node child equal to "Ask Question":
//a[text()='Ask Question']
This can be useful when you wish to exclude strings from descendant elements of a, such as if you want this a,
<a>Ask Question<other/>more text</a>
but not this a:
<a>more text before <not>Ask Question</not> more text after</a>
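A short, hedged illustration of the two alternatives with lxml (assumed available), using both <a> samples above:
from lxml import etree

doc = etree.fromstring(
    "<root>"
    "<a>Ask Question<other/>more text</a>"
    "<a>more text before <not>Ask Question</not> more text after</a>"
    "</root>"
)

# Substring test against the whole string value matches both <a> elements:
print(len(doc.xpath("//a[contains(., 'Ask Question')]")))   # 2
# Equality test on an immediate text node child matches only the first <a>:
print(len(doc.xpath("//a[text()='Ask Question']")))         # 1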
See also
How contains() handles a nodeset first arg
How to use XPath contains() for specific text?
Testing text() nodes vs string values in XPath
The reason for this is that the contains function doesn't accept a nodeset as input - it only accepts a string. (Well, it may be engine dependent, because it works for Python's lxml module. According to the specification, it should convert the value of the first node in the set to a string and act on that. See also XPath contains(text(),'some string') doesn't work when used with node with more than one Text subnode)
//a[text() = 'Ask Question'] is matching any a elements which contain a text node which equals Ask Question.
//a[text() = 'more text'] is matching any a elements which contain a text node which equals more text.
So both of these expressions match the same a element.
You can re-work your query to //a[text()[contains(., 'Ask Question')]] so that the contains method will only act on a single text node at a time.
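To make that concrete, a minimal sketch with lxml (my assumption; lxml implements XPath 1.0, so no cardinality error is raised):
from lxml import etree

a = etree.fromstring("<a>Ask Question<other/>more text</a>")

# Each text node is tested separately, so both substrings are found:
print(len(a.xpath("//a[text()[contains(., 'Ask Question')]]")))  # 1
print(len(a.xpath("//a[text()[contains(., 'more text')]]")))     # 1
# whereas //a[contains(text(), 'more text')] selects nothing in XPath 1.0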

Why won't my XPath select link/button based on its label text?

<a href="javascript:void(0)" title="home">
<span class="menu_icon">Maybe more text here</span>
Home
</a>
So for the above code, when I write //a as the XPath, it gets highlighted, but when I write //a[contains(text(), 'Home')], it does not get highlighted. I think this is simple and should have worked.
Where's my mistake?
Other answers have missed the actual problem here:
Yes, you could match on @title instead, but that's not why OP's XPath is failing where it may have worked previously.
Yes, XML and XPath are case sensitive, so Home is not the same as home, but there is a Home text node as a child of a, so OP is right to use Home if he doesn't trust @title to be present.
Real Problem
OP's XPath,
//a[contains(text(), 'Home')]
says to select all a elements whose first text node contains the substring Home. Yet, the first text node contains nothing but whitespace.
Explanation: text() selects all child text nodes of the context node, a. When contains() is given multiple nodes as its first argument, it takes the string value of the first node, but Home appears in the second text node, not the first.
Instead, OP should use this XPath,
//a[text()[contains(., 'Home')]]
which says to select all a elements with any text child whose string value contains the substring Home.
If there weren't surrounding whitespace, this XPath could be used to test for equality rather than substring containment:
//a[text()[.='Home']]
Or, with surrounding whitespace, this XPath could be used to trim it away:
//a[text()[normalize-space()= 'Home']]
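The failing expression and the two working variants can be checked quickly with lxml (assumed available), against the snippet from the question:
from lxml import html

link = html.fromstring(
    '<a href="javascript:void(0)" title="home">\n'
    '<span class="menu_icon">Maybe more text here</span>\n'
    'Home\n'
    '</a>'
)

print(len(link.xpath("//a[contains(text(), 'Home')]")))            # 0: the first text node is only whitespace
print(len(link.xpath("//a[text()[contains(., 'Home')]]")))         # 1: some text child contains 'Home'
print(len(link.xpath("//a[text()[normalize-space() = 'Home']]")))  # 1: whitespace trimmed, exact match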
See also:
Testing text() nodes vs string values in XPath
Why is XPath unclean constructed? Why is text() not needed in predicate?
XPath: difference between dot and text()
Yes, you are making 2 mistakes: you're writing Home with an uppercase H when you want to match home with a lowercase h. Also, you're trying to check the text content, when you want to check the "title" attribute. Correct those 2, and you get:
//a[contains(@title, 'home')]
However, if you want to match the exact string home, instead of any a that has home anywhere in the title attribute, use @zsbappa's code.
You can try this XPath. It just selects the element by its attribute:
//a[@title='home']

How to parse HTML/XML tags according to NOT conditions in [r]

Dearest StackOverflow homies,
I'm playing with HTML that was output by EverNote and need to parse the following:
Note Title
Note anchor (hyperlink identities of the notes themselves)
Note Creation Date
Note Content, and
Intra-notebook hyperlinks (the
links within the content of a note to another note's anchor)
According to examples by Duncan Temple Lang, author of the [r] XML package, and an SO answer by @jdharrison, I have been able to parse the Note Title, Note anchor, and Note Creation Dates with relative ease. For those who may be interested, the commands to do so are
require("XML")
rawHTML <- paste(readLines("EverNotebook.html"), collapse="\n") #Yes... this is noob code
doc = htmlTreeParse(rawHTML,useInternalNodes=T)
#Get Note Titles
html.titles<-xpathApply(doc, "//h1", xmlValue)
#Get Note Title Anchors
html.tAnchors<-xpathApply(doc, "//a[@name]", xmlGetAttr, "name")
#Get Note Creation Date
html.Dates<-xpathApply(doc, "//table[@bgcolor]/tr/td/i", xmlValue)
Here's a fiddle of an example HTML EverNote export.
I'm stuck on parsing 1. Note Contents and 2. Intra-notebook hyperlinks.
Taking a closer look at the code it is apparent the solution for the first part is to return every upper-most* div that does NOT include a table with attribute bgcolor="#D4DDE5." How is this accomplished?
Duncan says that it is possible to use XPath to parse XML according to NOT conditions:
"It allows us to express things such as "find me all nodes named a" or "find me all nodes named a that have no attribute named b" or "nodes a that >have an attribute b equal to 'bob'" or "find me all nodes a which have c as >an ancestor node"
However he does not go on to describe how the XML package can parse exclusions... so I'm stuck there.
Addressing the second part, consider the format of anchors to other notes in the same notebook:
<a href="#13178">
The goal with these is to procure their number and yet this is difficult because they are solely distinguished from www links by the # prefix. Information on how to parse for these particular anchors via partial matching of their value (in this case #) is sparse - maybe even requiring grep(). How can one use the XML package to parse for these special hrefs? I describe both problems here since it's possible a solution to the first part may aid the second... but perhaps I'm wrong. Any advice?
UPDATE 1
By upper-most div I mean to say outer-most div. The contents of every note in an EverNote HTML export are within the DOM's outer-most divs. Thus the interest is to return every outer-most div that does NOT include a table with attribute bgcolor="#D4DDE5".
"....to return every upper-most div that does NOT include a table with attribute bgcolor="#D4DDE5." How is this accomplished?"
One possible way, ignoring 'upper-most' as I don't know exactly how you would define it:
//div[not(table[@bgcolor='#D4DDE5'])]
The above XPath reads: select all <div> elements not having a child <table> element whose bgcolor attribute equals #D4DDE5.
I'm not sure what you mean by "parse" in the 2nd part of the question. If you simply want to get all of those links having a special href, you can partially match the href attribute using starts-with() or contains():
//a[starts-with(#href, '#')]
//a[contains(#href, '#')]
UPDATE:
Taking the "outer-most" div into consideration:
//div[not(table[@bgcolor='#D4DDE5']) and not(ancestor::div)]
Side note: I don't know exactly how XPath not() is defined, but if it works like negation in general (this worked, as confirmed by OP in the comment below), you can apply one of De Morgan's laws:
"not (A or B)" is the same as "(not A) and (not B)".
so that the updated XPath can be slightly simplified to:
//div[not(table[@bgcolor='#D4DDE5'] or ancestor::div)]
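As a hedged sanity check with lxml (my assumption, not part of the R workflow above), both forms select the same outer-most divs on a made-up miniature of the export structure, and the starts-with() test picks out the intra-notebook anchors:
from lxml import html

notes = html.fromstring(
    '<html><body>'
    '<div><table bgcolor="#D4DDE5"><tr><td>note header</td></tr></table></div>'
    '<div><p>content of note 1</p><div>nested div</div></div>'
    '<div><p>content of note 2</p></div>'
    '</body></html>'
)

v1 = notes.xpath("//div[not(table[@bgcolor='#D4DDE5']) and not(ancestor::div)]")
v2 = notes.xpath("//div[not(table[@bgcolor='#D4DDE5'] or ancestor::div)]")
print(len(v1), len(v2), v1 == v2)   # 2 2 True

links = html.fromstring('<div><a href="#13178">note link</a><a href="http://example.com/">www link</a></div>')
print(links.xpath("//a[starts-with(@href, '#')]/@href"))   # ['#13178']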

XPath //div[contains(text(), 'string')] fails to select divs containing 'string'

This is the HTML code:
<div> <span></span> Elangovan </div>
I want to write an XPath for the div based on its contained text. I tried
//div[contains(text(),'Elangovan')]
but this is not working.
Replace text() with string():
//div[contains(string(), "Elangovan")]
Or, you can check that span's following text sibling contains the text:
//div[contains(span/following-sibling::text(), "Elangovan")]
Also see:
Difference between text() and string()
Alternatively to alecxe's correct answer (+1), the following slightly simpler and somewhat more idiomatic XPath will work the same way:
//div[contains(., "Elangovan")]
The reason that your original XPath with text() does not work is that text() will select all text node children of div. However, contains() expects a string in its first argument, and when given a node set of text nodes, it only uses the first one. Here, the first text node contains whitespace, not the sought after string, so the test fails. With the implicit . or the explicit string() first argument, all text node descendants are concatenated together before performing the contains() test, so the test passes.
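A minimal sketch of the failing expression and the working variants with lxml (assumed available), using the markup from the question:
from lxml import html

div = html.fromstring("<div> <span></span> Elangovan </div>")

print(len(div.xpath("//div[contains(text(), 'Elangovan')]")))    # 0: first text node is just a space
print(len(div.xpath("//div[contains(string(), 'Elangovan')]")))  # 1
print(len(div.xpath("//div[contains(., 'Elangovan')]")))         # 1
print(len(div.xpath("//div[contains(span/following-sibling::text(), 'Elangovan')]")))  # 1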
To make @kjhughes's already good answer just a little more precise, what you're really asking for is a way to look for substrings in the div's string-value:
For every type of node, there is a way of determining a string-value
for a node of that type. For some types of node, the string-value is
part of the node; for other types of node, the string-value is
computed from the string-value of descendant nodes.
Both the context node (. or the div itself) and the set of nodes returned by text() -- or any other argument! -- are first converted to strings when passed to contains. It's just that they're converted in different ways, because one refers to a single element and the other refers to a node-set.
A single element's string-value is the concatenation of the string-values of all its text node descendants. A node-set's string-value, on the other hand, is the string-value of the node in the set that is first in document order.
So the real difference is in what you're converting to a string and how that conversion takes place.