I am writing an Android app that will read some info from a website and display it on the app's screen. I am using the Jsoup library to get the info as a string. First, here's what the website's HTML looks like:
<strong>
Now is the time<br />
For all good men<br />
To come to the aid<br />
Of their country<br />
</strong>
Here's how I'm retrieving and trying to parse the text:
Document document = Jsoup.connect(WEBSITE_URL).get();
String resultAggregator = "";
Elements nodePhysDon = document.select("strong");
// check results
if (nodePhysDon.size() > 0) {
    // get value
    String donateResult = nodePhysDon.get(0).text();
    resultAggregator = donateResult;
}
if (!resultAggregator.isEmpty()) {
    // split resultAggregator into an array, breaking on br /
    String[] donateItems = resultAggregator.split("<br />");
}
But then donateItems[0] is not just "Now is the time"; it's all four strings put together. I have also tried it without the space between "br" and "/", and get the same result. If I do resultAggregator.split("br"); then donateItems[0] is just the first word: "Now".
I suspect the problem is that Jsoup's select method is stripping the tags out?
Any suggestions? I can't change the website's HTML; I have to work with it as is.
Try this:
// check results
if (nodePhysDon.size() > 0) {
    // use toString() to get the selected block with its tags included
    donateResult = nodePhysDon.get(0).toString();
    resultAggregator = donateResult;
}
if (!resultAggregator.isEmpty()) {
    // remove the <strong> and </strong> tags
    resultAggregator = resultAggregator.replace("<strong>", "");
    resultAggregator = resultAggregator.replace("</strong>", "");
    // then split on <br>
    String[] donateItems = resultAggregator.split("<br>");
}
Make sure to split on <br> and not <br />: jsoup normalizes the markup when it serializes it back out, so the self-closing <br /> in the source becomes <br> in the toString() output.
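As a side note, if you'd rather skip the manual tag stripping, element.html() returns just the inner markup of the selected element. A minimal sketch of that variant (trimming each piece, since jsoup's pretty-printer may leave whitespace around the <br> tags):

Element strong = document.select("strong").first();
if (strong != null) {
    // html() gives the contents without the outer <strong></strong> wrapper
    String[] donateItems = strong.html().split("<br>");
    for (int i = 0; i < donateItems.length; i++) {
        donateItems[i] = donateItems[i].trim();
    }
}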
I have this regular expression:
(\S+)=["']?((?:.(?!["']?\s+(?:\S+)=|[>"']))+.)["']?
This regex extracts the attribute name and its value from an HTML string. Everything works fine, but when a value is a single character the regex captures the left-hand quote along with the character.
This is my string:
<select title="Campo" id="6:7" style="width: auto; cursor: pointer;" runat="server" controltype="DropDownList" column="Dummy_6"><option value="0">Value:0</option><option selected="selected" value='1'>Value:1Selected!</option></select>
I don't know how to modify this regex so that it captures the value correctly when it is only one character long.
You should be using an HTML parser for this task; regex cannot handle HTML properly.
To collect all tag names and their attribute names and values, I recommend the following HtmlAgilityPack-based solution:
var tags = new List<string>();
var result = new List<KeyValuePair<string, string>>();
HtmlAgilityPack.HtmlDocument hap;
Uri uriResult;

if (Uri.TryCreate(html, UriKind.Absolute, out uriResult) && uriResult.Scheme == Uri.UriSchemeHttp)
{
    // html is a URL
    var doc = new HtmlAgilityPack.HtmlWeb();
    hap = doc.Load(uriResult.AbsoluteUri);
}
else
{
    // html is a string
    hap = new HtmlAgilityPack.HtmlDocument();
    hap.LoadHtml(html);
}

var nodes = hap.DocumentNode.Descendants()
    .Where(p => p.NodeType == HtmlAgilityPack.HtmlNodeType.Element);
foreach (var node in nodes)
{
    tags.Add(node.Name);
    foreach (var attribute in node.Attributes)
        result.Add(new KeyValuePair<string, string>(attribute.Name, attribute.Value));
}
I think you're trying something overly intricate and, ultimately, incorrect, with your regex.
If you want to naively parse an HTML attribute, this regex should do the trick:
(\S+)=(?:"([^"]+)"|'([^']+)')
Note that it parses single-quoted and double-quoted values in different legs of the regex. Your regex would find that in the following code:
<foo bar='fu"bar'>
the attribute's value is fu when it really is fu"bar.
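For completeness, here's a quick, hypothetical harness (shown in Java, with the markup assumed to be in a string named input) illustrating how the two quoting legs land in different capture groups:

import java.util.regex.Matcher;
import java.util.regex.Pattern;

Pattern attr = Pattern.compile("(\\S+)=(?:\"([^\"]+)\"|'([^']+)')");
Matcher m = attr.matcher(input);
while (m.find()) {
    String name = m.group(1);
    // group 2 holds double-quoted values, group 3 holds single-quoted ones
    String value = m.group(2) != null ? m.group(2) : m.group(3);
    System.out.println(name + " = " + value);
}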
There are better ways to parse HTML, but here's my take on your question anyway.
(?<attr>(?<=\s).+?(?==['"]))|(?<val>(?<=\s.+?=['"]).+?(?=['"]))
Without capture group names:
((?<=\s).+?(?==['"]))|((?<=\s.+?=['"]).+?(?=['"]))
quotes included:
((?<=\s).+?(?==['"]))|((?<=\s.+?=)['"].+?['"])
Update: For more in-depth usage, do give HTML Agility Pack a try.
I'm working on an HTML page highlighter project but ran into problems when a search term matches the name of an HTML tag, attribute, or class/ID; e.g., if the search terms are "media OR class OR content" then my find-and-replace would do this:
<link href="/css/DocHighlighter.css" <span style='background-color:yellow;font-weight:bold;'>media</span>="all" rel="stylesheet" type="text/css">
<div <span style='background-color:yellow;font-weight:bold;'>class</span>="container">
I'm using Lucene for highlighting and my current code (sort of):
InputStreamReader xmlReader = new InputStreamReader(xmlConn.getInputStream(), "UTF-8");
Highlighter hl = null;
if (searchTerms != null && !searchTerms.isEmpty()) {
    QueryScorer qryScore = new QueryScorer(qp.parse(searchTerms));
    hl = new Highlighter(new SimpleHTMLFormatter(hlStart, hlEnd), qryScore);
}
if (xmlReader != null) {
    BufferedReader br = new BufferedReader(xmlReader);
    String inputLine;
    while ((inputLine = br.readLine()) != null) {
        String tmp = inputLine.trim();
        StringReader strReader = new StringReader(tmp);
        HTMLStripCharFilter htm = new HTMLStripCharFilter(strReader.markSupported() ? strReader : new BufferedReader(strReader));
        String tHL = hl.getBestFragment(analyzer, "", htm);
        tmp = (tHL == null ? tmp : tHL);
        xmlDoc += tmp;
    }
    br.close();
}
As you can see (if you understand Lucene highlighting), this does an indiscriminate find-and-replace. Since my document will be HTML and the search terms are dictated by users, there is no way for me to restrict the matching to certain elements or tags. Also, since the find-and-replace basically loops and appends the HTML to a string (the return type of the method), I have to keep all HTML tags and values in place and in order. I've tried using Jsoup to loop through the page, but it handles the HTML as one big result. I also tried TagSoup to clean up the broken HTML this causes, but it doesn't work correctly. Does anyone know how to loop through the elements and text nodes (data values) of an HTML document?
I've been having the most luck with this:
StringBuilder sb = new StringBuilder();
sb.append("<?xml version=\"1.0\" encoding=\"UTF-8\"?><!DOCTYPE html>");
Document doc = Jsoup.parse(txt.getResult());
Elements elements = doc.getAllElements();
for (Element e : elements) {
    if (!(e.tagName().equalsIgnoreCase("#root"))) {
        sb.append("<" + e.tagName() + e.attributes() + ">" + e.ownText() + "\n");
    } // end if
} // end for
return sb;
The one snag I still hit is that the nesting isn't always "repaired" properly, but it's still reasonably close. I'm still working on this.
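If rebuilding the markup stays too lossy, another angle (a rough sketch, assuming a single search term held in term and the same highlight markup) is to leave the document structure alone and only rewrite jsoup's text nodes, which keeps tag names and attribute values out of the find-and-replace entirely:

// needs org.jsoup.nodes.TextNode and java.util.regex.Pattern imports
Document doc = Jsoup.parse(txt.getResult());
for (Element el : doc.body().select("*")) {
    // textNodes() returns a copy, so it's safe to modify children while looping
    for (TextNode tn : el.textNodes()) {
        String text = tn.text();
        if (text.toLowerCase().contains(term.toLowerCase())) {
            String highlighted = text.replaceAll("(?i)(" + Pattern.quote(term) + ")",
                    "<span style='background-color:yellow;font-weight:bold;'>$1</span>");
            tn.after(highlighted); // parsed and inserted as real nodes after the text node
            tn.remove();           // drop the original plain-text node
        }
    }
}
String highlightedHtml = doc.body().html();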
I am downloading a web page and I am trying to extract some values from it.
The parts of the page that I am interested in are of this type:
<a data-track=\"something\" href=\"someurl\" title=\"Heaven\"><img src=\"somesource.jpg\" /></a>
and I need to extract the href (someurl) value. Note that there are multiple entries like the one above in the HTML string that I have and thus I will use a list to store all the URLs that I extract from the string.
This is what I've tried so far:
QString html_str=myfile();
QRegExp regex("<a data-track\\=\"something\" href\\=\".*(?=\" title)");
if(regex.indexIn(html_str) != -1){
QStringList list;
QString str;
list = regex.capturedTexts();
foreach(str,list)
qDebug() << str.remove("<a data-track=\"something\" href=\"");
}
With the above code I get only one occurrence (list.count() == 1), which contains the whole HTML string from the first occurrence of someurl till the end of the file, without the <a data-track="something" href=" prefix, which has been removed.
I'd do it like this (note that the URL part is matched with [^"]* so it stops at the closing quote instead of greedily running to the last " title; double-check the regex against your data):
QRegExp regex("<a data-track=\"something\" href=\"[^\"]*(?=\" title)");
if (regex.indexIn(html_str) != -1)
    qDebug() << regex.cap(0).remove("<a data-track=\"something\" href=\"");
You can use a while loop to advance the position within the HTML string:
int pos = 0;
while ((pos = regex.indexIn(htmlContent, pos)) != -1) { // find each successive match
    QStringList list = regex.capturedTexts();
    foreach (QString url, list) {
        // do something with url
    }
    pos += regex.matchedLength(); // continue searching after this match
}
I am looking for a way to replace keywords within an HTML string with a variable. At the moment I am using the following example.
returnString = Replace(message, "[CustomerName]", customerName, CompareMethod.Text)
The above works fine if the HTML formatting wraps the whole keyword, e.g.
<b>[CustomerName]</b>
However, if the formatting splits the keyword, the string is not found and thus not replaced, e.g.
<b>[Customer</b>Name]
The formatting of the string is out of my control and isn't predictable. With this in mind, what is the best approach to finding a keyword within an HTML string?
Try using a regular expression. You can build and test your expressions here; I used this site and it works well:
http://regex-test.com/validate/javascript/js_match
Use the textContent property instead of innerHTML if you're using JavaScript to access the content. That should strip all the tags from the content and give you back a clean text representation of the customer's name.
For example, if the content looks like this:
<div id="name">
<b>[Customer</b>Name]
</div>
Then accessing its textContent property gives:
var name = document.getElementById("name").textContent;
// name is "[CustomerName]" (plus surrounding whitespace), with the tags stripped
which should be easy to process. Do a regex search now if you need to.
Edit: Since you're doing this processing on the server side, process the XML recursively and collect the text elements of each node. Since I'm not big on VB.Net, here's some pseudocode:
getNodeText(node) {
    text = ""
    for each node.children as child {
        if child.type == TextNode {
            text += child.text
        }
        else {
            text += getNodeText(child)
        }
    }
    return text
}

myXml = xml.load(<html>);
print getNodeText(myXml);
And then replace or whatever there is to be done!
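For reference, a concrete rendering of that pseudocode (written in Java against the W3C DOM, assuming the markup has already been parsed into a document) might look like this:

// Recursively concatenates the text content of a node, skipping the tags themselves.
static String getNodeText(org.w3c.dom.Node node) {
    StringBuilder text = new StringBuilder();
    org.w3c.dom.NodeList children = node.getChildNodes();
    for (int i = 0; i < children.getLength(); i++) {
        org.w3c.dom.Node child = children.item(i);
        if (child.getNodeType() == org.w3c.dom.Node.TEXT_NODE) {
            text.append(child.getNodeValue());
        } else {
            text.append(getNodeText(child));
        }
    }
    return text.toString();
}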
I have found what I believe is a solution to this issue. Well, in my scenario it is working.
The HTML input has been tweaked to place each custom field or keyword within a div with a set id. I loop through all of the elements within the HTML string using mshtml and set the inner text to the correct value when a match is found.
e.g.
Function ReplaceDetails(ByVal message As String, ByVal customerName As String) As String
    Dim returnString As String = String.Empty
    Dim doc As IHTMLDocument2 = New HTMLDocument
    doc.write(message)
    doc.close()

    For Each el As IHTMLElement In doc.body.all
        If (el.id = "Date") Then
            el.innerText = Now.ToShortDateString
        End If
        If (el.id = "CustomerName") Then
            el.innerText = customerName
        End If
    Next

    returnString = doc.body.innerHTML
    Return returnString
End Function
Thanks for all of the input. I'm glad to have a solution to the problem.
How would I use the HTML Agility Pack to get the first paragraph of text from the body of an HTML file? I'm building a Digg-style link submission tool and want to get the title and the first paragraph of text. The title is easy; any suggestions for how I might get the first paragraph of text from the body? I guess it could be within a P or a DIV depending on the page.
Is this HTML that you control? If so, you could give the p an id or a class and find it via
//p[@id='YOUR ID'] or //p[@class='YOUR CLASS']
EDIT:
Since you don't control the HTML, maybe the code below will work. It takes all the HtmlTextNodes and tries to find a grouping of text longer than the specified threshold. It's far from perfect but might get you going in the right direction.
String summary = FindSummary(page.DocumentNode);

private const int THRESHOLD = 50;

private String FindSummary(HtmlAgilityPack.HtmlNode node) {
    foreach (HtmlAgilityPack.HtmlNode childNode in node.ChildNodes) {
        if (childNode.GetType() == typeof(HtmlAgilityPack.HtmlTextNode)) {
            if (childNode.InnerText.Length >= THRESHOLD) {
                return childNode.InnerText;
            }
        }
        String summary = FindSummary(childNode);
        if (summary.Length >= THRESHOLD) {
            return summary;
        }
    }
    return String.Empty;
}
The Agility Pack uses XPath for querying the HTML, so once the document is loaded you just use a simple XPath statement. Something like...
HtmlDocument htmldoc = new HtmlDocument();
htmldoc.LoadHtml(content);
HtmlNodeCollection firstParagraph = htmldoc.DocumentNode.SelectNodes("(//p)[1]"); // (//p)[1] selects the first <p> in document order