The XPath selector response.xpath('//div[@class="chr-lot-header__bid-details"]//span[@class="chr-lot-header__value-field"]') returns an empty list in the Scrapy shell, while the same XPath selects the right HTML tag in the "Elements" tab of my Chrome browser.
Here's the website the XPath selector is intended for:
https://www.christies.com/en/lot/lot-5973059
The output I want the XPath selector to produce is "GBP 11,282,500".
I just checked the website you mentioned: the required information is loaded dynamically, which means it cannot be scraped directly. Scrapy only scrapes statically available data, not dynamically loaded data. To scrape dynamically loaded data, you either need to mimic a real browser, e.g. use Selenium or Playwright and integrate that library into your Scrapy code, or you can try to find the API calls in the Network tab through which the required data is fetched.
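As a rough sketch of the Network-tab approach: once you have spotted the XHR/fetch request that carries the bid data, you can replay it straight from the DevTools console before wiring it into a Scrapy request. The endpoint URL and the field name below are made up for illustration; read the real ones off the Network tab first.

// Hypothetical endpoint; copy the real request URL from the Network tab
fetch('https://www.christies.com/api/lots/5973059', {
    headers: { 'Accept': 'application/json' }
})
    .then(function (res) { return res.json(); })
    .then(function (data) {
        // 'priceRealised' is an assumed field name; inspect the real payload
        console.log(data.priceRealised);
    });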
Long-time automation developer here (just for context).
It's been bugging me for quite a while that the dev tools in Chrome used to find elements just don't seem to work the way I expect. Hopefully someone can point out what I'm doing wrong.
Looking at, say, this Sauce Labs page: https://saucelabs.com/blog/selenium-tips-finding-elements-by-their-inner-text-using-contains-a-css-pseudo-class
OK, that page has divs and anchors,
and indeed I can do find('a') or find('div'),
but why do I have a problem using classes or IDs?
The find() method refers to window.find(), a non-standard API for the browser's built-in Find function. It does not find web elements the way Selenium or Capybara do, so it does not parse its input as a selector.
You find elements with selectors in Chrome DevTools using document.querySelector() or document.querySelectorAll(). There are no special methods in Chrome DevTools for this, but the console does provide the $() and $$() aliases (respectively) to save you time and keystrokes.
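For example, in the DevTools console (note these aliases exist only there, and are shadowed if the page defines its own $, as jQuery pages do):

// $() is shorthand for document.querySelector(): first match or null
$('.foo');
// $$() is shorthand for document.querySelectorAll(), returned as an array
$$('div').length;
// there is also $x(), which evaluates an XPath expression against the page
$x('//a[contains(text(), "Selenium")]');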
You can use jQuery-style code in the Chrome console. For example, to find something with a class of "foo" you can write $('.foo'), and for an id of "bar" you write $('#bar').
You can read all about it here
Also, you can just google what you want, e.g. "jQuery how to find a div with id".
I am trying to parse the table from the URL (Google Finance) below
http://www.google.com/finance/historical?q=BOM:533278
I am trying to extract only the values in the Close column. But when I try the XPath
hd.DocumentNode.SelectSingleNode("//td[@class='rgt']")
I get the text of every node with a class attribute of 'rgt' concatenated into a single Node.InnerText.
I need the values one by one, not all at the same time. I must be doing something silly here. Thank you.
The actual XPath found using Firebug is as follows:
/html/body/div/div/div[3]/div[2]/div/div[2]/div[2]/div/form/div[2]/table/tbody/tr[2]/td[5]
But somehow, after the form tag, HtmlAgilityPack returns a null node. Never thought this would take so long to implement.
If you're using Firebug or any Firefox extension (like XPather) to obtain the XPath of the elements you need to parse, you might need to remove the tbody tags from the XPath: for example, .../table/tbody/tr[2]/td[5] becomes .../table/tr[2]/td[5] when run against the raw source.
Take a look at the following answer here on SO: Why does firebug add <tbody> to <table>?
If you're using HtmlAgilityPack, the XPath returned by Firebug or by any other tool related with Firefox may differ, because the HTML source you're parsing can be different from the HTML source in Firefox.
Sometimes it might be useful to open the same page in Internet Explorer 8 and, using Developer Tools (F12), do the same thing you're doing with Firebug; or, if not, use another tool like HAP Explorer, which can be downloaded from the HtmlAgilityPack page.
There are many ways to do it. Here is one solution, which is based on the Data td (the one with the 'lm' class):
HtmlAgilityPack.HtmlDocument doc = new HtmlAgilityPack.HtmlDocument();
... load the doc ...
foreach (HtmlNode node in doc.DocumentNode.SelectNodes("//td[@class='lm']/../td[5]"))
{
Console.WriteLine("node=" + node.InnerText);
}
XPath for the first cell in Close column is //div[#id='prices']/table/tbody/tr[2]/td[5] and for the second one it's //div[#id='prices']/table/tbody/tr[3]/td[5] and so on.
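If you want the whole Close column at once instead of cell by cell, you can drop the row index. A quick way to sanity-check the generalized XPath is in a console that provides the $x() helper (Firebug then, Chrome DevTools now); remember that against the raw source in HtmlAgilityPack you would also drop the tbody step:

// every 5th cell of every row in the prices table, i.e. the Close column;
// the tbody exists in the browser DOM even when absent from the raw source
$x("//div[@id='prices']/table/tbody/tr/td[5]").map(function (td) {
    return td.textContent.trim();
});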
I'm looking for a tool that will give me the proper generated source including DOM changes made by AJAX requests for input into W3's validator. I've tried the following methods:
Web Developer Toolbar - Generates source that is invalid according to the doctype (e.g. it removes the self-closing portion of tags). Loses the doctype declaration.
Firebug - Fixes potential flaws in the source (e.g. unclosed tags). Also loses the doctype declaration and injects its console, which is itself invalid HTML.
IE Developer Toolbar - Generates source that is invalid according to the doctype (e.g. it uppercases all tags, against the XHTML spec).
Highlight + View Selection Source - Frequently difficult to get the entire page; also excludes the doctype.
Is there any program or add-on out there that will give me the exact current version of the source, without fixing or changing it in some way? So far, Firebug seems the best, but I worry it may fix some of my mistakes.
Solution
It turns out there is no exact solution to what I wanted as Justin explained. The best solution seems to be to validate the source inside of Firebug's console, even though it will contain some errors caused by Firebug. I'd also like to thank Forgotten Semicolon for explaining why "View Generated Source" doesn't match the actual source. If I could mark 2 best answers, I would.
Justin is dead on. The key point here is that HTML is just a language for describing a document. Once the browser reads it, it's gone. Open tags, close tags, and formatting are all taken care of by the parser and then go away. Any tool that shows you HTML is generating it based on the contents of the document, so it will always be valid.
I had to explain this to another web developer once, and it took a little while for him to accept it.
You can try it for yourself in any JavaScript console:
el = document.createElement('div');
el.innerHTML = "<p>Some text<P>More text";
el.innerHTML; // <p>Some text</p><p>More text</p>
The un-closed tags and uppercase tag names are gone, because that HTML was parsed and discarded after the second line.
The right way to modify the document from JavaScript is with document methods (createElement, appendChild, setAttribute, etc.) and you'll observe that there's no reference to tags or HTML syntax in any of those functions. If you're using document.write, innerHTML, or other HTML-speaking calls to modify your pages, the only way to validate it is to catch what you're putting into them and validate that HTML separately.
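For instance, here are the two styles side by side; the 'target' id is arbitrary, just an element to hang the example on:

// HTML-speaking style: this string has to be validated separately,
// because the browser parses and then discards it
document.getElementById('target').innerHTML = '<a href="/home">Home</a>';

// document-method style: no HTML syntax involved, nothing extra to validate
var link = document.createElement('a');
link.setAttribute('href', '/home');
link.appendChild(document.createTextNode('Home'));
document.getElementById('target').appendChild(link);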
That said, the simplest way to get at the HTML representation of the document is this:
document.documentElement.innerHTML
[updating in response to more details in the edited question]
The problem you're running into is that, once a page is modified by AJAX requests, the current HTML exists only inside the browser's DOM; there's no longer any independent source HTML that you can validate other than what you can pull out of the DOM.
As you've observed, IE's DOM stores tags in upper case, fixes up unclosed tags, and makes lots of other alterations to the HTML it got originally. This is because browsers are generally very good at taking HTML with problems (e.g. unclosed tags) and fixing up those problems to display something useful to the user. Once the HTML has been canonicalized by IE, the original source HTML is essentially lost from the DOM's perspective, as far as I know.
Firefox most likely makes fewer of these changes, so Firebug is probably your better bet.
A final (and more labor-intensive) option may work for pages with simple AJAX alterations, e.g. fetching some HTML from the server and importing it into the page inside a particular element. In that case, you can use Fiddler or a similar tool to manually stitch together the original HTML with the AJAX HTML. This is probably more trouble than it's worth, and is error prone, but it's one more possibility.
[Original response to the original question]
Fiddler (http://www.fiddlertool.com/) is a free, browser-independent tool which works very well to fetch the exact HTML received by a browser. It shows you exact bytes on the wire as well as decoded/unzipped/etc content which you can feed into any HTML analysis tool. It also shows headers, timings, HTTP status, and lots of other good stuff.
You can also use fiddler to copy and rebuild requests if you want to test how a server responds to slightly different headers.
Fiddler works as a proxy server, sitting in between your browser and the website, and logs traffic going both ways.
I know this is an old post, but I just found this piece of gold. It's old (2006), but still works with IE9. I personally added a bookmark with it.
Just copy paste this in your browser's address bar:
javascript:void(window.open("javascript:document.open(\"text/plain\");document.write(opener.document.body.parentNode.outerHTML)"))
As for Firefox, the Web Developer Toolbar does the job. I usually use this, but sometimes some dirty third-party ASP.NET controls generate different markup based on the user agent...
EDIT
As Bryan pointed out in the comments, some browsers remove the javascript: part when you copy/paste into the URL bar. I just tested, and that's the case with IE10.
If you load the document in Chrome, the Developer|Elements view will show you the HTML as fiddled by your JS code. It's not directly HTML text and you have to open (unfold) any elements of interest, but you effectively get to inspect the generated HTML.
In the Web Developer Toolbar, have you tried the Tools -> Validate HTML or Tools -> Validate Local HTML options?
The Validate HTML option sends the url to the validator, which works well with publicly facing sites. The Validate Local HTML option sends the current page's HTML to the validator, which works well with pages behind a login, or those that aren't publicly accessible.
You may also want to try View Source Chart (also available as a Firefox add-on). An interesting note there:
Q. Why does View Source Chart change my XHTML tags to HTML tags?
A. It doesn't. The browser is making these changes; VSC merely displays what the browser has done with your code. Most common: self-closing tags lose their closing slash (/). See this article on Rendered Source for more information (archive.org).
Using the Firefox Web Developer Toolbar (https://addons.mozilla.org/en-US/firefox/addon/60)
Just go to View Source -> View Generated Source
I use it all the time for the exact same thing.
I had the same problem, and I found a solution here:
http://ubuntuincident.wordpress.com/2011/04/15/scraping-ajax-web-pages/
So, to use Crowbar, the tool from here:
http://simile.mit.edu/wiki/Crowbar (now (2015-12) 404s)
wayback machine link:
http://web.archive.org/web/20140421160451/http://simile.mit.edu/wiki/Crowbar
It gave me the faulty, invalid HTML.
This is an old question, and here's an old answer that once worked flawlessly for me for many years, but doesn't any more, at least not as of January 2016:
The "Generated Source" bookmarklet from SquareFree does exactly what you want; unlike the otherwise fine "old gold" from @Johnny5, it displays as source code (rather than being rendered normally by the browser, at least in the case of Google Chrome on Mac):
https://www.squarefree.com/bookmarklets/webdevel.html#generated_source
Unfortunately, it now behaves just like the "old gold" from @Johnny5: it does not show up as source code any more. Sorry.
In Firefox, just press Ctrl+A (select everything on the page), then right-click and choose "View Selection Source". This captures any changes made by JavaScript to the DOM.
alert(document.documentElement.outerHTML);
Check out "View Rendered Source" chrome extension:
https://chrome.google.com/webstore/detail/view-rendered-source/ejgngohbdedoabanmclafpkoogegdpob/
Why not type this in the URL bar?
javascript:alert(document.body.innerHTML)
In the Elements tab, right-click the html node > Copy > Copy element, then paste into an editor.
As has been mentioned above, once the source has been converted into a DOM tree, the original source no longer exists in the browser. Any changes you make will be to the DOM, not the source.
However, you can parse the modified DOM back into HTML, letting you see the "generated source".
In Chrome, open the developer tools and click the Elements tab.
Right-click the html element.
Choose Copy > Copy element.
Paste into an editor.
You can now see the current DOM as an HTML page.
This is not the full DOM
Note that the DOM cannot be fully represented by an HTML document, because the DOM has many more properties than HTML has attributes. However, this will do a reasonable job.
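A small illustration of that gap, which you can paste into any console:

var input = document.createElement('input');
input.setAttribute('value', 'initial'); // attribute: survives serialization
input.value = 'typed by the user';      // property: lives only in the DOM
input.outerHTML; // '<input value="initial">': the live value is not serialized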
I think the IE dev tools (F12) have View > Source > DOM (Page).
You would need to copy and paste the DOM and save it to send to the validator.
The only thing I found is the BetterSource extension for Safari; this will show you the manipulated source of the document. The only downside is that there's nothing remotely like it for Firefox.
The JavaScript snippet below will get you the complete AJAX-rendered generated source. It's browser independent. Enjoy :)
function outerHTML(node){
// IE and Chrome expose a native outerHTML property; older versions of
// Firefox do not, so fall back to serializing a clone via a temporary div
return node.outerHTML || (
function(n){
var div = document.createElement('div'), h;
div.appendChild( n.cloneNode(true) );
h = div.innerHTML;
div = null;
return h;
})(node);
}
var outerhtml = outerHTML(document.getElementsByTagName('html')[0]);
var node = document.doctype;
var doctypestring = "";
if (node)
{
    // rebuild the doctype declaration from the DocumentType node
    doctypestring = "<!DOCTYPE "
        + node.name
        + (node.publicId ? ' PUBLIC "' + node.publicId + '"' : '')
        + (!node.publicId && node.systemId ? ' SYSTEM' : '')
        + (node.systemId ? ' "' + node.systemId + '"' : '')
        + '>';
}
else
{
    // IE8 and below do not expose document.doctype (it is null there);
    // the doctype text is instead available on the first element of document.all
    doctypestring = document.all[0].text;
}
var generatedsource = doctypestring + outerhtml; // the complete generated source
I was able to solve a similar issue by logging the result of the AJAX call to the console. This was the HTML returned, and I could easily see any issues it had.
In the .done() function of my AJAX call, I added console.log(result) so I could see the HTML in the debugger console.
function GetReversals() {
$("#getReversalsLoadingButton").removeClass("d-none");
$("#getReversalsButton").addClass("d-none");
$.ajax({
url: '/Home/LookupReversals',
data: $("#LookupReversals").serialize(),
type: 'POST',
cache: false
}).done(function (result) {
$('#reversalResults').html(result);
console.log(result);
}).fail(function (jqXHR, textStatus, errorThrown) {
//alert("There was a problem getting results. Please try again. " + jqXHR.responseText + " | " + jqXHR.statusText);
$("#reversalResults").html("<div class='text-danger'>" + jqXHR.responseText + "</div>");
}).always(function () {
$("#getReversalsLoadingButton").addClass("d-none");
$("#getReversalsButton").removeClass("d-none");
});
}