I'm trying to write a crawler that gets raw HTML data, finds the title, price, update date, photo, etc. fields, and writes them to a database. This is the classic, old way to crawl data.
I think I can do this job in another way.
If I crawl all the pages on the web site (maybe more than 1000) and compare them all, I can find the specific areas.
I mean the HTML tags will always be the same; only specific areas like the title, image, etc. will change.
So, what is the best way to determine changed areas?
compare them all I can find the specific areas
what is the best way to determine changed areas?
In your question you describe a scraping/crawling approach of comparing pages' parts and extracting the data from specific areas. This smells like a regex approach. Do not use it; it is a very inefficient approach. Rather, use XPath, operating on XML structures.
So, keep it simple:
Get the HTML
Parse it into a DOM
Turn the DOM into valid XML
Apply XPath queries to the XML
Believe me, XML libraries are well able to handle huge structures (including stray HTML tags) and to traverse them. A classic example of using XPath is in this post of mine.
To determine the data node paths, just use the web inspector tools (F12 in Chrome and IE, Ctrl+Shift+I in Firefox) to see the HTML tags containing the useful info.
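Here is a minimal sketch of those steps in browser JavaScript; the URL and the XPath expression are hypothetical placeholders, and cross-origin restrictions are ignored for brevity:

    async function scrapeTitle() {
      // Get the HTML (a made-up product page).
      const response = await fetch('https://example.com/item/1');
      const html = await response.text();

      // Make it a DOM; the browser's parser tolerates broken markup.
      const doc = new DOMParser().parseFromString(html, 'text/html');

      // Apply an XPath query; the path comes from inspecting the page (F12).
      const result = doc.evaluate(
        '//h1[@class="title"]',
        doc,
        null,
        XPathResult.FIRST_ORDERED_NODE_TYPE,
        null
      );
      return result.singleNodeValue ? result.singleNodeValue.textContent.trim() : null;
    }

    scrapeTitle().then(title => console.log(title));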
Related
I'm having trouble wrapping my head around using XML as a complement to HTML. I know what they are used for, but I don't quite understand how to use them together.
I know that you can use JavaScript to convert an XML file to HTML, but I don't get how that's going to do the trick. How would I be able to style this HTML-file?
I have a template form which I want to be accessible on a server and for which I want to enable edits. Once edited, I want to save the edits to a separate file, so that the template is still available. (Just so you guys have a little bit of background regarding what I need this for.)
After a lot of research I came to the conclusion that I would need to use XML, as I will have to store and transport data.
Could anyone explain in more detail how exactly XML can be used as a complement to HTML?
If you need more details or information please let me know. I did do a lot of research and I read the other posts regarding how to convert XML to HTML with JavaScript, but that doesn't answer my question about how EXACTLY they complement each other.
I guess my problem here is that I have yet to manage to wrap my head around the concept.
XML is related to HTML, as it uses the same magic characters for its markup and the same logic for where to put the data.
The characters <> are used to separate the markups from the content.
The character & together with an entity name like &lt; is used to encode characters which would otherwise lead to trouble
elements can contain attributes like <someElement someAttribute="attr value">
elements can contain text or sub elements
The big differences are that XML leaves you absolutely free in how you name your elements and attributes, while HTML relies on dedicated names (like <body>), and that XML is absolutely strict about structure while HTML allows a lot (like unclosed tags).
As a middle ground there is XHTML, which is as strict as XML but sticks to the rules of HTML.
It is almost impossible to read HTML as XML, but you can easily create XML which is taken by any browser as a valid web page.
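A small sketch of that strictness difference, using the browser's own parsers on a made-up fragment:

    // The same fragment parses fine as HTML but is rejected as XML,
    // because of the unclosed tag.
    const fragment = '<p>an unclosed paragraph';

    const asHtml = new DOMParser().parseFromString(fragment, 'text/html');
    console.log(asHtml.body.innerHTML);   // the HTML parser silently closes the <p>

    const asXml = new DOMParser().parseFromString(fragment, 'application/xml');
    console.log(asXml.getElementsByTagName('parsererror').length > 0);   // true: XML refuses it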
Your issue cries out for XSLT. This is a method to transform given XML into a new format. It allows you, for example, to export your data as XML and create a nice web page from it. Different XSLT stylesheets will present the same data in different ways.
There are several online tools to test this feature. You might have a look here.
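To give a concrete feel for it, here is a rough browser-side sketch of such a transformation, assuming hypothetical files template.xml (your data) and template.xsl (the presentation rules):

    // Client-side XSLT with the standard XSLTProcessor API.
    // template.xml and template.xsl are hypothetical file names.
    async function loadAsXml(url) {
      const text = await (await fetch(url)).text();
      return new DOMParser().parseFromString(text, 'application/xml');
    }

    async function renderTemplate() {
      const [xml, xsl] = await Promise.all([
        loadAsXml('template.xml'),
        loadAsXml('template.xsl'),
      ]);

      const processor = new XSLTProcessor();
      processor.importStylesheet(xsl);                      // register the transformation rules
      const fragment = processor.transformToFragment(xml, document);
      document.body.appendChild(fragment);                  // show the generated HTML
    }

    renderTemplate();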
Your statement "After a lot of research I came to the conclusion that I would need to use XML, as I will have to store and transport data" is not at all clear... How you send data (to a web application), and the way you send the (manipulated) data back, is not bound to XML. This is very often done with JSON, using JavaScript to read, edit and send it back.
XML -> XSLT -> HTML is often used to create (rather static) reports for a web viewer.
Let's say I have several HTML pages from unrelated websites, but that contain the same overall information. I want to extract that information in a flexible manner, i.e. I want to only have to write a small number of data extractors for all of the pages (ideally, one). Say the fields are (to use a blog example) author, date, title, text. The classes of the HTML tags that denote these could be totally different for each page, but still display on the page in roughly the same way. For example, take this post from CNN and this post from Gawker. Both contain the same information - the information that I want - somewhere on the page when it is actually displayed. Is there a nice way to extract that data? Writing separate extractors is an option, but not a good one; there are about a thousand styles of documents in the dataset I want to use.
The only way you can do that is by finding a common element in all of those websites (e.g. they share the same DOM structure, or have the same ID, or are preceded by the same content in a previous tag like an <h1>).
Otherwise, you need to write different rules or regular expressions for each case.
Unless, of course, you write an algorithm so intelligent that it is capable of recognizing the content's intention/meaning even with different HTML, which is neither simple nor quick to write.
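As a hedged sketch of that "common element or per-case rules" idea, one workable pattern is a small table of fallback selectors per field and a loop that takes the first match; every selector below is invented and would need real entries per site:

    // Try a short list of selector rules per field; the first match wins.
    const RULES = {
      title:  ['h1.headline', 'h1.post-title', 'meta[property="og:title"]'],
      author: ['.byline a', 'span.author', 'meta[name="author"]'],
      date:   ['time[datetime]', 'meta[property="article:published_time"]'],
    };

    function extract(doc) {
      const record = {};
      for (const [field, selectors] of Object.entries(RULES)) {
        for (const sel of selectors) {
          const node = doc.querySelector(sel);
          if (node) {
            // Meta tags carry the value in "content", <time> in "datetime",
            // everything else in its text.
            record[field] = node.content || node.getAttribute('datetime') || node.textContent.trim();
            break;
          }
        }
      }
      return record;
    }

    console.log(extract(document));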
Is there an elegant way to get the computed style for each DOM node in a web page, for a large number of files, in order to compare style data for similar nodes across those files?
I'm working on a large number of HTML files (> 500) containing pretty broken HTML from MS FrontPage, trying to extract style data and convert it to semantic markup. I managed to do this using regex up to a certain point, but now it's become too complex. I learned that it's a bad idea to parse HTML using regex in the first place, so I'm trying to find a way to have the browser parse the HTML and give me the computed style for each node on the page.
I know I can access the DOM and get the computed style for each node using JavaScript, but I can only do this for one file at a time, and there is no easy way to compare this data across several files, or is there? If I'm not mistaken, it's not possible to dump data from JavaScript to a file. What alternatives would there be?
(BTW. I've tried to use HTMLTidy, but the HTML is so borked that it crashes.)
You could try using this: https://github.com/peol/jquery-computed-style
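If you would rather avoid jQuery, a rough console-side sketch like the following collects a few computed properties per element and offers them as a JSON download, which gives you something to diff across files; the property list and the file name are only examples:

    // Collect a few computed properties per element and offer them as a JSON file.
    const PROPS = ['font-weight', 'font-style', 'font-size', 'color', 'text-align'];

    const dump = [...document.querySelectorAll('body *')].map(el => {
      const style = getComputedStyle(el);
      const entry = { tag: el.tagName.toLowerCase() };
      for (const p of PROPS) entry[p] = style.getPropertyValue(p);
      return entry;
    });

    // One way to get data out of the page: a Blob offered as a download.
    const blob = new Blob([JSON.stringify(dump, null, 2)], { type: 'application/json' });
    const link = document.createElement('a');
    link.href = URL.createObjectURL(blob);
    link.download = 'computed-styles.json';
    link.click();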
When writing filters for the Firefox Add-On 'Adblock Plus' you can write rules to completely remove certain HTML elements from the page, but the filtering criteria are in fact limited to a handful of things, like class and ID names and attribute values.
What I was hoping for is, say, a Firefox add-on which would pass the HTML for a page to some arbitrary process you specify, where this process could reconstitute the HTML for the entire page in any arbitrary way and then have the browser display that. Is there a Firefox add-on that allows this, or is this sort of operation commonly accomplished by some entirely different but well-known means (and perhaps not browser-specific)?
Wouldn't this allow you to augment pages coming from some website to your browser with arbitrary new features, maybe from an entirely different website?
You are looking for Greasemonkey.
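For illustration, a minimal Greasemonkey-style user script might look like the sketch below; the @match pattern, the .ad selector and the banner are all hypothetical:

    // ==UserScript==
    // @name     Example page rewriter
    // @match    https://example.com/*
    // @grant    none
    // ==/UserScript==
    // Runs in the page after it loads and may rewrite any part of the DOM.
    (function () {
      'use strict';
      // Remove every element carrying a hypothetical "ad" class...
      document.querySelectorAll('.ad').forEach(el => el.remove());

      // ...and inject a banner produced by our own logic.
      const banner = document.createElement('div');
      banner.textContent = 'Page rewritten by user script';
      document.body.prepend(banner);
    })();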
From time to time, I see web pages whose content is solely written in XML (not HTML or XHTML). These pages usually have some style sheets (either XSLT or CSS) attached to them which makes them look like any other ordinary web page.
My question is, what are the advantages of such an approach (if any), and why would anyone choose to work this way?
EDIT: If this is a good thing, why is it not widespread?
EDIT 2: Thanks everyone for the great responses. They really enlightened me. I also found this question whose content is also related.
It's easier to generate it programmatically and to reuse it for purposes other than displaying it as a web page.
Update:
EDIT: If this is a good thing, why is it not widespread?
Not everyone needs to generate it programmatically or reuse it for purposes other than displaying it as a web page. In that case it's easier to use plain HTML.
One possible advantage would be using the page's data in something other than a web browser; that would (presumably) be easier to do if the page's content were well-formed XML. Of course, in theory a well-formed, semantic XHTML page should be nearly as easy to parse.
It can also be easier to generate XML instead of XHTML, depending on the data source.
When you are getting XML data into your system and you are supposed to present that XML data, it is much easier to write some XSLT for the XML than to parse it with some sort of parser and then present the data.
That can be a valid point for using XML instead of XHTML or HTML.
Update
To answer your question on why this is not widespread: XSLT is tedious and hard to work with, specifically XPath, which some people find quite difficult to use.
Those pages use XSLT to get rendered on the client side. Not every browser (especially older ones) supports rendering XML + XSLT. XML can, however, be used server-side as a template and be transformed to HTML by the application running on the server. I personally don't see any advantages to this approach.
There are a lot more web pages that are written solely in XML than you know. You're only seeing the ones that do the XSLT transformation on the client side. Server-side transformation of XML is not at all unusual, because there's a plethora of things that produce data in XML, and transforming XML to HTML in XSLT is straightforward. You'll never know this is happening if you just look at the HTML, which bears no signs of having been generated via XSLT.
Personally, I don't understand it either, though one of the biggest problems is support in IE. I created a skeleton e-commerce site serving XML, transformed by XSLT and styled using CSS. I sorely missed the ability to use XLink and other wonderful XML features. It's also nice to be able to tag the data for what it is: I used a 'menu' tag for the restaurant menus, 'price' tags for prices, and so on. If a user clicked on a link to change menus, all I had to do was send the name of the item, the price and the description instead of the complete page. IIRC, a menu page of 4K or more of HTML came down to only 200 bytes of sent data.
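A loose sketch of that "send only the data" idea in plain JavaScript, assuming a hypothetical /menu/lunch.xml fragment with item, name, price and description elements (the site described above did the actual rendering with XSLT instead):

    // Fetch a small XML fragment and rebuild only the menu area.
    // The endpoint and element names are assumptions, not the original site's.
    async function loadMenu(name) {
      const text = await (await fetch(`/menu/${name}.xml`)).text();
      const xml = new DOMParser().parseFromString(text, 'application/xml');

      const list = document.querySelector('#menu');
      list.textContent = '';                      // clear the previous menu
      for (const item of xml.getElementsByTagName('item')) {
        const get = tag => (item.getElementsByTagName(tag)[0] || {}).textContent || '';
        const li = document.createElement('li');
        li.textContent = `${get('name')} - ${get('price')}: ${get('description')}`;
        list.appendChild(li);
      }
    }

    loadMenu('lunch');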
As for the "one error makes everything crash in XML" type of comments: the same is true of any programming language, so proper coding should be no bother for programmers and careful HTML/CSS types.
Before anyone says that what I did was actually XHTML... no. I served XML. I did call up XHTML namespaces when needed for links, images and other HTML-type things, but only when necessary.