I am working on a project where I want to scrape a page like this one, in order to get the city of origin. I tried to use the CSS selector ".type-12~ .type-12+ .type-12", but I do not get the text into R.
Link:
https://www.kickstarter.com/projects/1141096871/support-ctrl-shft/description
I use rvest and the read_html function.
However, it seems that the source has some scripts in it. Is there a way to scrape the website after the scripts have returned their results (as you see it in a browser)?
PS: I looked at similar questions but did not find the answer.
Code:
main.names <- read_html("https://www.kickstarter.com/projects/1141096871/support-ctrl-shft/description")
names1 <- main.names %>%        # feed `main.names` into the next step
  html_nodes("div.mb0-md") %>%  # get the CSS nodes
  html_text()                   # extract the text
You should not do it that way. They provide an API, which you can find here: https://status.kickstarter.com/api
Using APIs or AJAX/JSON calls is usually better, since:
1. The server isn't overloaded by your scraper visiting every link it can find and causing unnecessary traffic. That is bad for the speed of your program and bad for the servers of the site you are scraping.
2. You don't have to worry that a changed class name or id will break your code.
Especially the second point should interest you, since it can take hours to find out which class is no longer returning a value.
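As an illustration, consuming a JSON API usually takes only a few lines. Here is a minimal sketch in Python with requests; the endpoint below is only an assumption, so check the API documentation linked above for the real routes:
import requests

# Hypothetical endpoint -- consult the API docs linked above for real routes.
resp = requests.get("https://status.kickstarter.com/api/v2/status.json")
resp.raise_for_status()  # fail loudly on HTTP errors
print(resp.json())       # the parsed JSON payload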
But to answer your question:
When you use the right scraper you can find everything you want. What tools are you using? There are ways to get the data before the site is loaded or after. You can execute the JS on the site separately and find hidden content, or find things like display:none CSS classes...
It really depends on what you are using and how you use it.
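One concrete way to scrape after the scripts have run is to let a real browser execute them and then read the finished DOM. A minimal sketch in Python with Selenium (assuming a chromedriver is installed; in R, the RSelenium package plays the same role, and the selector is the one from the question's code):
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://www.kickstarter.com/projects/1141096871/support-ctrl-shft/description")

# At this point the scripts have run, so the DOM matches what a browser shows.
for node in driver.find_elements(By.CSS_SELECTOR, "div.mb0-md"):
    print(node.text)

driver.quit()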
Related
I need to scrape this page to get the value of the comment, as well as the Document and Submitter Information on the right side.
https://www.regulations.gov/document?D=FDA-2014-N-1207-7673
I've tried using read_html() and read_xml() from the xml2 package with no luck. I've tried getURLContent() from RCurl, followed by xmlParse() and htmlParse() from the XML package.
I even tried simply readLines(), which does not actually get me the content of the website.
I suppose I don't have a great understanding of how this all works. Previous websites I have always been able to scrape with simply html_parse(), html_nodes() and html_attr(). How can I accomplish scraping this website?
I'm trying to scrape the text of Disqus comments from an online local newspaper using RSelenium in Chrome, but am finding the going a little tough for my capabilities. I have searched many places but did not find the right information, or (most probably) I am using the wrong search terms.
So far I have managed to get the "normal" HTML from the pages but cannot pinpoint the right class, CSS selector or id to get the Disqus comments. I have also tried SelectorGadget, but this only points to #dsq-app2, which selects the whole Disqus area at once and does not allow me to select smaller parts of the area. I tried the same with RSelenium using elems <- mybrowser$findElement(using = "id", "dsq-app2"), with an "environment" being stored in elems. Then I tried to find child elements within elems but came up blank.
Viewing the page via the developer tools, I can see that the interesting stuff is within an iframe called #dsq-app2, and I have managed to extract all its source through elems$getPageSource() after switching to the frame using elems$switchToFrame("dsq-app2"). This outputs all the HTML as one big "dirty" chunk, and short of searching for the required stuff held in <p> tags and other elements of interest, such as posters' usernames in data-role="username", I don't seem to find the right way forward.
I have also tried using the advice given here, but the Disqus setup is a little different. One of the pages I'm trying is this one, with the bulk of the comments area within a section called conversation and a ton of other ids such as posts, plus the unordered list with id=post-list that ultimately carries the comments I need to scrape.
Any ideas or tips are most welcome and received with thanks.
After a lot of testing and experimenting I managed to do it. I don't know if it's the cleanest or prettiest solution, but it works. I hope others will find it useful. Basically, what I did was find the URL that points to the comments only. It is found within the "dsq-app2" iframe, in an attribute called src. At first I was also switching to the iframe, but found that this works without doing so.
remDr$navigate("toTheRequiredPage")
elemsource <- remDr$findElement(using = "id", value = "dsq-app2")
src <- elemsource$getElementAttribute("src") # find the src attribute within the iframe
remDr$navigate(src[[1]]) # navigate to the src url
# find the posts from the new page
elem <- remDr$findElement(using = "id", value = "posts")
elem.posts <- elem$findChildElements(using = "id", value = "post-list")
elem.msgs <- elem.posts[[1]]$findChildElements(using = "class name", value = "post-message")
length(elem.msgs)
msgtext <- elem.msgs[[1]]$getElementText() # find first post's text
msgtext # print message
Update: I found out that if I use remDr$switchToFrame("dsq-app2") I do not need to use the src URL as explained above. So there are actually two ways of scraping:
1. Use switchToFrame("nameOfFrame"), or
2. Use my prior solution of using the src URL from the iframe.
Hope this makes it clearer.
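For comparison, here is the switchToFrame route as a sketch in Python's selenium bindings; the ids ("dsq-app2", "posts", "post-list") and the class "post-message" are taken from the answer above and may have changed since:
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("toTheRequiredPage")     # replace with the actual article URL
driver.switch_to.frame("dsq-app2")  # jump straight into the Disqus iframe
post_list = driver.find_element(By.ID, "post-list")
msgs = post_list.find_elements(By.CLASS_NAME, "post-message")
print(len(msgs))
if msgs:
    print(msgs[0].text)             # first post's text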
I've been playing around with scraping webpages using BeautifulSoup for a few weeks now. An issue I recently ran into, and hadn't seen before, is where the content of the webpage is different from what's shown in the page's source code and what's given in the URL request response.
For example, let's look at Yelp. This (http://www.yelp.com/search?find_desc=&find_loc=Pittsburgh%2C+PA%2C+USA&ns=1) will bring up all 63k businesses in the Pittsburgh, PA area. If we look at the page's source, we see that it matches the content (if you search for the word "Showing", you find the code below).
<span class="pagination-results-window">
Showing 1-10 of 63936
</span>
Now, let's only look at restaurants in the Pittsburgh, PA area. This reduces the number of returned results from 63k to 5k. However, if we look at the page's source, the same code shown above is seen. Moreover, the first returned result in the page source matches the 63k page, not the 5k page. At first I thought this might be due to Mozilla caching webpage content, but I quickly nixed this idea by scraping the link for the 5k restaurants (http://www.yelp.com/search?find_desc=&find_loc=Pittsburgh%2C+PA%2C+USA&ns=1#cflt=restaurants). The result showed that it collected the HTML that generates the page with 63k businesses, not the 5k restaurants I was expecting.
My question is: what is causing this? Is this done intentionally by Yelp, or is it caused by something external? I've tried looking this up on my own, but I'm unable to find anything that explains it using the verbiage in this question's title.
Let me know if you need more details, I'm happy to provide the few more lines of code that I left out.
Thanks!
Yelp, like many dynamic sites, uses AJAX to fetch more data and/or jQuery to perform filtering. Plain scraping can only pull the base HTML, before any jQuery or AJAX updates are performed.
Both of these URLs are most likely the same to server-side code:
search?find_desc=&find_loc=Pittsburgh%2C+PA%2C+USA&ns=1
search?find_desc=&find_loc=Pittsburgh%2C+PA%2C+USA&ns=1#cflt=restaurants
That is why you see the same scraped results in both cases. However, the fragment #cflt=restaurants is used by client-side JavaScript and kicks off some script to filter results.
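You can verify this yourself: because the fragment is never sent over HTTP, both URLs produce the same server response. A small sketch with requests and BeautifulSoup (the class name comes from the snippet in the question and may have changed since):
import requests
from bs4 import BeautifulSoup

base = "http://www.yelp.com/search?find_desc=&find_loc=Pittsburgh%2C+PA%2C+USA&ns=1"
for url in (base, base + "#cflt=restaurants"):
    soup = BeautifulSoup(requests.get(url).text, "html.parser")
    span = soup.find("span", class_="pagination-results-window")
    print(span.get_text(strip=True) if span else "span not found")
# Both iterations print the same result count, because #cflt=restaurants
# never reaches the server.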
I am VERY new to web scraping in any shape or form. I've been trying to get into Python, and I heard that web scraping was a good way to expose myself to it. So, after many Google searches, I finally settled on two highly recommended modules: Requests and BeautifulSoup. I've read up a fair amount on both and have a basic understanding of how to use them.
I found a very basic website (basic in that there isn't much content, JavaScript, or the like, making the HTML a lot easier to parse) and wrote the following code:
import requests
from bs4 import BeautifulSoup
soup = BeautifulSoup(requests.get('http://www.basicwebs.co.uk/contact.htm').text)
for row in soup('div', {'id': 'Layer1'})[0].h2('font'):
    tds = row.text
    print tds
This code works. It produces the following result:
BASIC
WEBS
Contact details
Contact details
Which, if you spend a few minutes inspecting the code on this page, is the correct result (I assume). Now, the thing is, while this code works, what if I wanted to get a different part of the page? Like the little paragraph that states "If you are interested in having a website designed and hosted by us, please contact us either by e-mail or telephone." My understanding would be to simply change the index number to the corresponding header that the text is found under, but when I change it I get a message that the list index is out of range.
Can anybody help? (as simple as you can make it, if possible)
I'm using Python 2.7.8
The text you require is surrounded by a font tag with the attribute size="3", so one way to do it is by selecting the first occurrence of it, like this:
font_elements = soup('font', {'size': '3'})
if font_elements:
    print font_elements[0].text
RESULT:
If you are interested in having a website designed
and hosted by us, please contact us either by e-mail or telephone.
You can do this directly:
soup('font',{'size': '3'})[0].text
However, I want to draw your attention to the mistake you made before.
soup('div',{'id': 'Layer1'})
This returns a list of all div tags with id='Layer1', of which there can be more than one. Unfortunately, the HTML you were trying to parse has only one such element, so indexing beyond the first went out of bounds.
You can use an interactive Python interpreter such as bpython or IPython to inspect what you are getting in an object. Happy hacking!
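For instance, a quick way to see how many elements a selector actually matched before indexing into it (a small sketch reusing the soup object from the question):
layers = soup('div', {'id': 'Layer1'})
print len(layers)            # how many matching divs were found?
if len(layers) > 1:
    print layers[1].text     # only index past 0 when it actually exists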
Another approach, using urlopen from the standard library (Python 3):
from urllib.request import urlopen
from bs4 import BeautifulSoup

web_address = 'http://www.basicwebs.co.uk/contact.htm'  # note: no stray space in the URL
html = urlopen(web_address)
bs = BeautifulSoup(html.read(), 'html.parser')
contact_info = bs.findAll('h2', {'align': 'left'})[0]   # first left-aligned h2
for info in contact_info:                               # iterate the tag's children
    print(info.get_text())
Hi, I need to retrieve the URL of the first article for a term I search on nytimes.com.
So if I search for Apple, this link would return the result:
http://query.nytimes.com/search/sitesearch?query=Apple&srchst=cse
You just replace Apple with the term you are searching for.
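As a sketch, URL-encoding the term keeps multi-word searches working (Python 2 here, to match the accepted answer below; the term is just an example):
import urllib

term = "Apple"  # any search term, e.g. "New York"
url = "http://query.nytimes.com/search/sitesearch?query=%s&srchst=cse" % urllib.quote(term)
print url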
If you click on that link, you will see that the NYTimes asks you if you mean Apple Inc.
I want to get the URL for this link and go to it.
Then you will just get a lot of information on Apple Inc.
If you scroll down you will see the articles related to Apple.
So what I ultimately want is the URL of the first article on this page.
So I really do not know how to go about this. Do I use Java, or what do I use? Any help would be greatly appreciated; I would put a bounty on this later, but I need the answer ASAP.
Thanks
EDIT: Can we do this in Java?
You can use Python with the standard urllib module to fetch the pages and the great HTML parser BeautifulSoup to obtain the information you need from the pages.
From the documentation of BeautifulSoup, here's sample code that fetches a web page and extracts some info from it:
import urllib2
from BeautifulSoup import BeautifulSoup
page = urllib2.urlopen("http://www.icc-ccs.org/prc/piracyreport.php")
soup = BeautifulSoup(page)
for incident in soup('td', width="90%"):
    where, linebreak, what = incident.contents[:3]
    print where.strip()
    print what.strip()
    print
This is a nice and detailed article on the topic.
You certainly can do it in Java. Look at the HttpURLConnection class. Basically, you give it a URL, call the connect function, and you get back an input stream with the contents of the page, i.e. HTML text. You can then process that and parse out whatever information you want.
You're facing two challenges in the project you are describing. The first, and probably the lesser one, is figuring out the mechanics of how to connect to a web page and get hold of the text within your program. The second and probably bigger challenge will be figuring out exactly how to extract the information you want from that text. I'm not clear on the details of your requirements, but you're going to have to sort through a ton of text to find what you're looking for. Without actually looking at the NY Times site at the moment, I'm sure it has all sorts of decorations like pretty pictures, the company logo and headlines, and then there are menus and advertisements and all sorts of other stuff. I sincerely doubt that the NY Times, or almost any other commercial website, is going to return a search page that includes nothing but a link to the article you are interested in. Somehow your program will have to figure out that the first link is to the "subscribe online" page, the second is to an advertisement, the third is to customer service, the fourth and fifth are additional advertisements, the sixth is to the home page, and so on, until you finally get to the one you're actually interested in. How will you identify the interesting link? There are probably headings or formatting that make it recognizable to a human being, but a human uses a lot of intuition to screen out the clutter, and that intuition can be difficult to reproduce in a program.
Good luck!
You can do this in C# using the HTML Agility Pack, or using LINQ to XML if the site is valid XHTML. EDIT: It isn't valid XHTML; I checked.
The following (tested) code will get the URL of the first search result:
var doc = new HtmlWeb().Load(@"http://query.nytimes.com/search/sitesearch?query=Apple&srchst=cse");
var url = HtmlEntity.DeEntitize(doc.DocumentNode.Descendants("ul")
.First(ul => ul.Attributes["class"] != null
&& ul.Attributes["class"].Value == "results")
.Descendants("a")
.First()
.Attributes["href"].Value);
Note that if their website changes, this code might stop working.