R (rvest) Web Scraping Multiple Pages - html

I am looking to scrape the results from the Philly DA Democratic Primary race. I want to scrape the ward-division results from the website. I need the ward-division number (e.g. 01-01), the name of the candidate (e.g. LARRY KRASNER), and the percent each candidate received. For this website, there are 86 pages of results at the ward-division level:
https://results.philadelphiavotes.com/ResultsSW.aspx?type=CTY&map=CTY#page-1
Using the SelectorGadget tool, the CSS selectors for each are as follows:
ward-division numbers = ".precinct-results-orangebox-title h1"
name of candidates= ".precinct-results-databox1 h1"
percent results= "#Datawrapper 16DEM .bar-percent"
When I tried to initially scrape the website data, I used the following code:
#Read in the Data
daresults <- read_html("https://results.philadelphiavotes.com/ResultsSW.aspx?type=CTY&map=CTY#page-1")
#Ward-Division Numbers
warddiv <- daresults %>%
  html_nodes(".precinct-results-orangebox-title h1") %>%
  html_text()
And I received a response of
character(0)
Any help on cleaning up the code and creating a loop to scrape all 86 pages would be appreciated. Thanks.

It looks like the data is stored as JSON. From the Network tab in your browser's developer tools, the files are located here:
https://phillyresws.azurewebsites.us/ResultsAjax.svc/GetMapData?type=CTY&category=PREC&raceID=16&osn=16&county=04&party=DEM&LanguageID=1
https://phillyresws.azurewebsites.us/ResultsAjax.svc/GetMapData?type=CTY&category=PREC&raceID=17&osn=17&county=04&party=REP&LanguageID=1
https://phillyresws.azurewebsites.us/ResultsAjax.svc/GetMapData?type=CTY&category=PREC&raceID=18&osn=18&county=04&party=DEM&LanguageID=1
https://phillyresws.azurewebsites.us/ResultsAjax.svc/GetMapData?type=CTY&category=PREC&raceID=19&osn=19&county=04&party=REP&LanguageID=1
Use jsonlite or another package to read each file and parse it into a data frame.
For example:
url<-"https://phillyresws.azurewebsites.us/ResultsAjax.svc/GetMapData?type=CTY&category=PREC&raceID=16&osn=16&county=04&party=DEM&LanguageID=1"
jsonlite::fromJSON(url)
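If the precinct feed contains every ward-division, you won't need to loop over the 86 HTML pages at all; one request per race returns the whole set. A minimal sketch, assuming jsonlite is installed; the element and field names in the commented flattening step (Precincts, Name, Candidate, Percent) are placeholders you would replace after inspecting the actual payload:
library(jsonlite)

# Fetch the DA Democratic primary feed (raceID = 16) and inspect its structure
url <- "https://phillyresws.azurewebsites.us/ResultsAjax.svc/GetMapData?type=CTY&category=PREC&raceID=16&osn=16&county=04&party=DEM&LanguageID=1"
res <- fromJSON(url)
str(res, max.level = 2)  # find where the precinct-level records live

# Once you know which element holds the precinct records, flatten it, e.g.:
# warddiv_results <- data.frame(
#   warddiv   = res$Precincts$Name,       # placeholder names
#   candidate = res$Precincts$Candidate,
#   pct       = res$Precincts$Percent
# )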

Related

rvest - find html-node with last page number

I'm learning web scraping and created a little exercise for myself to scrape all titles of a recipe site: https://pinchofyum.com/recipes?fwp_paged=1. (I was inspired by this post: https://www.kdnuggets.com/2017/06/web-scraping-r-online-food-blogs.html).
I want to scrape the value of the last page number, which is (at the time of writing) 64. You can find the number of pages at the bottom. I can see that this is stored as "a.facetwp-page last", but for some reason I cannot access this node. I can also see that the page number values are stored as 'data-page', but I'm unable to get this value through 'html_attrs'.
I believe the parent node is "div.facetwp-pager" and I can access that one as follows:
library(rvest)
pg <- read_html("https://pinchofyum.com/recipes")
html_nodes(pg, "div.facetwp-pager")
But this is as far as I get. I guess I'm missing something small, but cannot figure out what it is. I know about RSelenium, but I would like to know if and how to get that last page value (64) with rvest.
Sometimes scraping with rvest doesn't work, especially when the webpage is dynamically generated with JavaScript (I also wasn't able to scrape this info with rvest). In those cases, you can use the RSelenium package. I was able to scrape your desired element like this:
library(RSelenium)
rD <- rsDriver(browser = c("firefox")) #specify browser type you want Selenium to open
remDr <- rD$client
remDr$navigate("https://pinchofyum.com/recipes?fwp_paged=1") # navigates to webpage
webElem <- remDr$findElement(using = "css selector", ".last") #find desired element
txt <- webElem$getElementText() # gets us the text of the element
#> txt
#>[[1]]
#>[1] "64"

How do I webscrape .dpbox table using selectorgadget with R (rvest)?

I've been trying to scrape data from a specific website using SelectorGadget in R. For example, I successfully scraped http://www.dotabuff.com/heroes/abaddon/matchups before. Usually, I just click on the tables I want using the SelectorGadget Chrome extension and put the resulting CSS selector into the code as follows.
urlx <- "http://www.dotabuff.com/heroes/abaddon/matchups"
rawData <- html_text(html_nodes(read_html(urlx),"td:nth-child(4) , td:nth-child(3), .cell-xlarge"))
In this case, the html_nodes function does return a whole bunch of nodes (340)
{xml_nodeset (340)}
However, when I try to scrape http://www.dotapicker.com/heroes/Abaddon using SelectorGadget, which gives me this code:
urlx <- "http://www.dotapicker.com/heroes/abaddon"
rawData <- html_text(html_nodes(read_html(urlx),".ng-scope:nth-child(1) .ng-scope .ng-binding"))
Unfortunately, no nodes actually show up after the html_nodes function is called, and I get the result
{xml_nodeset (0)}
I feel like this has something to do with the table being nested in a drop-down box (previously, the table was right on the webpage itself), but I'm not sure how to get around it.
Thank you and I appreciate any help!
It seems like this page loads some of its data dynamically using XHR. In Chrome you can check this by opening the developer tools ("Inspect") and going to the Network tab. If you do, you will see a number of JSON files being loaded. You can fetch those JSON files directly and then parse them to extract the info you need. Here is a quick example:
library(httr)
library(jsonlite)
heroinfo_json <- GET("http://www.dotapicker.com/assets/json/data/heroinfo.json")
heroinfo_flat <- fromJSON(content(heroinfo_json, type = "text"))
#> No encoding supplied: defaulting to UTF-8.
winrates_json <- GET("http://www.dotapicker.com/assets/dynamic/winrates10d.json")
winrates_flat <- fromJSON(content(winrates_json, type = "text"))
#> No encoding supplied: defaulting to UTF-8.
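From there you can look inside the parsed lists to pick out the columns you need; the structure depends on what the site returns, so a quick str() call is a safe first step (a minimal sketch):
# Explore what the parsed JSON actually contains before extracting anything
str(heroinfo_flat, max.level = 1)
str(winrates_flat, max.level = 1)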

Extracting all (possible) optional date values from web page [R]

In this URL string, the "toDate=1399849199999" part refers to UNIX time expressed in milliseconds, which is used to retrieve the Premier League table for a particular day.
In this case, the UNIX time corresponds to 11 May 2014.
as.POSIXlt (1399849199999/1000, tz = "GMT", origin = "1970-01-01")
I would like to retrieve all possible UNIX time values for a particular month. For the URL provided here, those 6 values are stored in the webpage source code and look like this:
<select name="toDate" id="date" class="selectToSlider" widget="selectToSlider" labels="18" tooltip="false" wrapperClass="selectToSliderWrapper selectToSliderMatchDate"><optgroup label="results"><option value="1399157999999">SAT 03</option><option value="1399244399999">SUN 04</option><option value="1399330799999">MON 05</option><option value="1399417199999" selected="selected">TUE 06</option><option value="1399503599999">WED 07</option><option value="1399849199999">SUN 11</option></optgroup><optgroup label="fixtures"></optgroup></select>
Previously I extracted such information with regular expressions, but it was a pain in the neck and I want to do this in an easier way.
I would appreciate it if someone could provide code (possibly with explained steps) that extracts those values using a web scraping package in R, preferably XML. I tried it myself but was unsuccessful...
We can try using the XML package to parse the HTML from the link you provided, then extract the specific information required (out of the whole HTML) using XPath:
library(XML)
EPL.URL <- "http://www.premierleague.com/en-gb/matchday/league-table.html?season=2013-2014&month=MAY&timelineView=date&toDate=1399849199999&tableView=CURRENT_STANDINGS"
EPL.doc <- htmlParse(EPL.URL)
xpathSApply(EPL.doc, "//optgroup[@label='results']/option", xmlGetAttr, "value")
rvest makes this pretty easy. Look for the "option" nodes, then grab the "value" attributes.
library("rvest")
h <- read_html('<select name="toDate" id="date" class="selectToSlider" widget="selectToSlider" labels="18" tooltip="false" wrapperClass="selectToSliderWrapper selectToSliderMatchDate"><optgroup label="results"><option value="1399157999999">SAT 03</option><option value="1399244399999">SUN 04</option><option value="1399330799999">MON 05</option><option value="1399417199999" selected="selected">TUE 06</option><option value="1399503599999">WED 07</option><option value="1399849199999">SUN 11</option></optgroup><optgroup label="fixtures"></optgroup></select>')
h %>% html_nodes("option") %>% html_attr("value")
[1] "1399157999999" "1399244399999" "1399330799999"
[4] "1399417199999" "1399503599999" "1399849199999"

R - Extracting Tables From Websites Using XML Package

I am trying to replicate the method used in a previous answer here, Scraping html tables into R data frames using the XML package, for my own work but cannot get the data to extract. The website I am using is:
http://www.footballfanalytics.com/articles/football/euro_super_league_table.html
I just wish to extract a table of each team name and their current rating score. My code is as follows:
library(XML)
theurl <- "http://www.footballfanalytics.com/articles/football/euro_super_league_table.html"
tables <- readHTMLTable(theurl)
n.rows <- unlist(lapply(tables, function(t) dim(t)[1]))
tables[[which.max(n.rows)]]
This produces the error message
Error in tables[[which.max(n.rows)]] :
attempt to select less than one element
Could anyone suggest a solution please? Is there something in this particular site causing this not to work? Or is there a better alternative method I can try? Thanks
Seems as if the data is loaded via JavaScript. Try:
library(XML)
theurl <- "http://www.footballfanalytics.com/xml/esl/esl.xml"
doc <- xmlParse(theurl)
cbind(team = xpathSApply(doc, "/StatsData/Teams/Team/Name", xmlValue),
      points = xpathSApply(doc, "/StatsData/Teams/Team/Points", xmlValue))
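If you prefer a proper data frame with numeric points rather than a character matrix, the same xpathSApply() calls can feed data.frame() directly (a small variation on the code above):
esl <- data.frame(
  team   = xpathSApply(doc, "/StatsData/Teams/Team/Name", xmlValue),
  points = as.numeric(xpathSApply(doc, "/StatsData/Teams/Team/Points", xmlValue)),
  stringsAsFactors = FALSE
)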

Extracting population data from website; wiki town webpages

G'day Everyone,
I am looking for a raster layer for human population/habitation in Australia. I have tried finding some free datasets online but couldn't really find anything in a useful format. I thought it might be interesting to try to scrape population data from Wikipedia and make my own raster layer. To this end I have tried getting the info from wiki, but not knowing anything about HTML has not helped me.
The idea is to supply a list of all the towns in Australia that have wiki pages and extract the appropriate data into a data.frame.
I can get the webpage source data into R, but am stuck on how to extract the particular data that I want. The code below shows where I am stuck; any help or hints in the right direction would be really appreciated.
I thought I might be able to use readHTMLTable() because, in the normal webpage, the info I want is off to the right in a nice table. But when I use this function I get an error (below). Is there any way I can specify this table when I am getting the source info?
Sorry if this question doesn't make much sense, I don't have any idea what I am doing when it comes to searching HTML files.
Thanks for your help, it is greatly appreciated!
Cheers,
Adam
require(XML)  # htmlParse() and readHTMLTable() come from the XML package
loc.names <- data.frame(town = c('Sale', 'Bendigo'), state = c('Victoria', 'Victoria'))
u <- paste('http://en.wikipedia.org/wiki/',
sep = '', loc.names[,1], ',_', loc.names[,2])
res <- lapply(u, function(x) htmlParse(x))
Error when I use readHTMLTable:
tabs <- readHTMLTable(res[1])
Error in (function (classes, fdef, mtable) :
unable to find an inherited method for function ‘readHTMLTable’ for signature ‘"list"’
For instance, some of the data I need looks like this in the HTML source. My question is how do I specify these locations in the HTML I have?
/ <span class="geo">-38.100; 147.067
title="Victoria (Australia)">Victoria</a>. It has a population (2011) of 13,186
res is a list, so in this case you need to use res[[1]] rather than res[1] to access its elements.
Using readHTMLTable on these elements will give you all the tables. The geo info is contained in a table with class = "infobox vcard", so you can extract these tables separately and then pass them to readHTMLTable:
require(XML)
lapply(sapply(res, getNodeSet, path = '//*[@class="infobox vcard"]'),
       readHTMLTable)
If you are not familiar with XPath, the selectr package allows you to use CSS selectors instead, which may be easier.
require(selectr)
> querySelectorAll(res[[1]], "table span .geo")
[[1]]
<span class="geo">-38.100; 147.067</span>
[[2]]
<span class="geo">-38.100; 147.067</span>