R loop with html_nodes (rvest) isn't catching all data - html

I would like to make a loop with html_nodes to capture the values of some nodes (the nodes themselves, not their text). I have these values:
library(rvest)
country <- c("Canada", "US", "Japan", "China")
With those values ("Canada", "US", ...), I've written a loop that builds a URL by pasting each value onto "https://en.wikipedia.org/wiki/"; for each one it applies read_html(i) and a sequence of steps to finally capture a node with html_nodes('a.page-link') - yes, a node, not text - and saves that html_nodes(...) result as.character() in a data.frame (it could also be a list).
dff <- NULL
for (i in country) {
  url <- paste0("https://en.wikipedia.org/wiki/", i)
  page <- read_html(url)
  b <- page %>%
    html_nodes('h2.flow-title') %>%
    html_nodes('a.page-link') %>%
    as.character()
  dff <- data.frame(b)
}
The problem is that this code only saves the data from the last country: it runs the first country and stores its nodes, but when it runs the next country the first result is erased and replaced by the new one, and so on, so the final result contains only the data from the last country.
I would be grateful for your help!

As the comment mentioned, the line dff <- data.frame(b) is overwriting dff on each loop iteration. One solution is to create an empty list and append the data to it.
In this example the list items are named for the country queried.
library(rvest)
country <- c("Canada", "US", "Japan", "China")
# initialize the empty list
dff <- list()
for (i in country) {
  url <- paste0("https://en.wikipedia.org/wiki/", i)
  page <- read_html(url)
  b <- page %>%
    html_nodes('h2.flow-title') %>%
    html_nodes('a.page-link') %>%
    as.character()
  # append the new data onto the list, named for the country
  dff[[i]] <- data.frame(b)
}
To access the data, one can use dff$Canada, or lapply to process the entire list.
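For example, a minimal sketch of stacking the per-country results into a single data frame (this assumes every list element ends up with the same columns):
# stack all per-country data frames into one
combined <- do.call(rbind, dff)
# or process each element individually
lapply(dff, nrow)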
Note: I ran your example and it returned no results, so double-check the node selectors.

Related

How to extract table, convert it to data frame ,write as csv file and deal with child tables?

I want to extract bike-sharing system data from a Wiki page and convert it to a data frame, but I can't write the file as a CSV; there is an error. Also, when I use the head() or str() functions to inspect the result, I can't work out which table I have, because the output comes with so many unorganized details.
Also note that this HTML page contains at least three table nodes under the root HTML node, so you will need the html_nodes(root_node, "table") function to get all of them:
<html>
<table>(table1)</table>
<table>(table2)</table>
<table>(table3)</table>
...
</html>
url <- "https://en.wikipedia.org/wiki/List_of_bicycle-sharing_systems"
root_node <- read_html(url)
table_nodes <- html_nodes(root_node, "table")
Bicycle_sharing <- html_table(table_nodes, fill = TRUE)
head(Bicycle_sharing)
summary(Bicycle_sharing)
str(Bicycle_sharing)
## Exporting a data frame as a CSV file. html_table() returned a list of
## data frames, so pick the one you need (here the second, as an example):
write.csv(Bicycle_sharing[[2]], "raw_bike_sharing_systems.csv", row.names = FALSE)
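Since html_table() returns several tables, a quick way to find the one you want before exporting (just an inspection sketch; the index may shift if Wikipedia's layout changes):
# compare the tables' shapes and leading column names
sapply(Bicycle_sharing, ncol)
lapply(Bicycle_sharing, function(t) head(names(t), 3))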
Alternatively, with the tidyverse:
library(tidyverse)
library(rvest)
data <- "https://en.wikipedia.org/wiki/List_of_bicycle-sharing_systems" %>%
  read_html() %>%
  html_table() %>%
  getElement(2) %>%
  janitor::clean_names()
data %>%
  write_csv(file = "bike_sharing.csv")

Create a loop within a function so that URLs return a dataframe

I was provided with a list of identifiers (in this case the identifier is called an NPI). These identifiers can be copied and pasted into this website (https://npiregistry.cms.hhs.gov/registry/?). For each NPI number I want to return the physician's name, address, phone number, and specialty.
I have over 3,000 identifiers so a copy and paste is not efficient and not easily repeatable for future use.
If possible, I would like to create a list of URLs, pass them into a function, and received a dataframe that provides me with the variables mentioned above (NPI, NAME, ADDRESS, PHONE, SPECIALTY).
I was able to write a function that produces the URLs needed:
Here are some NPI numbers for reference: 1417024746, 1386790517, 1518101096, 1255500625.
This is my code for reading in the file that contains my NPIs
npiList <- c("1417024746", "1386790517", "1518101096", "1255500625")
npiList <- as.list(npiList)
npiList <- unlist(npiList, use.names = FALSE)
This is the function to return the list of URLs:
npiaddress <- function(x){
  url <- paste("https://npiregistry.cms.hhs.gov/registry/search-results-table?number=", x, "&addressType=ANY", sep = "")
  return(url)
}
I saved the list to a variable and perhaps this is my downfall:
npi_urls <- npiaddress(npiList)
From here I wrote a function that can accept a single URL, retrieves the data I want and turns it into a dataframe. My issue is that I cannot pass multiple URLs:
npiLookup <- function(x){
  url <- x
  webpage <- read_html(url)
  npi_html <- html_nodes(webpage, "td")
  npi <- html_text(npi_html)
  npi[4] <- gsub("\r?\n|\r", " ", npi[4])
  npi[4] <- gsub("\r?\t|\r", " ", npi[4])
  npiFinal <- npi[c(1:2, 4:6)]
  npiFinal <- as.data.frame(npiFinal)
  npiFinal <- t(npiFinal)
  npiFinal <- as.data.frame(npiFinal)
  names(npiFinal) <- c("NPI", "NAME", "ADDRESS", "PHONE", "SPECIALTY")
  return(npiFinal)
}
For example:
If I wanted to get a dataframe for the following identifier (1417024746), I can run this and it works:
x <- npiLookup("https://npiregistry.cms.hhs.gov/registry/search-results-table?number=1417024746&addressType=ANY")
View(x)
My output for the example returns the NPI, NAME, ADDRESS, PHONE, and SPECIALTY as desired, but again, I need to do this for several thousand NPI identifiers. I feel like I need a loop within npiLookup. I've also tried passing npi_urls to the npiLookup function, but it does not work.
Thank you for any help and for taking the time to read.
You're most of the way there. The final step uses this useful R idiom:
do.call(rbind, lapply(npiList, function(npi) {
  url <- npiaddress(npi)
  npiLookup(url)
}))
do.call is a base R function that applies a function (in this case rbind) to the list produced by lapply. That list is the result of running your npiLookup function on the url produced by your npiaddress for each element of npiList.
A few further comments for future reference, should anyone else come upon this question: (1) I don't know why you're doing the as.list, unlist sequence at the beginning; it's redundant. (2) The NPI registry provides a programming interface (API) that avoids the need to scrape data from the HTML pages; this might be more robust in the long run. (3) The NPI registry provides the entire dataset as a downloadable file; this might have been an easier way to go.
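To illustrate point (2), a minimal sketch of querying the registry's API instead of scraping the HTML; the endpoint and parameters below are assumptions based on the public NPPES API, so verify them against its documentation:
library(httr)
library(jsonlite)
npi_api_lookup <- function(npi) {
  # assumed endpoint and parameters; check the NPPES API docs before relying on this
  resp <- GET("https://npiregistry.cms.hhs.gov/api/",
              query = list(number = npi, version = "2.1"))
  fromJSON(content(resp, as = "text", encoding = "UTF-8"))
}
results <- lapply(npiList, npi_api_lookup)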

readHTMLTable in R - skipping NULL values

I am attempting to use the R function readHTMLTable to gather data from the online database at www.racingpost.com. I have a CSV file with 30,000 unique ids which can be used to identify individual horses. Unfortunately a small number of these ids are leading readHTMLTable to return the error:
Error in (function (classes, fdef, mtable) :
unable to find an inherited method for function ‘readHTMLTable’ for signature ‘"NULL"’
My question is - is it possible to set up a wrapper function that will skip the ids which return NULL values but then continue reading the remaining HTML tables? The reading stops at each NULL value.
What I have tried so far is this:
ids = c(896119, 766254, 790946, 556341, 62736, 660506, 486791, 580134, 0011, 580134)
which are all valid horse ids except 0011, which returns a NULL value. Then:
scrapescrape <- function(x) {
  link <- paste0("http://www.racingpost.com/horses/horse_home.sd?horse_id=", x)
  if (!is.null(readHTMLTable(link, which=2))) {
    Frame1 <- readHTMLTable(link, which=2)
  }
}
total_data = c(0)
for (id in ids) {
total_data = rbind(total_data, scrapescrape(id))
}
However, I think the error occurs at the if statement, which means the function stops when it reaches the first NULL value. Any help would be greatly appreciated - many thanks.
You could analyse the HTML first (inspect the page you get and find a way to recognise a false result) before reading the HTML table.
But you can also make sure the function returns NA when an error is thrown, like so:
library(XML)
scrapescrape <- function(x) {
  link <- paste0("http://www.racingpost.com/horses/horse_home.sd?horse_id=", x)
  tryCatch(readHTMLTable(link, which=2), error=function(e){NA})
}
ids <- c(896119, 766254, 790946, 556341, 62736, 660506, 486791, 580134, 0011, 580134)
lst <- lapply(ids, scrapescrape)
str(lst)
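To stack the successful lookups into one data frame afterwards, one option (a sketch; failed elements are NA, successful ones are data frames, and it assumes the tables share the same columns):
# keep only the lookups that produced a data frame, then bind the rows
ok <- Filter(is.data.frame, lst)
total_data <- do.call(rbind, ok)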
Using rvest you can do:
require(rvest)
require(purrr)
paste0("http://www.racingpost.com/horses/horse_home.sd?horse_id=", ids) %>%
map(possibly(~html_session(.) %>%
read_html %>%
html_table(fill = TRUE) %>%
.[[2]],
NULL)) %>%
discard(is.null)
The last line discards all "failed" attempts. If you want to keep them, just drop that line.

Is it possible, in R, to access the values of a list with a for loop on the names of the fields?

I have a big JSON file containing 18 fields, some of which contain further subfields. I read the file into R in the following way:
library(jsonlite)  # assuming jsonlite's fromJSON here
json_file <- "daily_profiles_Bnzai_20150914_20150915_20150914.json"
data <- fromJSON(sprintf("[%s]", paste(readLines(json_file), collapse=",")))
This gives me a giant list with all the fields contained in the JSON file. I want to turn it into a data.frame, doing some operations along the way. For example, if I do:
doc_length <- data.frame(t(apply(as.data.frame(data$doc_lenght_map), 1, unlist)))
os <- data.frame(t(apply(as.data.frame(data$operating_system), 1, unlist)))
navigation <- as.data.frame(data$navigation)
monday <- data.frame(t(apply(navigation[,grep("Monday",names(data$navigation))],1,unlist)))
Monday <- data.frame(apply(monday, 1, sum))
everything works fine: I get what I want, with all the right subfields, and then I want to join them into a final data.frame that I will use for other operations.
Now, I'd like to do something similar for the subset of fields where I don't need extra operations (so, for example, the days of the week contained in navigation are excluded). I'd like to have something like this (suppose I have a data.frame df):
for(name in names(data))
{
  df <- cbind(df, data.frame(t(apply(as.data.frame(data$name), 1, unlist))))
}
The above loop gives me errors. What I want is a way to access all the fields of the list automatically, as in the loop, with the iterator name taking on each field of the list in turn, so that I don't have to call them one by one before doing operations on them. I even tried
for(name in names(data))
{
  df <- cbind(df, data.frame(t(apply(as.data.frame(data[name]), 1, unlist))))
}
but it doesn't take all of the subfields. I also tried with
data[, name]
but it doesn't work either. So I think I need to use the "$" operator.
Is it possible to do something like that?
Thank you a lot!
Davide
Like the other commenters, I am confused, but I will throw this out to see if it might point you in the right direction.
# make mtcars a list as an example
data <- lapply(mtcars, identity)
do.call(
  cbind,
  lapply(
    names(data),
    function(name){
      data.frame(data[name])
    }
  )
)
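The underlying point, shown with a small sketch on the mtcars list from above: $ does not evaluate its argument, so data$name looks for a field literally called "name", whereas [ and [[ accept a character variable:
name <- "mpg"
identical(data[[name]], data$mpg)  # TRUE: [[ ]] uses the value of name
is.null(data$name)                 # TRUE: there is no field called "name"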

Using \Sexpr{} in LaTeX tabular environment

I am trying to use \Sexpr{} to include values from my R objects in a LaTeX table. I am essentially trying to replicate the summary output of an lm object in R, because xtable's built-in methods xtable.lm and xtable.summary.lm don't seem to include the F-statistic, adjusted R-squared, etc. (all the stuff at the bottom of the summary printout of the lm object in the R console). I tried to accomplish this by building a matrix to replicate the xtable.summary.lm output, then constructing a data frame of the relevant extra values so I can refer to them using \Sexpr{}. I used add.to.row to append a \multicolumn{} command that merges all columns of the last row of the LaTeX table, and passed all the information I need into that cell.
The problem is that I get an "Undefined control sequence" error for the \Sexpr{} expression inside the \multicolumn{} expression. Are these two not compatible? If so, what am I doing wrong, and if not, does anyone know how to achieve what I am trying to do?
Thanks,
Here is the relevant part of my code:
<<Test, results=tex>>=
model1 <- lm(stndfnl ~ atndrte + frosh + soph)
# Build matrix to replicate xtable.summary.lm output
x <- summary(model1)
colnames <- c("Estimate", "Std. Error", "t value", "Pr(>|t|)")
rownames <- c("(Intercept)", attr(x$terms, "term.labels"))
fpval <- pf(x$fstatistic[1],x$fstatistic[2], x$fstatistic[3], lower.tail=FALSE)
mat1 <- matrix(coef(x), nrow=length(rownames), ncol=length(colnames), dimnames=list(rownames,colnames))
# Make a data frame for extra information to be called by \Sexpr in last row of table
residse <- x$sigma
degf <- x$df[2]
multr2 <- x$r.squared
adjr2 <- x$adj.r.squared
fstat <- x$fstatistic[1]
fstatdf1 <- x$fstatistic[2]
fstatdf2 <- x$fstatistic[3]
extradat <- data.frame(v1 = round(residse,4), v2 =degf, v3=round(multr2,4), v4=round(adjr2,4),v5=round(fstat,3), v6=fstatdf1, v7=fstatdf2, v8=round(fpval,6))
addtorow<- list()
addtorow$pos <-list()
addtorow$pos[[1]] <- dim(mat1)[1]
addtorow$command <-c('\\hline \\multicolumn{5}{l}{Residual standard error:\\Sexpr{extradat$v1}} \\\\ ')
print(xtable(mat1, caption="Summary Results for Regression in Equation \\eqref{model1} ", label="tab:model1"), add.to.row=addtorow, sanitize.text.function=NULL, caption.placement="top")
You don't need to have \Sexpr{} in your R code; the R code can use the values directly. \Sexpr{} is not a LaTeX command, even though it looks like one; it's an Sweave command, so it doesn't work to have it as output from R code.
Try
addtorow$command <- paste('\\hline \\multicolumn{5}{l}{Residual standard error:',
                          extradat$v1, '} \\\\ ')
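Extending the same pattern, the other stored summary values can be pasted in as well (a sketch; the column count in \multicolumn must match your table):
addtorow$command <- paste0(
  '\\hline \\multicolumn{5}{l}{Residual standard error: ', extradat$v1,
  ' on ', extradat$v2, ' degrees of freedom} \\\\ ',
  '\\multicolumn{5}{l}{Multiple R-squared: ', extradat$v3,
  ', Adjusted R-squared: ', extradat$v4, '} \\\\ ',
  '\\multicolumn{5}{l}{F-statistic: ', extradat$v5, ' on ', extradat$v6,
  ' and ', extradat$v7, ' DF, p-value: ', extradat$v8, '} \\\\ ')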
Also, there's no need to completely recreate the matrix used by xtable; you can just build on the default output. Building on what you have above, something like:
mytab <- xtable(model1, caption="Summary Results", label="tab:model1")
addtorow$pos[[1]] <- dim(mytab)[1]
print(mytab, add.to.row=addtorow, sanitize.text.function=NULL,
caption.placement="top")
See http://people.su.se/~lundh/reproduce/sweaveintro.pdf for an example which you might be able to use as is.