How to read a <li> table in a webpage - html

I have debugged the program many times, trying to get a result like the following:
url 研究所知识库列表 (institute repository list)
/handle/1471x/1 力学研究所 (Institute of Mechanics)
/handle/1471x/8865 半导体研究所 (Institute of Semiconductors)
However, no matter what parameters I use, the result is not correct. The content of this table is part of the basis for my further analysis, so I am quite anxious about it. I am looking forward to your help.
## download community-list --- the 1st level of IR Grid
library(XML)
# load the webpage and parse it
community_url <- "http://www.irgrid.ac.cn/community-list"
com_source <- readLines(community_url, encoding = "UTF-8")
com_parsed <- htmlTreeParse(com_source, encoding = "UTF-8", useInternalNodes = TRUE)
# get the table nodes
tableNodes <- getNodeSet(com_parsed, "//table")
com_tb <- readHTMLTable(tableNodes[[8]], header = TRUE)
# get external links
xpath <- "//a/@href"
getHTMLExternalFiles(tableNodes[[8]], xpQuery = xpath)

It is unclear exactly what you want your end result to look like, but if you modify your XPath statements a bit to take advantage of the DOM structure, you can get something like this:
library(XML)
community_url<-"http://www.irgrid.ac.cn/community-list"
com_source <- readLines(community_url, encoding = "UTF-8")
com_parsed <- htmlTreeParse(com_source, encoding = "UTF-8", useInternalNodes = TRUE)
list_header <- xpathSApply(com_parsed, '//table[.//li]//h1', xmlValue)
hrefs <- xpathSApply(com_parsed, '//li[@class="communityLink"]//@href', function(x) unname(x))
display_text <- xpathSApply(com_parsed, '//li[@class="communityLink"]//a', xmlValue)
table_data <- cbind(display_text, hrefs)
colnames(table_data) <- c(list_header, "url")
table_data
The console output makes Stack Overflow think this answer is spam, so I attached a screenshot of it instead.
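If a data frame is more convenient for the further analysis mentioned in the question, the matrix above converts directly. A minimal sketch, assuming the code above ran and assuming (on my part) that later crawling wants absolute links rather than the relative /handle/... paths:
# Convert the result matrix to a data frame and build absolute URLs
# (assumption: downstream steps want full links, not relative paths)
com_df <- as.data.frame(table_data, stringsAsFactors = FALSE)
com_df$url <- paste0("http://www.irgrid.ac.cn", com_df$url)
head(com_df)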


R - Issue with the DOM of the danish parliament (webscraping)

I've been working on a webscraping project for the political science department at my university.
The Danish parliament is very transparent about its democratic process and uploads all the legislative documents to its website. I've been crawling over all pages starting in 2008. Right now I'm parsing the information into a dataframe, and I'm having an issue that I have not been able to resolve so far.
If we look at the DOM, we can see that they named most of the objects div.tingdok-normal. The number of objects varies between 16 and 19. To parse the information correctly for my dataframe, I tried to grep out the necessary parts according to patterns. However, the issue is that sometimes my patterns match more than once and I don't know how to tell R that I only want the first match.
For the sake of an example, I include some code:
library(RCurl)   # getURL()
library(rvest)   # read_html(), html_nodes(), html_text() and the %>% pipe

final.url <- "https://www.ft.dk/samling/20161/lovforslag/l154/index.htm"
to.save <- getURL(final.url)
p <- read_html(to.save)
normal <- p %>% html_nodes("div.tingdok-normal > span") %>% html_text(trim = TRUE)
tomatch <- c("Forkastet regeringsforslag", "Forkastet privat forslag", "Vedtaget regeringsforslag", "Vedtaget privat forslag")
type <- unique(grep(paste(tomatch, collapse = "|"), normal, value = TRUE))
Maybe you can help me with that.
My understanding is that you want to extract the text of the webpage, because the "tingdok-normal" elements are related to the text. I was able to get the text of the webpage with the following code, which also identifies the position of the first regex hit for each of the patterns to match.
library(pagedown)
library(pdftools)
library(stringr)
pagedown::chrome_print("https://www.ft.dk/samling/20161/lovforslag/l154/index.htm",
"C:/.../danish.pdf")
text <- pdftools::pdf_text("C:/.../danish.pdf")
tomatch <- c("(A|a)ftalen", "(O|o)pholdskravet")
nb_Tomatch <- length(tomatch)
list_Position <- list()
list_Text <- list()
for(i in 1:nb_Tomatch)
{
  # Locate the first hit of the regex
  # To locate all regex hits, use stringr::str_locate_all
  list_Position[[i]] <- stringr::str_locate(text, pattern = tomatch[i])
  list_Text[[i]] <- stringr::str_sub(string = text,
                                     start = list_Position[[i]][1, 1],
                                     end = list_Position[[i]][1, 2])
}
Here is another approach:
library(RDCOMClient)
library(stringr)
library(rvest)
url <- "https://www.ft.dk/samling/20161/lovforslag/l154/index.htm"
IEApp <- COMCreate("InternetExplorer.Application")
IEApp[['Visible']] <- TRUE
IEApp$Navigate(url)
Sys.sleep(5)
doc <- IEApp$Document()
html_Content <- doc$documentElement()$innerText()
tomatch <- c("(A|a)ftalen", "(O|o)pholdskravet")
nb_Tomatch <- length(tomatch)
list_Position <- list()
list_Text <- list()
for(i in 1:nb_Tomatch)
{
  # Locate the first hit of the regex in the page text
  # To locate all regex hits, use stringr::str_locate_all
  list_Position[[i]] <- stringr::str_locate(html_Content, pattern = tomatch[i])
  list_Text[[i]] <- stringr::str_sub(string = html_Content,
                                     start = list_Position[[i]][1, 1],
                                     end = list_Position[[i]][1, 2])
}
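For the narrow "first match only" issue with grep itself, a minimal sketch (assuming `normal` was built as in the question's rvest code above) is simply to index the grep result:
# Using the question's `tomatch` patterns and its `normal` vector:
# keep only the first element of `normal` that matches each pattern
first_hits <- sapply(tomatch, function(p) grep(p, normal, value = TRUE)[1])
first_hits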

Read HTML into R

I would like R to take a word in a column in a dataset, and return a value from a website. The code I have so far is below. So, for each word in the data frame column, it will go to the website and return the pronunciation (for example, the pronunciation on http://www.speech.cs.cmu.edu/cgi-bin/cmudict?in=word&stress=-s is "W ER1 D"). I have looked at the HTML of the website, and it's unclear what I would need to enter to return this value - it's between <tt> and </tt> but there are many of these. I'm also not sure how to then get that value into R. Thank you.
library(xml2)

for (word in df$word) {
  # build the query URL for each word and read the page
  result <- read_html(paste0("http://www.speech.cs.cmu.edu/cgi-bin/cmudict?in=", word, "&stress=-s"))
}
Parsing HTML is a tricky task in R, but there are a couple of ways to go about it. If the HTML converts well to XML and the website/API always returns the same structure, then you can use tools to parse XML. Otherwise you could use regex and call stringr::str_extract() on the HTML.
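A minimal sketch of that regex route, for illustration only (it uses str_match_all to keep the capture group, and assumes the pronunciation always sits inside a <tt>...</tt> pair; as noted below, the one you want is the second):
library(stringr)
# read the raw HTML for one word as a single string
html <- paste(readLines("http://www.speech.cs.cmu.edu/cgi-bin/cmudict?in=word&stress=-s"), collapse = "\n")
# pull the contents of every <tt> tag, then keep the second one
tt_contents <- str_match_all(html, "<tt>\\s*([^<]*?)\\s*</tt>")[[1]][, 2]
tt_contents[2]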
For your case, it is fairly easy to get the value you're looking for using XML tools. It's true that there are a lot of <tt> tags but the one you want is always in the second instance so you can just pull out that one.
#load packages. dplyr is just to use the pipe %>% function
library(httr)
library(XML)
library(dplyr)
#test words
wordlist = c('happy', 'sad')
for (word in wordlist){
  # build the url and GET the result
  url <- paste0("http://www.speech.cs.cmu.edu/cgi-bin/cmudict?in=", word, "&stress=-s")
  h <- handle(url)
  res <- GET(handle = h)
  # parse the HTML
  resXML <- htmlParse(content(res, as = "text"))
  # retrieve the second <tt>
  print(getNodeSet(resXML, '//tt[2]') %>% sapply(., xmlValue))
  # don't abuse your API
  Sys.sleep(0.1)
}
>[1] "HH AE1 P IY0 ."
>[1] "S AE1 D ."
Good luck!
EDIT: This code will return a dataframe:
#load packages. dplyr is just to use the pipe %>% function
library(httr)
library(XML)
library(dplyr)
#test words
wordlist = c('happy', 'sad')
#initialize the dataframe with a pronunciation field
pronunciation_list <- data.frame(pronunciation = character(), stringsAsFactors = F)
#loop over the words
for (word in wordlist){
  # build the url and GET the result
  url <- paste0("http://www.speech.cs.cmu.edu/cgi-bin/cmudict?in=", word, "&stress=-s")
  h <- handle(url)
  res <- GET(handle = h)
  # parse the HTML
  resXML <- htmlParse(content(res, as = "text"))
  # retrieve the second <tt>
  to_add <- data.frame(pronunciation = (getNodeSet(resXML, '//tt[2]') %>% sapply(., xmlValue)))
  # bind the data
  pronunciation_list <- rbind(pronunciation_list, to_add)
  # don't abuse your API
  Sys.sleep(0.1)
}
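If the original goal (one pronunciation per word in df$word) is taken literally, a small hedged variant wraps the same calls in a helper; cmu_pron is a hypothetical name, not part of the answer above:
library(httr)
library(XML)
# Hypothetical helper: return one word's pronunciation as a character string
cmu_pron <- function(word) {
  url <- paste0("http://www.speech.cs.cmu.edu/cgi-bin/cmudict?in=", word, "&stress=-s")
  res <- GET(url)
  resXML <- htmlParse(content(res, as = "text"))
  xmlValue(getNodeSet(resXML, "//tt[2]")[[1]])
}
# Usage sketch: add a pronunciation column to the asker's data frame
# df$pronunciation <- vapply(df$word, cmu_pron, character(1))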

Avoid getting "glued" words with R webscraping

When I use either of the two following blocks of code, I get "glued" words, by which I mean words that are not separated by a space even though they should be, and this is a problem. In the original HTML, it seems like they're separated by a <b> tag, and I'm not able to handle this. The two blocks do the same thing in different ways.
library(XML)
library(RCurl)
# Block 1---------
url <- "https://www.letras.mus.br/red-hot-chili-peppers/32739/"
u <- readLines(url)
h <- htmlTreeParse(file = u,
                   asText = TRUE,
                   useInternalNodes = TRUE,
                   encoding = "utf-8")
song <- getNodeSet(doc=h, path="//article", fun=xmlValue)
# Block 2---------
u <- "https://www.letras.mus.br/red-hot-chili-peppers/32739/"
h <- htmlParse(getURL(u))
song <- xpathSApply(h, path = "//article", fun = xmlValue)
Which returns something like:
[1] "Sometimes I feelLike I don't have a partnerSometimes I feelLike my only friendIs the city I live inThe city of angelsLonely as I amTogether we cryI drive on her streets'Cause she's my companionI walk through her hills'Cause she knows who I amShe sees my good deedsAnd she kisses me windyI never worryNow that is a lieI don't ever wanna feelLike I did that dayBut take me to the place I loveTake me all the wayIt's hard to believeThat there's nobody out thereIt's hard to believeThat I'm all aloneAt...
I was able to retrieve words with the following code:
library(RSelenium)
shell('docker run -d -p 4445:4444 selenium/standalone-firefox')
remDr <- remoteDriver(remoteServerAddr = "localhost", port = 4445L, browserName = "firefox")
remDr$open()
remDr$navigate("https://www.letras.mus.br/red-hot-chili-peppers/32739/")
remDr$screenshot(display = TRUE, useViewer = TRUE)
page_Content <- remDr$getPageSource()[[1]]
list_Text_Song <- list()
for(i in 1:30)
{
  print(i)
  web_Obj <- tryCatch(remDr$findElement("xpath", paste0("//*[@id='js-lyric-cnt']/article/div[2]/div[2]/p[", i, "]")), error = function(e) NA)
  list_Text_Song[[i]] <- tryCatch(web_Obj$getElementText(), error = function(e) NA)
}
list_Text_Song <- unlist(list_Text_Song)
list_Text_Song <- list_Text_Song[!is.na(list_Text_Song)]
The words are not glued with this approach.
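Another hedged option, assuming (my guess) that the glued words come from line-break tags such as <br> being dropped when xmlValue flattens the <article> node: replace them with newlines in the raw HTML before parsing.
library(XML)
library(RCurl)
u <- "https://www.letras.mus.br/red-hot-chili-peppers/32739/"
raw_html <- getURL(u)
# turn <br> and <br/> tags into newlines before the text is flattened
raw_html <- gsub("<br\\s*/?>", "\n", raw_html)
h <- htmlParse(raw_html, asText = TRUE, encoding = "utf-8")
song <- xpathSApply(h, path = "//article", fun = xmlValue)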

rvest cannot find node with xpath

This is the website I scrape: the PPP projects list at http://www.cpppc.org:8082/efmisweb/ppp/projectLivrary/toPPPList.do
I want to use XPath to select a node like the one below.
The XPath I get by using "inspect element" is //*[@id="pppListUl"]/li[1]/div[2]/span[2]/span
My scripts are like below:
library(rvest)

a <- html("http://www.cpppc.org:8082/efmisweb/ppp/projectLivrary/toPPPList.do")
b <- html_nodes(a, xpath = '//*[@id="pppListUl"]/li[1]/div[2]/span[2]/span')
b
Then I got the result
{xml_nodeset (0)}
Then I checked the page source, and I didn't even find anything about the project I selected.
I was wondering why I cannot find it in the page source and, in turn, how I can get the node with rvest.
It makes an XHR request for the content. Just work with that data (it's pretty clean):
library(httr)
library(dplyr)   # for the %>% pipe and glimpse()

POST('http://www.cpppc.org:8082/efmisweb/ppp/projectLivrary/getPPPList.do?tokenid=null',
     encode = "form",
     body = list(queryPage = 1,
                 distStr = "",
                 induStr = "",
                 investStr = "",
                 projName = "",
                 sortby = "",
                 orderby = "",
                 stageArr = "")) -> res

content(res, as = "text") %>%
  jsonlite::fromJSON(flatten = TRUE) %>%
  dplyr::glimpse()
(StackOverflow isn't advanced enough to let me post the output of that as it thinks it's spam).
It's a 4 element list with fields totalCount, list (which has the actual data), currentPage and totalPage.
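For reference, a small sketch of peeking at that structure, assuming `res` from the POST above:
# Parse the response and inspect the four top-level fields
dat <- jsonlite::fromJSON(content(res, as = "text"), flatten = TRUE)
str(dat, max.level = 1)   # totalCount, list, currentPage, totalPage
head(dat$list)            # the actual project records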
It looks like you can change the queryPage form variable to iterate through the pages to get the whole list/database, something like:
library(httr)
library(purrr)
library(dplyr)
get_page <- function(page_num = 1, .pb = NULL) {

  # tick the progress bar if one was supplied
  if (!is.null(.pb)) .pb$tick()$print()

  POST('http://www.cpppc.org:8082/efmisweb/ppp/projectLivrary/getPPPList.do?tokenid=null',
       encode = "form",
       body = list(queryPage = page_num,
                   distStr = "",
                   induStr = "",
                   investStr = "",
                   projName = "",
                   sortby = "",
                   orderby = "",
                   stageArr = "")) -> res

  content(res, as = "text") %>%
    jsonlite::fromJSON(flatten = TRUE) -> dat

  dat$list
}
n <- 5 # change this to the value in `totalPage`
pb <- progress_estimated(n)
df <- map_df(1:n, get_page, .pb = pb)
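If you'd rather not hard-code n, a hedged variant reads totalPage from an initial request (same endpoint and form fields as above; it assumes get_page() and the libraries from the previous block, and that the server reports totalPage as a number):
# Ask the first page for the total number of pages, then iterate over all of them
first <- POST('http://www.cpppc.org:8082/efmisweb/ppp/projectLivrary/getPPPList.do?tokenid=null',
              encode = "form",
              body = list(queryPage = 1, distStr = "", induStr = "", investStr = "",
                          projName = "", sortby = "", orderby = "", stageArr = ""))
n <- jsonlite::fromJSON(content(first, as = "text"), flatten = TRUE)$totalPage
pb <- progress_estimated(n)
df <- map_df(1:n, get_page, .pb = pb)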

Converting JSON file to data.frame

I'm having a heck of a time trying to convert a JSON file to a data frame. I have searched and tried to apply others' code to my example, but none of it seems to fit. The output always ends up as a list instead of a data frame.
library(jsonlite)
library(RCurl)   # getURL()

URL <- getURL("http://scores.nbcsports.msnbc.com/ticker/data/gamesMSNBC.js.asp?xml=true&sport=NBA&period=20160104")
URLP <- fromJSON(URL, simplifyDataFrame = TRUE, flatten = FALSE)
URLP
Here is the format the result always ends up in:
$games
[1] "<ticker-entry gamecode=\"2016010405\" gametype=\"Regular Season\"><visiting-team display_name=\"Toronto\" alias=\"Tor\" nickname=\"Raptors\" id=\"28\" division=\"ECA\" conference=\"EC\" score=\"\"><score heading=\"\" value=\"0\" team-fouls=\"0\"></score><team-record wins=\"21\" losses=\"14\"></team-record><team-logo link=\"http://hosted.stats.com/nba/logos/nba_50x33/Toronto_Raptors.png\" gz-image=\"http://hosted.stats.com/GZ/images/NBAlogos/TorontoRaptors.png\"></team-logo></visiting-team><home-team display_name=\"Cleveland\" alias=\"Cle\" nickname=\"Cavaliers\" id=\"5\" division=\"ECC\" conference=\"EC\" score=\"\"><score heading=\"\" value=\"0\" team-fouls=\"0\"></score><team-record wins=\"22\" losses=\"9\" ties=\"\"></team-record><team-logo link=\"http://hosted.stats.com/nba/logos/nba_50x33/Cleveland_Cavaliers.png\" gz-image=\"http://hosted.stats.com/GZ/images/NBAlogos/ClevelandCavaliers.png\"></team-logo></home-team><gamestate status=\"Pre-Game\" display_status1=\"7:00 PM\" display_status2=\"\" href=\"http://scores.nbcsports.msnbc.com/nba/preview.asp?g=2016010405\" tv=\"FSOH/SNT\" gametime=\"7:00 PM\" gamedate=\"1/4\" is-dst=\"0\" is-world-dst=\"0\"></gamestate></ticker-entry>"
With regard to @jbaums' comment, you could try:
library(jsonlite)
library(RCurl)
library(dplyr)
library(XML)
URL <- getURL("http://scores.nbcsports.msnbc.com/ticker/data/gamesMSNBC.js.asp?xml=true&sport=NBA&period=20160104")
# each element of $games is an XML string: parse it, flatten it to a named
# vector, and turn that into a one-row data frame
lst <- lapply(fromJSON(URL)$games,
              function(x) as.data.frame(t(unlist(xmlToList(xmlParse(x)))),
                                        stringsAsFactors = FALSE))
df <- bind_rows(lst)
View(df)
... in theory. However, as @hrbrmstr pointed out: practically, this would violate the website owner's terms of service.