I'm trying to scrape a public data provider, but I got stuck when I had to click on a button that passes a parameter to the JavaScript. Here's my attempt:
require(rvest)
url <- 'https://myterna.terna.it/SunSet/Public/'
page <- url %>% read_html()
node_link <- page %>% html_node('.sub-item:nth-child(1) .postlink')
In node_link I can easily find the target page as the href of this HTML tag:
<a href="/SunSet/Public/Pubblicazioni"
class="postlink"
data-params="filter.IdSezione=52767620567B3077E053A8829B0A9478">
The point is that I cannot easily retrieve the content of the linked page because there are other buttons that point to the same link. The only difference between the various buttons is the data-params attribute which probably has to be given to the JS in order to retrieve the specific content.
Any ideas on how to solve the issue?
Obligatory heads-up:
It's not really clear whether the site allows scraping; the Legal Notice says: "Authorization is granted for the reproduction of documents published on this website exclusively for personal use and not for commercial purposes, provided the name of the source is properly indicated."
Use this respecting their terms of service.
Inspecting the network activity when clicking on that link, we can see that the webpage makes a POST request to https://myterna.terna.it/SunSet/Public/Pubblicazioni/List. We can find both the request headers and the params sent.
par <- '{"draw":1,"columns":[{"data":0,"name":"","searchable":true,"orderable":true,"search":{"value":"","regex":false}},{"data":1,"name":"","searchable":true,"orderable":true,"search":{"value":"","regex":false}},{"data":2,"name":"","searchable":false,"orderable":false,"search":{"value":"","regex":false}},{"data":3,"name":"","searchable":false,"orderable":false,"search":{"value":"","regex":false}},{"data":4,"name":"","searchable":false,"orderable":false,"search":{"value":"","regex":false}},{"data":5,"name":"","searchable":false,"orderable":false,"search":{"value":"","regex":false}},{"data":6,"name":"","searchable":false,"orderable":false,"search":{"value":"","regex":false}},{"data":7,"name":"","searchable":false,"orderable":false,"search":{"value":"","regex":false}}],"order":[],"start":0,"length":10,"search":{"value":"","regex":false},"filter":{"IdSezione":"52767620567B3077E053A8829B0A9478","Titolo":"","Id":"","ExtKey":"","TipoPubblicazione":"","SheetName":"","Anno":"2017","Mese":"7","Giorno":"","DataPubblicazione":"","TipoDatoPubblicazione":""},"details":{}}'
This is JSON, so we can parse it and change its values if we want (although I tried a few different filters and it doesn't respond to them much):
par <- jsonlite::fromJSON(par)
par$filter$Mese <- '7'
As for headers, only X-Requested-With: XMLHttpRequest is really needed, so we can cut the request down to that.
library(httr)
response <- POST('https://myterna.terna.it/SunSet/Public/Pubblicazioni/List',
add_headers('X-Requested-With' = 'XMLHttpRequest'),
body = par,
encode = 'json')
json_data <- content(response)$data
This returns a list that we can safely transform into a dataframe for convenient use:
df <- data.frame(matrix(unlist(json_data), nrow=length(json_data), byrow=TRUE))
head(df, 2)
#> X1
#> 1 SbilanciamentoAggregatoZonale_SegnoGiornaliero_Orario_20170709
#> 2 SbilanciamentoAggregatoZonale_SegnoGiornaliero_QuartoOrario_20170709
#> X2
#> 1 /Date(1499680800000)/
#> 2 /Date(1499680800000)/
#> X3
#> 1 <div class="actions detail-inline export" data-pk="53F4A57FCB70304EE0532A889B0A7758"></div>
#> 2 <div class="actions detail-inline export" data-pk="53F4A57FCB6D304EE0532A889B0A7758"></div>
#> X4 X5 X6
#> 1 53F4A57FCB70304EE0532A889B0A7758 25 SEGNO_MACROZONALE_ORARIO
#> 2 53F4A57FCB6D304EE0532A889B0A7758 25 SEGNO_MACROZONALE_QUARTO_ORARIO
#> X7 X8
#> 1 Segno Giornaliero Orario
#> 2 Segno Giornaliero Quarto Orario
OK, basically I was missing how HTTP requests work. After some days of study I understood that the correct approach is to use the httr package as shown below.
First of all I retrieved all the settings needed from the public page:
lnkd_url <- paste0(dirname(dirname(url)),
node_link %>%
html_attr('href'))
lnkd_id <- strsplit(node_link %>%
html_attr('data-params'), '=')[[1]][2]
Then it is possible to launch the POST request to the target page:
lnkd_page <- POST(lnkd_url,
body = list('filter.IdSezione' = lnkd_id))
That's it!
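For completeness, here is a minimal sketch of how the two answers above can be combined: take the IdSezione scraped from the button and plug it into the filter of the List endpoint observed earlier, then rebuild the data frame the same way as before (this just re-uses objects defined in this thread, so adjust if the site has changed):
library(httr)
library(jsonlite)
# swap the scraped section id into the JSON body captured from the browser
par$filter$IdSezione <- lnkd_id
resp <- POST('https://myterna.terna.it/SunSet/Public/Pubblicazioni/List',
             add_headers('X-Requested-With' = 'XMLHttpRequest'),
             body = par,
             encode = 'json')
json_data <- content(resp)$data
df <- data.frame(matrix(unlist(json_data), nrow = length(json_data), byrow = TRUE))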
I want to load the table at the bottom of the following webpage into R, either as a dataframe or a table: https://www.lawschooldata.org/school/Yale%20University/18. My first instinct was to use the readHTMLTable function in the XML package:
library(XML)
url <- "https://www.lawschooldata.org/school/Yale%20University/18"
##warning message after next line
table <- readHTMLTable(url)
table
However, this returns an empty list and gives me the following warning:
Warning message:XML content does not seem to be XML: ''
I also tried adapting code I found here Scraping html tables into R data frames using the XML package. This worked for 5 of the 6 tables on the page, but just returned the header row and one row with values from the header row for the 6th table, which is the one I am interested in. Code below:
library(XML)
library(RCurl)
library(rlist)
theurl <- getURL("https://www.lawschooldata.org/school/Yale%20University/18",.opts = list(ssl.verifypeer = FALSE) )
tables <- readHTMLTable(theurl)
##generates a list of the 6 tables on the page
tables <- list.clean(tables, fun = is.null, recursive = FALSE)
##takes the 6th table, which is the one I am interested in
applicanttable <- tables[[6]]
##the problem is that this 6th table returns just the header row and one row of values
##equal to those in the header row
head(applicanttable)
Any insights would be greatly appreciated! For reference, I have also consulted the following posts that appear to have similar goals, but could not find a solution there:
Scraping html tables into R data frames using the XML package
Extracting html table from a website in R
The data is dynamically pulled from a nested JavaScript array inside a script tag when JavaScript runs in the browser. This doesn't happen when you use rvest, which retrieves the non-rendered content (what you see in view-source).
You can regex out the appropriate nested array and then re-construct the table by splitting out the rows, adding the appropriate headers and performing some data manipulations on various columns e.g. some columns contain html which needs to be parsed to obtain the desired value.
As some columns, e.g. Name, contain values which could be interpreted as file paths when using read_html, I use htmltidy to ensure they are handled as valid html.
N.B. If you use RSelenium then the page will render and you can just grab the table direct without reconstructing it.
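For reference, a minimal sketch of that RSelenium route, assuming a Selenium server is already running locally on port 4445 (the #applicants-table selector is the same one used for the headers further down):
library(RSelenium)
library(rvest)
# connect to an already-running Selenium server (e.g. the docker image)
remDr <- remoteDriver(remoteServerAddr = "localhost", port = 4445L, browserName = "firefox")
remDr$open(silent = TRUE)
remDr$navigate("https://www.lawschooldata.org/school/Yale%20University/18")
# once the page has rendered, the table can be read directly
applicants <- remDr$getPageSource()[[1]] %>%
  read_html() %>%
  html_node("#applicants-table") %>%
  html_table()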
TODO:
There are still some data type manipulations you could choose to apply to a few columns.
There is some more logic to be applied to ensure only the name is returned in the Name column. Take the case of df$Name[10]: this returns "Character and fitness issues" instead of Anxiousboy, because the required value actually sits in element.nextSibling.nextSibling of the p tag that gets selected. These infrequent edge cases need some additional logic built in. In this case, you might test for a particular string being returned and then resort to re-parsing with an xpath expression.
R:
library(rvest)
#> Loading required package: xml2
#> Warning: package 'xml2' was built under R version 4.0.3
library(stringr)
library(htmltidy)
#> Warning: package 'htmltidy' was built under R version 4.0.3
library(dplyr)
#>
#> Attaching package: 'dplyr'
#> The following objects are masked from 'package:stats':
#>
#> filter, lag
#> The following objects are masked from 'package:base':
#>
#> intersect, setdiff, setequal, union
get_value <- function(input) {
value <- tidy_html(input) %>%
read_html() %>%
html_node("a, p, span") %>%
html_text(trim = T)
result <- ifelse(is.na(value), input, value)
return(result)
}
tidy_result <- function(result) {
return(gsub("<.*", "", result))
}
page <- read_html("https://www.lawschooldata.org/school/Yale%20University/18")
s <- page %>% toString()
headers <- page %>%
html_nodes("#applicants-table th") %>%
html_text(trim = T)
s <- stringr::str_extract(s, regex("DataTable\\(\\{\n\\s+data:(.*\\n\\]\\n\\])", dotall = T)) %>%
gsub("\n", "", .)
rows <- stringr::str_extract_all(s, regex("(\\[.*?\\])", dotall = T))[[1]] %>% as.list()
df <- sapply(rows, function(x) {
stringr::str_match_all(x, "'(.*?)'")[[1]][, 2]
}) %>%
t() %>%
as_tibble(.name_repair = "unique")
#> New names:
#> * `` -> ...1
#> * `` -> ...2
#> * `` -> ...3
#> * `` -> ...4
#> * `` -> ...5
#> * ...
names(df) <- headers
df <- df %>%
rowwise() %>%
mutate(across(c("Name", "GRE", "URM", "$$$$"), .f = get_value)) %>%
mutate_at(c("Result"), tidy_result)
write.csv(df, "Yale Applications.csv")
Created on 2021-06-23 by the reprex package (v0.3.0)
Sample output: (screenshot omitted)
I am trying to use rvest to scrape one page of Google Scholar search results into a dataframe of author, paper title, year, and journal title.
The simplified, reproducible example below is code that searches Google Scholar for the example terms "apex predator conservation".
Note: to stay within the Terms of Service, I only want to process the first page of search results that I would get from a manual search. I am not asking about automation to scrape additional pages.
The following code already works to extract:
author
paper title
year
but it does not have:
journal title
I would like to extract the journal title and add it to the output.
library(rvest)
library(xml2)
library(selectr)
library(stringr)
library(jsonlite)
url_name <- 'https://scholar.google.com/scholar?hl=en&as_sdt=0%2C38&q=apex+predator+conservation&btnG=&oq=apex+predator+c'
wp <- xml2::read_html(url_name)
# Extract raw data
titles <- rvest::html_text(rvest::html_nodes(wp, '.gs_rt'))
authors_years <- rvest::html_text(rvest::html_nodes(wp, '.gs_a'))
# Process data
authors <- gsub('^(.*?)\\W+-\\W+.*', '\\1', authors_years, perl = TRUE)
years <- gsub('^.*(\\d{4}).*', '\\1', authors_years, perl = TRUE)
# Make data frame
df <- data.frame(titles = titles, authors = authors, years = years, stringsAsFactors = FALSE)
df
source: https://stackoverflow.com/a/58192323/8742237
So the output of that code looks like this:
#> titles
#> 1 [HTML][HTML] Saving large carnivores, but losing the apex predator?
#> 2 Site fidelity and sex-specific migration in a mobile apex predator: implications for conservation and ecosystem dynamics
#> 3 Effects of tourism-related provisioning on the trophic signatures and movement patterns of an apex predator, the Caribbean reef shark
#> authors years
#> 1 A Ordiz, R Bischof, JE Swenson 2013
#> 2 A Barnett, KG Abrantes, JD Stevens, JM Semmens 2011
Two questions:
How can I add a column that has the journal title extracted from the raw data?
Is there a reference where I can read and learn more about how to work out how to extract other fields for myself, so I don't have to ask here?
One way to add them is this:
library(rvest)
library(xml2)
library(selectr)
library(stringr)
library(jsonlite)
library(purrr) # for map_chr()/map() used below
url_name <- 'https://scholar.google.com/scholar?hl=en&as_sdt=0%2C38&q=apex+predator+conservation&btnG=&oq=apex+predator+c'
wp <- xml2::read_html(url_name)
# Extract raw data
titles <- rvest::html_text(rvest::html_nodes(wp, '.gs_rt'))
authors_years <- rvest::html_text(rvest::html_nodes(wp, '.gs_a'))
# Process data
authors <- gsub('^(.*?)\\W+-\\W+.*', '\\1', authors_years, perl = TRUE)
years <- gsub('^.*(\\d{4}).*', '\\1', authors_years, perl = TRUE)
leftovers <- authors_years %>%
str_remove_all(authors) %>%
str_remove_all(years)
journals <- str_split(leftovers, "-") %>%
map_chr(2) %>%
str_extract_all("[:alpha:]*") %>%
map(function(x) x[x != ""]) %>%
map(~paste(., collapse = " ")) %>%
unlist()
# Make data frame
df <- data.frame(titles = titles, authors = authors, years = years, journals = journals, stringsAsFactors = FALSE)
For your second question: the SelectorGadget Chrome extension is nice for getting the CSS selectors of the elements you want. But in your case all elements share the same CSS class, so the only way to disentangle them is to use regex. So I guess learn a bit about CSS selectors and regex :)
So I am fairly new to the topic of webscraping. I am trying to find all the hyperlinks that the html code of the following page contains:
https://www.exito.com/mercado/lacteos-huevos-y-refrigerados/leches
So this is what I tried:
library(rvest)
url <- "https://www.exito.com/mercado/lacteos-huevos-y-refrigerados/leches"
webpage <- read_html(url)
html_attr(html_nodes(webpage, "a"), "href")
The result only contains about 6 links, but just by viewing the page you can see that there are a lot more hyperlinks.
For example the code behind the first image has something like: <a href="/leche-entera-sixpack-en-bolsa-x-11-litros-cu-807650/p" class="vtex-product-summary-2-x-clearLink h-100 flex flex-column"> ...
What am I doing wrong?
You won't be able to get the a tags you're after because that part of the website is not visible to html/xml parsers. It's a dynamic part of the website that changes as you navigate; the only 'static' part is the top header, which is why you only got 6 a tags: the six a tags from the header.
For this, we need to mimic the behavior of a browser (firefox, chrome, etc...), go into the website (note that we're not entering the website as an html/xml parser but as a 'user' through a browser) and read the html/xml source code from there.
For this we'll need the R package RSelenium. Make sure you install it correctly together with docker, as none of the code below can work without it.
After you install RSelenium and docker, run docker run -d -p 4445:4444 selenium/standalone-firefox:2.53.1 from your terminal (if on Linux, you can run this in the terminal; if on Windows you'll have to download a Docker terminal and run it there). After that you're all set to reproduce the code below.
Why your approach didn't work
We need to access the 5th div tag, which you can see in the browser's dev tools:
As you can see, this 5th div tag has three dots (...) inside, denoting that there's code inside: this is precisely where all of the bottom part of the website is (including the a tags that you're after). If we tried to access this 5th tag using rvest or xml2, we won't find anything:
library(xml2)
library(dplyr)
#>
#> Attaching package: 'dplyr'
#> The following objects are masked from 'package:stats':
#>
#> filter, lag
#> The following objects are masked from 'package:base':
#>
#> intersect, setdiff, setequal, union
lnk <- "https://www.exito.com/mercado/lacteos-huevos-y-refrigerados/leches?page=2"
# Note how the 5th div element is empty and it should contain the lower
# part of the website
lnk %>%
read_html() %>%
xml_find_all("//div[#class='flex flex-grow-1 w-100 flex-column']") %>%
xml_children()
#> {xml_nodeset (6)}
#> [1] <div class=""></div>\n
#> [2] <div class=""></div>\n
#> [3] <div class=""></div>\n
#> [4] <div class=""></div>\n
#> [5] <div class=""></div>\n
#> [6] <div class=""></div>
Note how the 5th div tag doesn't have any code inside. A simple html/xml parser won't catch it.
How it can work
We need to use RSelenium. After you've installed everything correctly, we need to setup a 'remote driver', open it and navigate to the website. All of these steps are just to make sure that we're coming into the website as a 'normal' user from a browser. This will make sure that we can access the rendered code that we actually see when we enter the website. Below are the detailed steps from entering the website and constructing the links.
# Make sure you install docker correctly: https://docs.ropensci.org/RSelenium/articles/docker.html
library(RSelenium)
# After installing docker and before running the code, make sure you run
# the rselenium docker image: docker run -d -p 4445:4444 selenium/standalone-firefox:2.53.1
# Now, set up your remote driver
remDr <- remoteDriver(
remoteServerAddr = "localhost",
port = 4445L,
browserName = "firefox"
)
# Initiate the driver
remDr$open(silent = TRUE)
# Navigate to the exito.com website
remDr$navigate(lnk)
prod_links <-
# Get the html source code
remDr$getPageSource()[[1]] %>%
read_html() %>%
# Find all a tags which have a certain class
# I searched for this tag manually on the website code and saw that all products
# had an a tag that shared the same class
xml_find_all("//a[#class='vtex-product-summary-2-x-clearLink h-100 flex flex-column']") %>%
# Extract the href attribute
xml_attr("href") %>%
paste0("https://www.exito.com", .)
prod_links
#> [1] "https://www.exito.com/leche-semidescremada-deslactosada-en-bolsa-x-900-ml-145711/p"
#> [2] "https://www.exito.com/leche-entera-en-bolsa-x-900-ml-145704/p"
#> [3] "https://www.exito.com/leche-entera-sixpack-x-1300-ml-cu-987433/p"
#> [4] "https://www.exito.com/leche-deslactosada-en-caja-x-1-litro-878473/p"
#> [5] "https://www.exito.com/leche-polvo-deslactos-semidesc-764522/p"
#> [6] "https://www.exito.com/leche-slight-sixpack-en-caja-x-1050-ml-cu-663528/p"
#> [7] "https://www.exito.com/leche-semidescremada-sixpack-en-caja-x-1050-ml-cu-663526/p"
#> [8] "https://www.exito.com/leche-descremada-sixpack-x-1300-ml-cu-563046/p"
#> [9] "https://www.exito.com/of-leche-deslact-pag-5-lleve-6-439057/p"
#> [10] "https://www.exito.com/sixpack-de-leche-descremada-x-1100-ml-cu-414454/p"
#> [11] "https://www.exito.com/leche-en-polvo-klim-fortificada-360g-239085/p"
#> [12] "https://www.exito.com/leche-deslactosada-descremada-en-caja-x-1-litro-238291/p"
#> [13] "https://www.exito.com/leche-deslactosada-en-caja-x-1-litro-157334/p"
#> [14] "https://www.exito.com/leche-entera-larga-vida-en-caja-x-1-litro-157332/p"
#> [15] "https://www.exito.com/leche-en-polvo-klim-fortificada-780g-138121/p"
#> [16] "https://www.exito.com/leche-entera-en-bolsa-x-1-litro-125079/p"
#> [17] "https://www.exito.com/leche-entera-en-bolsa-sixpack-x-11-litros-cu-59651/p"
#> [18] "https://www.exito.com/leche-deslactosada-descremada-sixpack-x-11-litros-cu-22049/p"
#> [19] "https://www.exito.com/leche-entera-en-polvo-instantanea-x-760-gr-835923/p"
#> [20] "https://www.exito.com/of-alpin-cja-cho-pag9-llev12/p"
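One small addition: when you are done, it's worth closing the browser session so the docker container isn't left holding it open:
# close the remote browser session when finished
remDr$close()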
Hope this answers your questions
The data, including the urls, are returned dynamically from a GraphQL query you can observe in the network tab when clicking Mostrar más on the page. This is why the content is not present in your initial query - it has not yet been requested.
The relevant XHR shows up in the network tab of dev tools, and its query string carries the actual query params (screenshots of the request omitted here).
You can do away with most of the request info. What you do need is the extensions param. More specifically, you need to provide the sha256Hash and the base64 encoded string value associated with the variables key in the persistedQuery.
The SHA256 Hash
The appropriate hash can be extracted from at least one of the js files which essentially governs the setup. An example file you can use is:
https://exitocol.vtexassets.com/_v/public/assets/v1/published/bundle/public/react/asset.min.js?v=1&files=vtex.store-resources#0.38.0,OrderFormContext,Mutations,Queries,PWAContext&files=exitocol.store-components#0.0.2,common,11,3,SearchBar&files=vtex.responsive-values#0.2.0,common,useResponsiveValues&files=vtex.slider#0.7.3,common,0,Dots,Slide,Slider,SliderContainer&files=exito.components#4.0.7,common,0,1,3,4&workspace=master.
The query hash can be regexed out of the response text of an XHR request to this URI. The regex is explained at https://regex101.com/r/VdC27H/5 (also linked in the Python code below), and the first match is sufficient; the pattern is query\s+productSearch.*?hash:\s+"(.*?)".
To apply this in R with stringr, you will need some extra escapes, e.g. \s becomes \\s.
The Base64 encoded product query
The base64 encoded string you can generate yourself with an appropriate library, e.g. there is a base64encode function in the caTools R package.
The encoded string looks like (depending on page/result batch):
eyJ3aXRoRmFjZXRzIjpmYWxzZSwiaGlkZVVuYXZhaWxhYmxlSXRlbXMiOmZhbHNlLCJza3VzRmlsdGVyIjoiQUxMX0FWQUlMQUJMRSIsInF1ZXJ5IjoiMTQ4IiwibWFwIjoicHJvZHVjdENsdXN0ZXJJZHMiLCJvcmRlckJ5IjoiT3JkZXJCeVRvcFNhbGVERVNDIiwiZnJvbSI6MjAsInRvIjozOX0=
Decoded:
{"withFacets":false,"hideUnavailableItems":false,"skusFilter":"ALL_AVAILABLE","query":"148","map":"productClusterIds","orderBy":"OrderByTopSaleDESC","from":20,"to":39}
The from and to params are the offsets for the result batches, which come in batches of twenty. So you can write one function that returns the appropriate sha256 hash and another that sends the subsequent request for product info, where you base64 encode the string above (with the appropriate library) and alter the from and to params as required. Potentially others as well (have a play!).
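If you want to check what a given extensions blob contains, here is a small sketch decoding the example string above with jsonlite (the R example further down goes the other way, encoding with caTools::base64encode):
library(jsonlite)
enc <- "eyJ3aXRoRmFjZXRzIjpmYWxzZSwiaGlkZVVuYXZhaWxhYmxlSXRlbXMiOmZhbHNlLCJza3VzRmlsdGVyIjoiQUxMX0FWQUlMQUJMRSIsInF1ZXJ5IjoiMTQ4IiwibWFwIjoicHJvZHVjdENsdXN0ZXJJZHMiLCJvcmRlckJ5IjoiT3JkZXJCeVRvcFNhbGVERVNDIiwiZnJvbSI6MjAsInRvIjozOX0="
# decode back to the JSON shown above and parse it
json_txt <- rawToChar(base64_dec(enc))
fromJSON(json_txt)  # withFacets, hideUnavailableItems, ..., from = 20, to = 39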
The XHR response:
The response is JSON, so you might need a json library (e.g. jsonlite) to handle the result (UPDATE: it seems you don't with R and httr). You can extract the links from a list of dictionaries nested within result['data']['products'], as per the Python example, where result is the json object retrieved from the XHR with the from and to params.
Examples:
Examples using R and Python are shown below (N.B. I am less familiar with R). The above has been kept fairly language agnostic.
Bear in mind, whilst I am extracting the urls, the json returned has a lot more info including product title, price, image info etc.
Example output: (screenshot omitted)
TODO:
Add in error handling
Use Session objects to benefit from re-use of underlying tcp connection especially if making multiple requests to get all products
Add in functionality to return the total product number and a loop structure to retrieve all of them (see the sketch after the R example below; the Python example might benefit from a decorator)
R (a quick first go):
library(purrr)
library(stringr)
library(caTools)
library(httr)
get_links <- function(sha, start, end){
string = paste0('{"withFacets":false,"hideUnavailableItems":false,"skusFilter":"ALL_AVAILABLE","query":"148","map":"productClusterIds","orderBy":"OrderByTopSaleDESC","from":' , start , ',"to":' , end , '}')
base64encoded <- caTools::base64encode(string)
params = list(
'extensions' = paste0('{"persistedQuery":{"version":1,"sha256Hash":"' , sha , '","sender":"vtex.store-resources#0.x","provider":"vtex.search-graphql#0.x"},"variables":"' , base64encoded , '"}')
)
product_info <- content(httr::GET(url = 'https://www.exito.com/_v/segment/graphql/v1', query = params))$data$products
links <- map(product_info, ~{
.x %>% .$link
})
return(links)
}
start <- '0'
end <- '19'
sha <- httr::GET('https://exitocol.vtexassets.com/_v/public/assets/v1/published/bundle/public/react/asset.min.js?v=1&files=vtex.store-resources#0.38.0,OrderFormContext,Mutations,Queries,PWAContext&files=exitocol.store-components#0.0.2,common,11,3,SearchBar&files=vtex.responsive-values#0.2.0,common,useResponsiveValues&files=vtex.slider#0.7.3,common,0,Dots,Slide,Slider,SliderContainer&files=exito.components#4.0.7,common,0,1,3,4&workspace=master') %>%
content(., as = "text")%>% str_match(.,'query\\s+productSearch.*?hash:\\s+"(.*?)"')%>% .[[2]]
links <- get_links(sha, start, end)
print(links)
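Following up on the TODO about retrieving all products, a rough sketch of looping over batches of twenty with the get_links function above (the upper bound here is made up for illustration; in practice you would first read the total product count out of the json):
# request results in batches of 20: 0-19, 20-39, 40-59, ...
starts <- seq(0, 40, by = 20)  # illustrative upper bound only
all_links <- purrr::map(starts, function(s) {
  get_links(sha, as.character(s), as.character(s + 19))
}) %>%
  purrr::flatten() %>%
  unlist()
length(all_links)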
Py:
import requests, base64, re, json
def get_sha():
r = requests.get('https://exitocol.vtexassets.com/_v/public/assets/v1/published/bundle/public/react/asset.min.js?v=1&files=vtex.store-resources#0.38.0,OrderFormContext,Mutations,Queries,PWAContext&files=exitocol.store-components#0.0.2,common,11,3,SearchBar&files=vtex.responsive-values#0.2.0,common,useResponsiveValues&files=vtex.slider#0.7.3,common,0,Dots,Slide,Slider,SliderContainer&files=exito.components#4.0.7,common,0,1,3,4&workspace=master')
p = re.compile(r'query\s+productSearch.*?hash:\s+"(.*?)"') #https://regex101.com/r/VdC27H/5
sha = p.findall(r.text)[0]
return sha
def get_json(sha, start, end):
#these 'from' and 'to' values correspond with page # as pages cover batches of 20 e.g. start 20 end 39
string = '{"withFacets":false,"hideUnavailableItems":false,"skusFilter":"ALL_AVAILABLE","query":"148","map":"productClusterIds","orderBy":"OrderByTopSaleDESC","from":' + start + ',"to":' + end + '}'
base64encoded = base64.b64encode(string.encode('utf-8')).decode()
params = (('extensions', '{"persistedQuery":{"sha256Hash":"' + sha + '","sender":"vtex.store-resources#0.x","provider":"vtex.search-graphql#0.x"},"variables":"' + base64encoded + '"}'),)
r = requests.get('https://www.exito.com/_v/segment/graphql/v1',params=params)
return r.json()
def get_links(sha, start, end):
result = get_json(sha, start, end)
links = [i['link'] for i in result['data']['products']]
return links
sha = get_sha()
links = get_links(sha, '0', '19')
#print(len(links))
print(links)
I'm trying to use the rvest package to scrape a table:
library(rvest)
x <- read_html ("http://www.jcb.jp/rate/usd04182016.html")
x %>% html_node(".CSVTable") %>% html_table
The URL's HTML elements look like:
<table class="CSVTable">
<tbody>...</tbody>
<tbody class>...</tbody>
</table>
Why do I get the error "No matches"?
You're in luck (kind of). The site uses dynamic XHR requests to make that table, but said request is also a CSV file.
library(rvest)
library(stringr)
pg <- read_html("http://www.jcb.jp/rate/usd04182016.html")
# the <script> tag that does the dynamic loading is in position 6 of the
# list of <script> tags
fil <- str_match(html_text(html_nodes(pg, "script")[6]), "(/uploads/[[:digit:]]+\\.csv)")[,2]
df <- read.csv(sprintf("http://www.jcb.jp%s", fil), header=FALSE, stringsAsFactors=FALSE)
df <- setNames(df[,3:6], c("buy", "mid", "sell", "symbol"))
head(df)
## buy mid sell symbol
## 1 3.6735 3.6736 3.6737 AED
## 2 68.2700 69.0700 69.8700 AFN
## 3 122.3300 122.6300 122.9300 ALL
## 4 479.5000 481.0000 482.5000 AMD
## 5 1.7710 1.8110 1.8510 ANG
## 6 165.0600 165.3100 165.5600 AOA
But, that also means you can just get the CSV directly:
read.csv("http://www.jcb.jp/uploads/20160418.csv")
(just format the date properly in your requests).
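For example, a small helper that builds the CSV url for an arbitrary date, assuming the uploads keep the YYYYMMDD naming pattern seen above:
jcb_rates <- function(date = Sys.Date()) {
  # e.g. http://www.jcb.jp/uploads/20160418.csv
  url <- sprintf("http://www.jcb.jp/uploads/%s.csv", format(as.Date(date), "%Y%m%d"))
  df <- read.csv(url, header = FALSE, stringsAsFactors = FALSE)
  setNames(df[, 3:6], c("buy", "mid", "sell", "symbol"))
}
head(jcb_rates("2016-04-18"))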
I am trying to scrape this table from this website using RCurl. I am able to do this and put it into a nice dataframe using the code:
library(RCurl)
library(XML)
clinVar <- getURL("http://www.ncbi.nlm.nih.gov/clinvar/?term=BRCA1")
docForm2 <- htmlTreeParse(clinVar,useInternalNodes = T)
xp_expr = "//table[#class= 'jig-ncbigrid docsum_table\']/tbody/tr"
nodes = getNodeSet(docForm2, xp_expr)
extractedData <- xmlToDataFrame(nodes)
colnames(extractedData) <- c("Info","Gene", "Variation","Freq", "Phenotype","Clinical significance","Status", "Chr","Location")
However, I can only extract the data on the first page, and the table spans multiple pages. How do you access data on the next page? I have looked at the HTML code for the website and the region that the "Next" button exists in is here (I believe!):
<a name="EntrezSystem2.PEntrez.clinVar.clinVar_Entrez_ResultsPanel.Entrez_Pager.Page" title="Next page of results" class="active page_link next" href="#" sid="3" page="3" accesskey="k" id="EntrezSystem2.PEntrez.clinVar.clinVar_Entrez_ResultsPanel.Entrez_Pager.Page">Next ></a>
I would like to know how to access this link using getURL, postForm etc. I think I should be doing something like this, to get data from the second page but it's still just giving me the first page:
url <- "http://www.ncbi.nlm.nih.gov/clinvar/?term=BRCA1"
clinVar <- postForm(url,
"EntrezSystem2.PEntrez.clinVar.clinVar_Entrez_ResultsPanel.Entrez_Pager.cPage" ="2")
docForm2 <- htmlTreeParse(clinVar,useInternalNodes = T)
xp_expr = "//table[#class= 'jig-ncbigrid docsum_table\']/tbody/tr"
nodes = getNodeSet(docForm2, xp_expr)
extractedData <- xmlToDataFrame(nodes)
colnames(extractedData) <- c("Info","Gene", "Variation","Freq","Phenotype","Clinical significance","Status", "Chr","Location")
Thanks to anyone who can help.
I would use E-utilities to access data at NCBI instead.
url <- "http://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?db=clinvar&term=brca1"
readLines(url)
[1] "<?xml version=\"1.0\" ?>"
[2] "<!DOCTYPE eSearchResult PUBLIC \"-//NLM//DTD eSearchResult, 11 May 2002//EN\" \"http://www.ncbi.nlm.nih.gov/entrez/query/DTD/eSearch_020511.dtd\">"
[3] "<eSearchResult><Count>1080</Count><RetMax>20</RetMax><RetStart>0</RetStart><QueryKey>1</QueryKey><WebEnv>NCID_1_36649974_130.14.18.34_9001_1386348760_356908530</WebEnv><IdList>"
Pass the QueryKey and WebEnv to esummary and get the XML summary (this changes with each esearch, so copy and paste the new keys into the url below)
url2 <- "http://eutils.ncbi.nlm.nih.gov/entrez/eutils/esummary.fcgi?db=clinvar&query_key=1&WebEnv=NCID_1_36649974_130.14.18.34_9001_1386348760_356908530"
brca1 <- xmlParse(url2)
Next, view a single record and then extract the fields you need. You may need to loop through the set if there are 0 to many values assigned to a tag. Others like clinical significance description always have 1 value.
getNodeSet(brca1, "//DocumentSummary")[[1]]
table(xpathSApply(brca1, "//clinical_significance/description", xmlValue) )
Benign conflicting data from submitters not provided other
129 22 6 1
Pathogenic probably not pathogenic probably pathogenic risk factor
508 68 19 43
Uncertain significance
284
Also, there are many packages with E-utilities on github and BioC (rentrez, reutils, genomes and others). Using the genomes package on BioC, this simplifies to
brca1 <- esummary( esearch("brca1", db="clinvar"), parse=FALSE )
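And with the rentrez package the same search-then-summarise flow looks roughly like this (a sketch; argument names follow rentrez's documented interface):
library(rentrez)
# search clinvar and keep the result set on NCBI's history server
res <- entrez_search(db = "clinvar", term = "brca1", use_history = TRUE)
res$count
# fetch summaries for the first batch of records via the web history
summaries <- entrez_summary(db = "clinvar", web_history = res$web_history, retmax = 20)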
Using the e-utilities feature on the NCBI database, see http://www.ncbi.nlm.nih.gov/books/NBK25500/ for more details.
## use eSearch feature in eUtilities to search NCBI for ids corresponding to each row of data.
## note: to see all ids, not just the top 10, set retmax to a high number
## to get query id and web env info, set usehistory=y
library(RCurl)
library(XML)
baseSearch <- ("http://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?db=") ## eSearch
db <- "clinvar" ## database to query
gene <- "BRCA1" ## gene of interest
query <- paste('[gene]+AND+"','clinsig pathogenic"','[Properties]+AND+"','single nucleotide variant"','[Type of variation]&usehistory=y&retmax=1110',sep="") ## query, see below for details
baseFetch <- "http://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi?db=" ## base fetch
searchURL <- paste(baseSearch,db, "&term=",gene,query,sep="")
getSearch <- getURL(searchURL)
searchHTML <- htmlTreeParse(getSearch, useInternalNodes =T) ## parse the search response retrieved above
nodes <- getNodeSet(searchHTML,"//querykey") ## this name "querykey" was extracted from the HTML source code for this page
querykey <- xmlToDataFrame(nodes)
nodes <- getNodeSet(searchHTML,"//webenv") ## this name "webenv" was extracted from the HTML source code for this page
webenv <- xmlToDataFrame(nodes)
fetchURL <- paste(baseFetch,db,"&query_key=",querykey,"&WebEnv=",webenv[[1]],"&rettype=docsum",sep="")
getFetch <- getURL(fetchURL)
fetchHTML <- htmlTreeParse(getFetch, useInternalNodes =T)
nodes <- getNodeSet(fetchHTML, "//position")
extractedDataAll <- xmlToDataFrame(nodes)
colnames(extractedDataAll) <- c("pathogenicSNPs")
print(extractedDataAll)
Please note, I found the query information by going to http://www.ncbi.nlm.nih.gov/clinvar/?term=BRCA1 selecting my filters (pathogenic, etc) and then clicking the advanced button. The most recent filters applied should come up in the main box, I used this for the query.
ClinVar now offers XML download of the whole database so webscraping is not necessary.