Persisting HTML documents to disk

I am trying to save about 300 HTML objects to disk using R.
str_url <- "https://www.holidayhouses.co.nz/Browse/List.aspx?page=1"
read_html_test1 <- xml2::read_html(str_url)
xml2::write_xml(read_html_test1, "testwrite.html")
read_html <- xml2::read_html("testwrite.html")
But this will eventually save about 300 separate files to disk. Ideally, what I would like is to save a single R object to disk that contains these 300 documents.
Converting each document to text before saving does not work for some reason. For example, the following produces a weird (unhelpful) error:
str_html <- as.character(read_html_test1)
xml2::read_html(str_html)
The output of xml2::read_html() is a pointer to a C structure, so it will not persist to disk directly.
Any suggestions for a hack to make this work...?

I managed it with the httr package, whose content function can take an as = "text" argument, which stops it from parsing the HTML.
library(xml2)
library(httr)
str_url <- "https://www.holidayhouses.co.nz/Browse/List.aspx?page=1"
# use `GET` to make the request, and pull out the html with `content`; returns text string
x <- content(GET(str_url), as = 'text')
# make a list of html documents to save
list_xs <- list(x, x)
# save list with `saveRDS`
saveRDS(list_xs, 'test.rds')
Now to see if it works:
# read in rds file we saved
saved_html <- readRDS('test.rds')
# parse the second element in it with `xml2::read_html`
saved_x_parsed <- read_html(saved_html[[2]])
# and let's see...
saved_x_parsed
# {xml_document}
# <html>
# [1] <head><title>\n\tNew Zealand holiday homes, baches and vacation homes for rent. \ ...
# [2] <body id="ctl00_Body" class="Page-List">\n    <div class="SatNavBarPlaceholder"/>&#13 ...
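Scaling this up to the ~300 pages is then just a loop over the URLs. A rough sketch (the page range and the Sys.sleep() pause are assumptions; adjust to your actual list of URLs):
library(httr)
# fetch each page as raw text and keep all of them in one list
urls <- paste0("https://www.holidayhouses.co.nz/Browse/List.aspx?page=", 1:300)
list_html <- lapply(urls, function(u) {
  Sys.sleep(1)  # be polite to the server
  content(GET(u), as = "text")
})
# one object on disk containing all ~300 documents
saveRDS(list_html, "holiday_houses_pages.rds")
# later: re-parse any page on demand
# page_17 <- xml2::read_html(readRDS("holiday_houses_pages.rds")[[17]])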

How to save R objects to disk: see Save R Objects.
I took your example code and produced working, human readable, R-loadable output as follows:
str_url <- "https://www.holidayhouses.co.nz/Browse/List.aspx?page=1"
read_html_test1 <- xml2::read_html(str_url)
str_html <- as.character(read_html_test1)
x <- xml2::read_html(str_html)
save(x, file="c:\\temp\\text.txt",compress=FALSE,ascii=TRUE)

Related

Trying to find hyperlinks by scraping

So I am fairly new to the topic of webscraping. I am trying to find all the hyperlinks that the html code of the following page contains:
https://www.exito.com/mercado/lacteos-huevos-y-refrigerados/leches
So this is what I tried:
url <- "https://www.exito.com/mercado/lacteos-huevos-y-refrigerados/leches"
webpage <- read_html(url)
html_attr(html_nodes(webpage, "a"), "href")
The result only contains about 6 links, but just by viewing the page you can see that there are a lot more hyperlinks.
For example the code behind the first image has something like: <a href="/leche-entera-sixpack-en-bolsa-x-11-litros-cu-807650/p" class="vtex-product-summary-2-x-clearLink h-100 flex flex-column"> ...
What am I doing wrong?
You won't be able to get the a tags you're after because that part of the website is not visible to HTML/XML parsers. It is a dynamic part of the page that changes as you navigate; the only 'static' part is the top header, which is why you only got six a tags: the six a tags from the header.
For this, we need to mimic the behavior of a browser (firefox, chrome, etc...), go into the website (note that we're not entering the website as an html/xml parser but as a 'user' through a browser) and read the html/xml source code from there.
For this we'll need the R package RSelenium. Make sure you install it correctly, together with Docker, as none of the code below will work without them.
After you install RSelenium and Docker, run docker run -d -p 4445:4444 selenium/standalone-firefox:2.53.1 from your terminal (on Linux you can run it in your usual terminal; on Windows you'll have to download a Docker terminal and run it there). After that you're all set to reproduce the code below.
Why your approach didn't work
We need to access the 5th div tag of the page (visible in the browser dev tools). This 5th div tag shows three dots (...) inside, denoting that there is collapsed code within: this is precisely where the whole bottom part of the website lives, including the a tags you're after. If we try to access this 5th tag using rvest or xml2, we won't find anything:
library(xml2)
library(dplyr)
#>
#> Attaching package: 'dplyr'
#> The following objects are masked from 'package:stats':
#>
#> filter, lag
#> The following objects are masked from 'package:base':
#>
#> intersect, setdiff, setequal, union
lnk <- "https://www.exito.com/mercado/lacteos-huevos-y-refrigerados/leches?page=2"
# Note how the 5th div element is empty and it should contain the lower
# part of the website
lnk %>%
  read_html() %>%
  xml_find_all("//div[@class='flex flex-grow-1 w-100 flex-column']") %>%
  xml_children()
#> {xml_nodeset (6)}
#> [1] <div class=""></div>\n
#> [2] <div class=""></div>\n
#> [3] <div class=""></div>\n
#> [4] <div class=""></div>\n
#> [5] <div class=""></div>\n
#> [6] <div class=""></div>
Note how the 5th div tag doesn't have any code inside. A simple html/xml parser won't catch it.
How it can work
We need to use RSelenium. After you've installed everything correctly, we need to setup a 'remote driver', open it and navigate to the website. All of these steps are just to make sure that we're coming into the website as a 'normal' user from a browser. This will make sure that we can access the rendered code that we actually see when we enter the website. Below are the detailed steps from entering the website and constructing the links.
# Make sure you install docker correctly: https://docs.ropensci.org/RSelenium/articles/docker.html
library(RSelenium)
# After installing docker and before running the code, make sure you run
# the rselenium docker image: docker run -d -p 4445:4444 selenium/standalone-firefox:2.53.1
# Now, set up your remote driver
remDr <- remoteDriver(
  remoteServerAddr = "localhost",
  port = 4445L,
  browserName = "firefox"
)
# Initiate the driver
remDr$open(silent = TRUE)
# Navigate to the exito.com website
remDr$navigate(lnk)
prod_links <-
  # Get the html source code
  remDr$getPageSource()[[1]] %>%
  read_html() %>%
  # Find all a tags which have a certain class
  # I searched for this tag manually in the website code and saw that all products
  # had an a tag sharing the same class
  xml_find_all("//a[@class='vtex-product-summary-2-x-clearLink h-100 flex flex-column']") %>%
  # Extract the href attribute
  xml_attr("href") %>%
  paste0("https://www.exito.com", .)
prod_links
#> [1] "https://www.exito.com/leche-semidescremada-deslactosada-en-bolsa-x-900-ml-145711/p"
#> [2] "https://www.exito.com/leche-entera-en-bolsa-x-900-ml-145704/p"
#> [3] "https://www.exito.com/leche-entera-sixpack-x-1300-ml-cu-987433/p"
#> [4] "https://www.exito.com/leche-deslactosada-en-caja-x-1-litro-878473/p"
#> [5] "https://www.exito.com/leche-polvo-deslactos-semidesc-764522/p"
#> [6] "https://www.exito.com/leche-slight-sixpack-en-caja-x-1050-ml-cu-663528/p"
#> [7] "https://www.exito.com/leche-semidescremada-sixpack-en-caja-x-1050-ml-cu-663526/p"
#> [8] "https://www.exito.com/leche-descremada-sixpack-x-1300-ml-cu-563046/p"
#> [9] "https://www.exito.com/of-leche-deslact-pag-5-lleve-6-439057/p"
#> [10] "https://www.exito.com/sixpack-de-leche-descremada-x-1100-ml-cu-414454/p"
#> [11] "https://www.exito.com/leche-en-polvo-klim-fortificada-360g-239085/p"
#> [12] "https://www.exito.com/leche-deslactosada-descremada-en-caja-x-1-litro-238291/p"
#> [13] "https://www.exito.com/leche-deslactosada-en-caja-x-1-litro-157334/p"
#> [14] "https://www.exito.com/leche-entera-larga-vida-en-caja-x-1-litro-157332/p"
#> [15] "https://www.exito.com/leche-en-polvo-klim-fortificada-780g-138121/p"
#> [16] "https://www.exito.com/leche-entera-en-bolsa-x-1-litro-125079/p"
#> [17] "https://www.exito.com/leche-entera-en-bolsa-sixpack-x-11-litros-cu-59651/p"
#> [18] "https://www.exito.com/leche-deslactosada-descremada-sixpack-x-11-litros-cu-22049/p"
#> [19] "https://www.exito.com/leche-entera-en-polvo-instantanea-x-760-gr-835923/p"
#> [20] "https://www.exito.com/of-alpin-cja-cho-pag9-llev12/p"
Hope this answers your questions
The data, including the urls, are returned dynamically from a GraphQL query you can observe in the network tab when clicking Mostrar más on the page. This is why the content is not present in your initial query - it has not yet been requested.
XHR for the product info
The relevant XHR, and the actual query params of its URL query string, can be inspected in the network tab of dev tools.
You can do away with most of the request info. What you do need is the extensions param. More specifically, you need to provide the sha256Hash and the base64 encoded string value associated with the variables key in the persistedQuery.
The SHA256 Hash
The appropriate hash can be extracted from at least one of the js files which essentially governs the set up. An example file you can use is:
https://exitocol.vtexassets.com/_v/public/assets/v1/published/bundle/public/react/asset.min.js?v=1&files=vtex.store-resources#0.38.0,OrderFormContext,Mutations,Queries,PWAContext&files=exitocol.store-components#0.0.2,common,11,3,SearchBar&files=vtex.responsive-values#0.2.0,common,useResponsiveValues&files=vtex.slider#0.7.3,common,0,Dots,Slide,Slider,SliderContainer&files=exito.components#4.0.7,common,0,1,3,4&workspace=master.
The query hash can be regex'd from the response text of an XHR request to this URI; the first match is sufficient (the regex is explained at the regex101 link in the Python code below). To apply it in R with stringr you will need some extra escapes, e.g. \s becomes \\s.
The Base64 encoded product query
You can generate the base64 encoded string yourself with the appropriate library; e.g. there is a base64encode function in the caTools package.
The encoded string looks like (depending on page/result batch):
eyJ3aXRoRmFjZXRzIjpmYWxzZSwiaGlkZVVuYXZhaWxhYmxlSXRlbXMiOmZhbHNlLCJza3VzRmlsdGVyIjoiQUxMX0FWQUlMQUJMRSIsInF1ZXJ5IjoiMTQ4IiwibWFwIjoicHJvZHVjdENsdXN0ZXJJZHMiLCJvcmRlckJ5IjoiT3JkZXJCeVRvcFNhbGVERVNDIiwiZnJvbSI6MjAsInRvIjozOX0=
Decoded:
{"withFacets":false,"hideUnavailableItems":false,"skusFilter":"ALL_AVAILABLE","query":"148","map":"productClusterIds","orderBy":"OrderByTopSaleDESC","from":20,"to":39}
The from and to params are the offsets for the result batches of products, which come in batches of twenty. So you can write one function that returns the appropriate sha256 hash and another that sends the subsequent request for product info, in which you base64 encode the string above (with the appropriate library) and alter the from and to params as required. Potentially other params as well (have a play!).
The XHR response is JSON, so you might need a JSON library (e.g. jsonlite) to handle the result (update: it seems you don't with R and httr). You can extract the links from the list of dictionaries nested under result['data']['products'], as in the Python example, where result is the JSON object retrieved from the XHR with the given from and to params.
Examples:
Examples using R and Python are shown below (N.B. I am less familiar with R). The above has been kept fairly language agnostic.
Bear in mind, whilst I am extracting the urls, the json returned has a lot more info including product title, price, image info etc.
Example output: a list of the product links (what print(links) shows below).
TODO:
Add in error handling
Use Session objects (or, in R, an explicit httr handle; see the sketch after this list) to benefit from re-use of the underlying TCP connection, especially if making multiple requests to get all products
Add in functionality to return total product number and loop structure to retrieve all (Python example might benefit from decorator)
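On the connection re-use point, a minimal httr sketch (not tested against this site; httr also pools handles per host automatically, so this may be redundant):
library(httr)
h <- handle("https://www.exito.com")  # one handle for the host, reused across requests
params <- list(extensions = "{}")     # stub; build the real value as in get_links() below
resp1 <- GET("https://www.exito.com/_v/segment/graphql/v1", handle = h, query = params)
resp2 <- GET("https://www.exito.com/_v/segment/graphql/v1", handle = h, query = params)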
R (a quick first go):
library(purrr)
library(stringr)
library(caTools)
library(httr)
get_links <- function(sha, start, end){
  string <- paste0('{"withFacets":false,"hideUnavailableItems":false,"skusFilter":"ALL_AVAILABLE","query":"148","map":"productClusterIds","orderBy":"OrderByTopSaleDESC","from":', start, ',"to":', end, '}')
  base64encoded <- caTools::base64encode(string)
  params <- list(
    'extensions' = paste0('{"persistedQuery":{"version":1,"sha256Hash":"', sha, '","sender":"vtex.store-resources#0.x","provider":"vtex.search-graphql#0.x"},"variables":"', base64encoded, '"}')
  )
  product_info <- content(httr::GET(url = 'https://www.exito.com/_v/segment/graphql/v1', query = params))$data$products
  links <- map(product_info, ~ .x$link)
  return(links)
}
start <- '0'
end <- '19'
sha <- httr::GET('https://exitocol.vtexassets.com/_v/public/assets/v1/published/bundle/public/react/asset.min.js?v=1&files=vtex.store-resources#0.38.0,OrderFormContext,Mutations,Queries,PWAContext&files=exitocol.store-components#0.0.2,common,11,3,SearchBar&files=vtex.responsive-values#0.2.0,common,useResponsiveValues&files=vtex.slider#0.7.3,common,0,Dots,Slide,Slider,SliderContainer&files=exito.components#4.0.7,common,0,1,3,4&workspace=master') %>%
  content(as = "text") %>%
  str_match('query\\s+productSearch.*?hash:\\s+"(.*?)"') %>%
  .[[2]]
links <- get_links(sha, start, end)
print(links)
Py:
import requests, base64, re, json
def get_sha():
    r = requests.get('https://exitocol.vtexassets.com/_v/public/assets/v1/published/bundle/public/react/asset.min.js?v=1&files=vtex.store-resources#0.38.0,OrderFormContext,Mutations,Queries,PWAContext&files=exitocol.store-components#0.0.2,common,11,3,SearchBar&files=vtex.responsive-values#0.2.0,common,useResponsiveValues&files=vtex.slider#0.7.3,common,0,Dots,Slide,Slider,SliderContainer&files=exito.components#4.0.7,common,0,1,3,4&workspace=master')
    p = re.compile(r'query\s+productSearch.*?hash:\s+"(.*?)"')  # https://regex101.com/r/VdC27H/5
    sha = p.findall(r.text)[0]
    return sha

def get_json(sha, start, end):
    # these 'from' and 'to' values correspond with page # as pages cover batches of 20 e.g. start 20 end 39
    string = '{"withFacets":false,"hideUnavailableItems":false,"skusFilter":"ALL_AVAILABLE","query":"148","map":"productClusterIds","orderBy":"OrderByTopSaleDESC","from":' + start + ',"to":' + end + '}'
    base64encoded = base64.b64encode(string.encode('utf-8')).decode()
    params = (('extensions', '{"persistedQuery":{"sha256Hash":"' + sha + '","sender":"vtex.store-resources#0.x","provider":"vtex.search-graphql#0.x"},"variables":"' + base64encoded + '"}'),)
    r = requests.get('https://www.exito.com/_v/segment/graphql/v1', params=params)
    return r.json()

def get_links(sha, start, end):
    result = get_json(sha, start, end)
    links = [i['link'] for i in result['data']['products']]
    return links

sha = get_sha()
links = get_links(sha, '0', '19')
# print(len(links))
print(links)

json parsing in r

I am loading a heavily nested JSON in R -- the seed data on league of legends games. Thanks to another question I was able to open and get a flat data frame (100 x 14167).
library(rjson)
library(plyr)
data.json <- fromJSON(file = "data/matches1.json")
data.unlist <- lapply(data.json$matches, unlist)
funct <- function(x){
  do.call("data.frame", as.list(x))
}
data.match <- rbind.fill(lapply(data.unlist, funct)) # takes ~15 min
data.frame <- as.data.frame(data.match)
However, most columns have the wrong type, and I run into anomalies when converting. Is there a way of converting the columns automatically to characters/factors or numerics? Or is this wishful thinking? :)
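One possible approach (a rough sketch, untested on this particular data set) is to run every column through utils::type.convert(), which guesses numeric/integer/logical types from the character values and leaves the rest as character:
# guess a sensible type for each column
data.frame[] <- lapply(data.frame, function(col) {
  type.convert(as.character(col), as.is = TRUE)
})
str(data.frame[, 1:10])  # spot-check the first few columns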

Cleaning Google search results in R

Brand new (like today) to both R and scraping, stackoverflow and tbh writing any sort of code so be gentle please.
I've managed to get a search to return an array (results) with all URLs from a Google search results page:
require(XML)
require(RCurl)
require(stringr)
xPath <- "//h3//a[@href]"
html <- getURL("http://google.com/search?q=site%3AneatlyformedpartofURL.com+somekeyword") # read in page contents
doc <- htmlParse(html) # parse HTML into tree structure
nodes <- xpathApply(doc, xPath, xmlAttrs) # extract url nodes using XPath.
results <- sapply(nodes, function(x) x[[1]]) # extract urls
free(doc) # free doc from memory
results
[1] "/url?q=http://www.neatlyformedpartofURL.com/some-page-ref1/&sa=U&ei=iSr2U-KhA4LH7AaLy4Ao&ved=0CBQQFjAA&usg=AFQjCNFTW0cOKDsALw_3I8g7e-q_6kTJ6g"
[2] "/url?q=http://www.neatlyformedpartofURL.com/some-page-ref2/&sa=U&ei=iSr2U-KhA4LH7AaLy4Ao&ved=0CBsQFjAB&usg=AFQjCNHtz7hGnkBlApSYLFgRr_baSTWldw"
BUT each result has junk before and after the actual URL. I have also managed to strip all the gubbins using:
l1 <- unlist(strsplit(results, split='?q=', fixed=TRUE))[2] # strip everything before the http://
l2 <- unlist(strsplit(l1[2], split='/&sa', fixed=TRUE))[1] # strip everything added by google after the url
Which will return:
[1] http://www.neatlyformedpartofURL.com/some-page-ref1
But that's it. It looks to me like the unlist(strsplit... is only actioning on the first result from the results array. I have a suspicion it may involve sapply but can anyone help me with the code to strip all the gubbins from all results in the array?
Ideally I should end up with...
[1] http://www.neatlyformedpartofURL.com/some-page-ref1
[2] http://www.neatlyformedpartofURL.com/some-page-ref2
Thanks awfully.
No need for multiple strsplits or sapply; just try the vectorized gsub:
gsub("(/url[?]q=)|(/&sa.*)", "", results)
## [1] "http://www.neatlyformedpartofURL.com/some-page-ref1"
## [2] "http://www.neatlyformedpartofURL.com/some-page-ref2"
Or, you could
library(stringr)
str_extract(results, perl('(?<=\\=).*(?=\\/)'))
#[1] "http://www.neatlyformedpartofURL.com/some-page-ref1"
#[2] "http://www.neatlyformedpartofURL.com/some-page-ref2"

File compression for and storing of HTML content

For HTML content retrieved via R, I wonder what (other) options I have with respect to either
file compression (maximum compression rate / minimum file size; the time it takes to compress is of secondary importance) when saving the content to disk
most efficiently storing the content (by whatever means, OS filesystem or DBMS)
My current findings are that gzfile offers the best compression rate in R. Can I do better? For example, I tried getting rid of unnecessary whitespace in the HTML code before saving, but it seems gzfile already takes care of that, as I don't end up with smaller file sizes in comparison.
Extended curiosity question:
How do search engines handle this problem? Or are they throwing away the code as soon as it has been indexed and thus something like this is not relevant for them?
Illustration
Getting example HTML code:
url_current <- "http://cran.at.r-project.org/web/packages/available_packages_by_name.html"
html <- readLines(url(url_current))
Saving to disk:
path_txt <- file.path(tempdir(), "test.txt")
path_gz <- gsub("\\.txt$", ".gz", path_txt)
path_rdata <- gsub("\\.txt$", ".rdata", path_txt)
path_rdata_2 <- gsub("\\.txt$", "_raw.rdata", path_txt)
write(html, file=path_txt)
write(html, file=gzfile(path_gz, "w"))
save(html, file=path_rdata)
html_raw <- charToRaw(paste(html, collapse="\n"))
save(html_raw, file=path_rdata_2)
Trying to remove unnecessary whitespace:
html_2 <- gsub("(>)\\s*(<)", "\\1\\2",html)
path_gz_2 <- gsub("\\.txt$", "_2.gz", path_txt)
write(html_2, gzfile(path_gz_2, "w"))
html_2 <- gsub("\\n", "", html_2)
path_gz_3 <- gsub("\\.txt$", "_3.gz", path_txt)
write(html_2, gzfile(path_gz_3, "w"))
Resulting file sizes:
files <- list.files(dirname(path_txt), full.names=TRUE)
fsizes <- file.info(files)$size
names(fsizes) <- sapply(files, basename)
> fsizes
test.gz test.rdata test.txt test_2.gz test_3.gz
164529 183818 849647 164529 164529
test_raw.rdata
164608
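Other base R compressed connections could be compared the same way; a small sketch (whether bzip2 or xz actually beats gzip here will depend on the content):
path_bz <- gsub("\\.txt$", ".bz2", path_txt)
path_xz <- gsub("\\.txt$", ".xz", path_txt)
write(html, file=bzfile(path_bz, "w"))
write(html, file=xzfile(path_xz, "w"))
save(html, file=gsub("\\.txt$", "_xz.rdata", path_txt), compress="xz")
file.info(c(path_gz, path_bz, path_xz))$size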
Checking validity of processed HTML code:
require("XML")
html_parsed <- htmlParse(html)
> xpathSApply(html_parsed, "//a[. = 'devtools']", xmlAttrs)
href
"../../web/packages/devtools/index.html"
## >> Valid HTML
html_2_parsed <- htmlParse(readLines(gzfile(path_gz_2)))
> xpathSApply(html_2_parsed, "//a[. = 'devtools']", xmlAttrs)
href
"../../web/packages/devtools/index.html"
## >> Valid HTML
html_3_parsed <- htmlParse(readLines(gzfile(path_gz_3)))
> xpathSApply(html_3_parsed, "//a[. = 'devtools']", xmlAttrs)
href
"../../web/packages/devtools/index.html"
## >> Valid HTML
html_2 <- gsub(">\\s*<", "", html)
strips away the > and <
Instead try:
html_2 <- gsub("(>)\\s*(<)", "\\1\\2",html)

How to click links onto the next page using RCurl?

I am trying to scrape a results table from the NCBI ClinVar website using RCurl. I am able to do this and put it into a nice data frame using the code:
library(RCurl)
library(XML)
clinVar <- getURL("http://www.ncbi.nlm.nih.gov/clinvar/?term=BRCA1")
docForm2 <- htmlTreeParse(clinVar, useInternalNodes = TRUE)
xp_expr <- "//table[@class='jig-ncbigrid docsum_table']/tbody/tr"
nodes <- getNodeSet(docForm2, xp_expr)
extractedData <- xmlToDataFrame(nodes)
colnames(extractedData) <- c("Info", "Gene", "Variation", "Freq", "Phenotype", "Clinical significance", "Status", "Chr", "Location")
However, I can only extract the data on the first page, and the table spans multiple pages. How do you access data on the next page? I have looked at the HTML code for the website and the region that the "Next" button exists in is here (I believe!):
<a name="EntrezSystem2.PEntrez.clinVar.clinVar_Entrez_ResultsPanel.Entrez_Pager.Page" title="Next page of results" class="active page_link next" href="#" sid="3" page="3" accesskey="k" id="EntrezSystem2.PEntrez.clinVar.clinVar_Entrez_ResultsPanel.Entrez_Pager.Page">Next ></a>
I would like to know how to access this link using getURL, postForm etc. I think I should be doing something like this, to get data from the second page but it's still just giving me the first page:
url <- "http://www.ncbi.nlm.nih.gov/clinvar/?term=BRCA1"
clinVar <- postForm(url,
"EntrezSystem2.PEntrez.clinVar.clinVar_Entrez_ResultsPanel.Entrez_Pager.cPage" ="2")
docForm2 <- htmlTreeParse(clinVar,useInternalNodes = T)
xp_expr = "//table[#class= 'jig-ncbigrid docsum_table\']/tbody/tr"
nodes = getNodeSet(docForm2, xp_expr)
extractedData <- xmlToDataFrame(nodes)
colnames(extractedData) <- c("Info","Gene", "Variation","Freq","Phenotype","Clinical significance","Status", "Chr","Location")
Thanks to anyone who can help.
I would use E-utilities to access data at NCBI instead.
url <- "http://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?db=clinvar&term=brca1"
readLines(url)
[1] "<?xml version=\"1.0\" ?>"
[2] "<!DOCTYPE eSearchResult PUBLIC \"-//NLM//DTD eSearchResult, 11 May 2002//EN\" \"http://www.ncbi.nlm.nih.gov/entrez/query/DTD/eSearch_020511.dtd\">"
[3] "<eSearchResult><Count>1080</Count><RetMax>20</RetMax><RetStart>0</RetStart><QueryKey>1</QueryKey><WebEnv>NCID_1_36649974_130.14.18.34_9001_1386348760_356908530</WebEnv><IdList>"
Pass the QueryKey and WebEnv to esummary and get the XML summary (this changes with each esearch, so copy and paste the new keys into the url below)
url2 <- "http://eutils.ncbi.nlm.nih.gov/entrez/eutils/esummary.fcgi?db=clinvar&query_key=1&WebEnv=NCID_1_36649974_130.14.18.34_9001_1386348760_356908530"
brca1 <- xmlParse(url2)
Next, view a single record and then extract the fields you need. You may need to loop through the set for tags that can have zero to many values; others, like the clinical significance description, always have exactly one value.
getNodeSet(brca1, "//DocumentSummary")[[1]]
table(xpathSApply(brca1, "//clinical_significance/description", xmlValue) )
Benign conflicting data from submitters not provided other
129 22 6 1
Pathogenic probably not pathogenic probably pathogenic risk factor
508 68 19 43
Uncertain significance
284
Also, there are many packages with E-utilities on github and BioC (rentrez, reutils, genomes and others). Using the genomes package on BioC, this simplifies to
brca1 <- esummary( esearch("brca1", db="clinvar"), parse=FALSE )
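For reference, a roughly equivalent flow with rentrez might look like this (a sketch from memory of that package's API, not run here):
library(rentrez)
# search clinvar and keep the result on NCBI's history server
es <- entrez_search(db = "clinvar", term = "brca1", use_history = TRUE)
# fetch document summaries for a first batch via the web history
summ <- entrez_summary(db = "clinvar", web_history = es$web_history, retmax = 20)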
This uses the E-utilities feature of the NCBI databases; see http://www.ncbi.nlm.nih.gov/books/NBK25500/ for more details.
## use eSearch feature in eUtilities to search NCBI for ids corresponding to each row of data.
## note: to see all ids, not just the top 10, set retmax to a high number
## to get query id and web env info, set usehistory=y
library(RCurl)
library(XML)
baseSearch <- ("http://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?db=") ## eSearch
db <- "clinvar" ## database to query
gene <- "BRCA1" ## gene of interest
query <- paste('[gene]+AND+"','clinsig pathogenic"','[Properties]+AND+"','single nucleotide variant"','[Type of variation]&usehistory=y&retmax=1110',sep="") ## query, see below for details
baseFetch <- "http://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi?db=" ## base fetch
searchURL <- paste(baseSearch,db, "&term=",gene,query,sep="")
getSearch <- getURL(searchURL)
searchHTML <- htmlTreeParse(searchURL, useInternalNodes =T)
nodes <- getNodeSet(searchHTML,"//querykey") ## this name "querykey" was extracted from the HTML source code for this page
querykey <- xmlToDataFrame(nodes)
nodes <- getNodeSet(searchHTML,"//webenv") ## this name "webenv" was extracted from the HTML source code for this page
webenv <- xmlToDataFrame(nodes)
fetchURL <- paste(baseFetch,db,"&query_key=",querykey,"&WebEnv=",webenv[[1]],"&rettype=docsum",sep="")
getFetch <- getURL(fetchURL)
fetchHTML <- htmlTreeParse(getFetch, useInternalNodes =T)
nodes <- getNodeSet(fetchHTML, "//position")
extractedDataAll <- xmlToDataFrame(nodes)
colnames(extractedDataAll) <- c("pathogenicSNPs")
print(extractedDataAll)
Please note, I found the query information by going to http://www.ncbi.nlm.nih.gov/clinvar/?term=BRCA1, selecting my filters (pathogenic, etc.) and then clicking the Advanced button. The most recently applied filters should come up in the main box; I used this for the query.
ClinVar now offers XML download of the whole database so webscraping is not necessary.
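A minimal sketch of working with the dump instead (the exact FTP path below is an assumption; check the ClinVar FTP site for the current file name):
# download the full ClinVar XML release (a large file) and parse it locally
clinvar_url <- "https://ftp.ncbi.nlm.nih.gov/pub/clinvar/xml/ClinVarFullRelease_00-latest.xml.gz"  # assumed path
download.file(clinvar_url, destfile = "ClinVarFullRelease.xml.gz", mode = "wb")
# parse with XML::xmlParse(), or stream it with XML::xmlEventParse() to keep memory use down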