How to correctly identify an HTML node

I want to scrape the price of a product on a webshop, but I'm struggling to identify the correct node for the price I want to scrape.
The relevant part of my code looks like this:
"https://www.surfdeal.ch/produkt/2019-aqua-marina-fusion-orange/"%>%
read_html()%>%
html_nodes('span.woocommerce-Price-amount.amount')%>%
html_text()
When executing this code, I do get prices as a result, but not the ones I want (it shows the prices of other products that are listed beneath).
How can I correctly identify the node for the price of the product itself (375.-)?

First: I don't know R.
This page uses JavaScript to add this price to the HTML, and I don't know whether rvest can run JavaScript.
But I found this value as JSON in <form data-product_variations="...">, and I could display the prices for all options:
library(rvest)
library(jsonlite)

data <- "https://www.surfdeal.ch/produkt/2019-aqua-marina-fusion-orange/" %>%
  read_html() %>%
  html_nodes('form.variations_form.cart') %>%
  html_attr('data-product_variations') %>%
  fromJSON()

data$display_price
data$display_regular_price
data$image$title
Result:
> data$display_price
[1] 479 375 439 479 479
> data$display_regular_price
[1] 699 549 629 699 699
> data$image$title
[1] "aqua marina fusion bamboo padddel"
[2] "aqua marina fusion aluminium padddel"
[3] "aqua marina fusion carbon padddel"
[4] "aqua marina fusion hibi padddel"
[5] "aqua marina fusion silver padddel"
> colnames(data)
[1] "attributes" "availability_html" "backorders_allowed"
[4] "dimensions" "dimensions_html" "display_price"
[7] "display_regular_price" "image" "image_id"
[10] "is_downloadable" "is_in_stock" "is_purchasable"
[13] "is_sold_individually" "is_virtual" "max_qty"
[16] "min_qty" "price_html" "sku"
[19] "variation_description" "variation_id" "variation_is_active"
[22] "variation_is_visible" "weight" "weight_html"
[25] "is_bookable" "number_of_dates" "your_discount"
[28] "gtin" "your_delivery"
EDIT:
To work with pages that use JavaScript you may need other tools, like phantomjs:
How to Scrape Data from a JavaScript Website with R | R-bloggers
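For example, here is a minimal sketch with RSelenium (my own assumption, not something from the linked post; phantomjs itself is deprecated, so a headless Firefox is driven the same way). It lets a real browser execute the page's JavaScript and then hands the rendered HTML to rvest:
library(RSelenium)
library(rvest)

# Start a browser driver (assumes a compatible driver is installed)
rD <- rsDriver(browser = "firefox", verbose = FALSE)
remDr <- rD$client

# Render the page in the browser, then parse the rendered HTML with rvest
remDr$navigate("https://www.surfdeal.ch/produkt/2019-aqua-marina-fusion-orange/")
rendered <- remDr$getPageSource()[[1]]

read_html(rendered) %>%
  html_nodes('span.woocommerce-Price-amount.amount') %>%
  html_text()

remDr$close()
rD$server$stop()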


How to scrape span info using Rvest in R

Usually when scraping websites, I use SelectorGadget. If that doesn't work, I inspect elements on the page.
However, I am running into a bit of trouble when trying to scrape this one website.
The HTML looks like this:
<div class="col-span-2 mt-16 sm:mt-4 flex justify-between sm:block space-x-12 font-bold"><span>103 m²</span><span>8 650 000 kr</span></div>
Elements that I want:
<span>103 m²</span>
<span>8 650 000 kr</span>
They look like this:
103 m²
8 650 000 kr
My simple R code:
library(rvest)

# The URL
url <- "https://www.finn.no/realestate/homes/search.html?page=%d&sort=PUBLISHED_DESC"
page_outside <- read_html(sprintf(url, 1))
element_1 <- page_outside %>% html_nodes("x") %>% html_text()  # "x" stands for the selector I'm missing
Anyone got any tips or ideas on how I can access these?
Thanks!
Here is a possibility: parse out the span nodes under a div with class "justify-between".
url = "https://www.finn.no/realestate/homes/search.html?page=%d&sort=PUBLISHED_DESC"
page_outside <- read_html(sprintf(url,1))
element_1 <- page_outside %>% html_elements("div.justify-between span")
element_1
{xml_nodeset (100)}
[1] <span>47 m²</span>
[2] <span>3 250 000 kr</span>
[3] <span>102 m²</span>
[4] <span>2 400 000 kr</span>
[5] <span>100 m²</span>
[6] <span>10 000 000 kr</span>
[7] <span>122 m²</span>
[8] <span>9 950 000 kr</span>
[9] <span>90 m²</span>
[10] <span>4 790 000 kr</span>
...
Update
If there is some missing data, then a slightly longer solution is needed to track which element is missing:
divs <- page_outside %>% html_elements("div.justify-between")
answer <- lapply(divs, function(node) {
  values <- node %>% html_elements("span") %>% html_text()
  if (length(values) == 2) {
    results <- t(values)
  } else if (grepl("kr", values)) {
    results <- c(NA, values)
  } else {
    results <- c(values, NA)
  }
  results
})
answer <- do.call(rbind, answer)
answer
[,1] [,2]
[1,] "87 m²" "2 790 000 kr"
[2,] "124 m²" "5 450 000 kr"
[3,] "105 m²" "4 500 000 kr"
[4,] "134 m²" "1 500 000 kr"

web-scraping: how to include quote character in HTML node

I am using the rvest package to scrape information from a website. Some information that I need belongs to the class iinfo" (the class name really ends in a double quote). Unfortunately, if I use this string inside the function html_nodes() I get the following error:
Error in parse_simple_selector(stream) :
Expected selector, got <STRING '' at 7>
Here's a reprex:
library(rvest)
library(xml2)
webpage <- read_html(x = paste0("https://www.gstsvs.ch/fr/trouver-un-medecin-veterinaire.html?tx_datapool_pi1%5Bhauptgebiet%5D=3&tx_datapool_pi1%5Bmapsearch%5D=cercare&tx_datapool_pi1%5BdoSearch%5D=1&tx_datapool_pi1%5Bpointer2303%5D=", 0))

webpage_address <- webpage %>%
  html_nodes('.iinfo"') %>%
  html_text() %>%
  gsub(pattern = "\r|\t|\n", replacement = " ")
That class refers to the addresses listed inside every box on the website. You can find it if you inspect the page structure in the browser and navigate to one of those boxes: when you select the address division with the mouse, a flag with div.iinfo\" appears.
Thanks a lot for your help!
Here:
webpage_address <- webpage %>%
  html_nodes(xpath = "//*[@class='iinfo\"']") %>%
  html_text(trim = TRUE)
Result:
> webpage_address
[1] "Anne-Françoise HenchozEnvers 412400 Le Locle, NE"
[2] "Téléphone: 032 931 10 10Urgences: 032 931 10 10Fax: 032 931 36 10afhenchoz(at)bluewin.chafhenchoz.com"
[3] "Ursi Dommann ScheuberHauptstrasse 156222 Gunzwil, LU"
[4] "Téléphone: 041 930 14 44tiergesundheit(at)bluewin.ch"
[5] "Dr. Med. Vet. Anne KramerBaggwilgraben 33267 Seedorf, BE"
[6] "Téléphone: 079 154 70 15anne(at)alpakavet.chwww.alpakavet.ch"
[7] "Dr. med. vet. Andrea FeistAdelbodenstrasse 103714 Frutigen, BE"
[8] "Téléphone: 033 671 15 60Urgences: 033 671 15 60Fax: 033 671 86 60alpinvet(at)bluewin.chwww.alpinvet.ch"
[9] "Dr. med. vet. Peter KürsteinerAlpsteinstr. 289240 Uzwil, SG"
[10] "Téléphone: 071 951 85 44"
[11] "Kathrin Urscheler-Hollenstein, Eveline Muhl-ZollingerSchaffhauserstrasse 2458222 Beringen, SH"
[12] "Téléphone: 052 685 20 20Fax: 052 685 34 20praxis(at)tieraerzte-team.chwww.tieraerzte-team.ch"
[13] "Dr. med. vet. Erwin VincenzVia Santeri 127130 Ilanz, GR"
[14] "Téléphone: 081/925 23 23Urgences: 081/925 23 23Fax: 081/925 23 25info(at)anima-veterinari.ch"
[15] "Dr. Zlatko MarinovicMühlerain 3853855072 oeschgen, AG"
[16] "Téléphone: 49628715060Urgences: 49628715060Fax: 49628712439z.marin(at)sunrise.ch"
[17] "Manser ChläusSchwalbenweg 73186 Düdingen, FR"
[18] "Téléphone: 026 493 10 60animans.tierarzt(at)gmail.com"
[19] "W.A.GeesBrünigstrasse 38aHauptstrasse 100, 3855 Brienz3860 Meiringen, BE"
[20] "Téléphone: 033 / 971 60 42Urgences: 033 / 971 60 42Fax: 033 / 971 01 50info(at)tierarzt-meiringen.chanisano.ch"

How To Extract Name in this HTML Element using rvest

I've searched through many rvest scraping posts but can't find an example like mine. I'm following the rvest vignette example (https://blog.rstudio.com/2014/11/24/rvest-easy-web-scraping-with-r/) for SelectorGadget, substituting my use case as necessary. None of SelectorGadget's suggestions get me what I need. I need to extract the name for each review on the page. A sample of what the name looks like under the hood is as follows:
<span itemprop="name" class="sg_selected">This Name</span>
Here's my code to this point. Ideally, this code should get me the individual names on this web page.
library(rvest)
library(dplyr)
dsa_reviews <-
  read_html("https://www.directsalesaid.com/companies/traveling-vineyard#reviews")
review_names <- html_nodes(dsa_reviews, '#reviews span')
df <- bind_rows(lapply(xml_attrs(review_names), function(x)
  data.frame(as.list(x), stringsAsFactors = FALSE)))
Apologies if this is a duplicate question or if it's not formatted correctly. Please feel free to request any necessary edits.
Here it is:
library(rvest)
library(dplyr)

dsa_reviews <-
  read_html("https://www.directsalesaid.com/companies/traveling-vineyard#reviews")

html_nodes(dsa_reviews, '[itemprop=name]') %>%
  html_text()
[1] "Traveling Vineyard" ""
[3] "Kiersten Ray-kuhn" "Miley Sama"
[5] " Nancy Shawtone " "Amanda Moore"
[7] "Matt" "Kathy Barzal"
[9] "Lesa Brinker" "Lori Stryker"
[11] "Jeanette Holtman" "Penny Notarnicola"
[13] "Laura Ann" "Nicole Lafave"
[15] "Gretchen Hess Miller" "Gina Devine"
[17] "Ashley Lawton Converse" "Morgan Williams"
[19] "Angela Baston Mckeone" "Traci Feshler"
[21] "Kisha Marshall Dlugos" "Jody Cole Dvorak"

how to just retrieve the titles from the query result using rvest

I use rvest to retrieve the titles from Google query results. My code is like this:
library(rvest)

url <- URLencode(paste0("https://www.google.com.au/search?q=", "600d"))
page <- read_html(url)
page %>%
  html_nodes("a") %>%
  html_text()
However, the result includes not just titles but other things as well, like:
[24] "Past month"
[25] "Past year"
[26] "Verbatim"
[27] "EOS 600D - Canon"
[28] "Similar"
[29] "Canon 600D | BIG W"
[30] "Cached"
[31] "Similar"
......
[45] ""
[46] ""
where what I need are [27] "EOS 600D - Canon" and [29] "Canon 600D | BIG W" (the result titles shown on the Google results page).
All of the others are just noise for me. Could anyone please tell me how to get rid of those?
Also, if I want the description part as well, what should I do?
To get just the titles, do not use <a> (link) nodes but <h3>:
page %>%
  html_nodes("h3") %>%
  html_text()
[1] "EOS 600D - Canon"
[2] "Canon EOS 600D - Wikipedia"
[3] "Canon 600D | BIG W"
[4] "Canon EOS 600D Digital SLR Camera with 18-55mm IS Lens Kit ..."
[5] "Canon Rebel T3i / EOS 600D Review: Digital Photography Review"
[6] "Canon EOS 600D review - CNET"
[7] "canon eos 600d | Cameras | Gumtree Australia Free Local Classifieds"
[8] "Images for 600d"
[9] "Canon 600D - Snapsort"
[10] "Canon EOS 600D - Georges Cameras"

Read HTML Table from Greyhound via R

I'm trying to read the HTML data regarding Greyhound bus timings. An example can be found here. I'm mainly concerned with getting the schedule and status data off the table, but when I execute the following code:
library(XML)

url <- "http://bustracker.greyhound.com/routes/4511/I/Chicago_Amtrak_IL-Cincinnati_OH/4511/10-26-2016"
greyhound <- readHTMLTable(url)
greyhound <- greyhound[[2]]
This just produces a table of unrelated data (screenshot omitted). I'm not sure why it's grabbing data that's not even on the page, as opposed to the schedule and status table I can see in the browser.
You cannot retrieve the data using readHTMLTable because the route results are sent as a JavaScript script. So you should select that script and parse it to extract the right information.
Here is a solution that does this:
Extract the JavaScript script that contains the JSON data
Extract the JSON data from the script using a regular expression
Parse the JSON data into an R list
Reshape the resulting list into a table (a data.table here)
The code may look short, but it is really compact (it took me an hour to produce it)!
library(XML)
library(httr)
library(jsonlite)
library(data.table)

url <- "http://bustracker.greyhound.com/routes/4511/I/Chicago_Amtrak_IL-Cincinnati_OH/4511/10-26-2016"
dc <- htmlParse(GET(url))
# The 5th script node holds the stop data
script <- xpathSApply(dc, "//script/text()", xmlValue)[[5]]
# Split on each stopArray.push({ call and drop the leading chunk
res <- strsplit(script, "stopArray.push({", fixed = TRUE)[[1]][-1]
# Rebuild each JSON fragment, parse it, and reshape to one row per stop,
# dropping the bulky polyline field
dcast(point ~ name, data = rbindlist(Map(function(x, y) {
  x <- paste('{', sub(');|);.*docum.*', "", x))
  dx <- unlist(fromJSON(x))
  data.frame(point = y, name = names(dx), value = dx)
}, res, seq_along(res)), fill = TRUE)[name != "polyline"])
The resulting table:
point category direction id lat linkName lon
1: 1 2 empty 562310 41.878589630127 Chicago_Amtrak_IL -87.6398544311523
2: 2 2 empty 560252 41.8748474121094 Chicago_IL -87.6435165405273
3: 3 1 empty 561627 41.7223281860352 Chicago_95th_&_Dan_Ryan_IL -87.6247329711914
4: 4 2 empty 260337 41.6039199829102 Gary_IN -87.3386917114258
5: 5 1 empty 260447 40.4209785461426 Lafayette_e_IN -86.8942031860352
6: 6 2 empty 260392 39.7617835998535 Indianapolis_IN -86.161018371582
7: 7 2 empty 250305 39.1079406738281 Cincinnati_OH -84.5041427612305
name shortName ticketName
1: Chicago Amtrak: 225 S Canal St, IL 60606 Chicago Amtrak, IL CHD
2: Chicago: 630 W Harrison St, IL 60607 Chicago, IL CHD
3: Chicago 95th & Dan Ryan: 14 W 95th St, IL 60628 Chicago 95th & Dan Ryan, IL CHD
4: Gary: 100 W 4th Ave, IN 46402 Gary, IN GRY
5: Lafayette (e): 401 N 3rd St, IN 47901 Lafayette (e), IN XIN
6: Indianapolis: 350 S Illinois St, IN 46225 Indianapolis, IN IND
7: Cincinnati: 1005 Gilbert Ave, OH 45202 Cincinnati, OH CIN
As @agstudy notes, the data is rendered into the HTML on the client; it's not delivered as HTML directly from the server. Therefore, you can (a) use something like RSelenium to scrape the rendered content, or (b) extract the data from the <script> tags that contain it.
To explain @agstudy's work, we observe that the data is contained in a series of stopArray.push() commands in one of the (many) script tags. For example:
stopArray.push({
"id" : "562310",
"name" : "Chicago Amtrak: 225 S Canal St, IL 60606",
"shortName" : "Chicago Amtrak, IL",
"ticketName" : "CHD",
"category" : 2,
"linkName" : "Chicago_Amtrak_IL",
"direction" : "empty",
"lat" : 41.87858963012695,
"lon" : -87.63985443115234,
"polyline" : "elr~Fnb|uOmC##nG?XBdH#rC?f#?P?V#`AlAAn#A`CCzBC~BE|CEdCA^Ap#A"
});
Now, this is JSON data contained inside each function call. I tend to think that if someone has gone to the work of formatting data in a machine-readable format, well golly, we should appreciate it!
The tidyverse approach to this problem is as follows:
Download the page using the rvest package.
Identify the appropriate script tag by using an XPath expression that searches for all script tags containing the string url =.
Use a regular expression to pull out everything inside each stopArray.push() call.
Fix the formatting of the resulting object by (a) separating each block with commas, and (b) surrounding the string with [] to indicate a JSON list.
Use jsonlite::fromJSON to convert it into a data.frame.
Note that I hide the polyline column near the end, since it's too large to preview appropriately.
library(tidyverse)
library(rvest)
library(stringr)
library(jsonlite)

url <- "http://bustracker.greyhound.com/routes/4511/I/Chicago_Amtrak_IL-Cincinnati_OH/4511/10-26-2016"
page <- read_html(url)

page %>%
  html_nodes(xpath = '//script[contains(text(), "url = ")]') %>%
  html_text() %>%
  str_extract_all(regex("(?<=stopArray.push\\().+?(?=\\);)", multiline = TRUE, dotall = TRUE), simplify = FALSE) %>%
  unlist() %>%
  paste(collapse = ",") %>%
  sprintf("[%s]", .) %>%
  fromJSON() %>%
  select(-polyline) %>%
  head()
#> id name
#> 1 562310 Chicago Amtrak: 225 S Canal St, IL 60606
#> 2 560252 Chicago: 630 W Harrison St, IL 60607
#> 3 561627 Chicago 95th & Dan Ryan: 14 W 95th St, IL 60628
#> 4 260337 Gary: 100 W 4th Ave, IN 46402
#> 5 260447 Lafayette (e): 401 N 3rd St, IN 47901
#> 6 260392 Indianapolis: 350 S Illinois St, IN 46225
#> shortName ticketName category
#> 1 Chicago Amtrak, IL CHD 2
#> 2 Chicago, IL CHD 2
#> 3 Chicago 95th & Dan Ryan, IL CHD 1
#> 4 Gary, IN GRY 2
#> 5 Lafayette (e), IN XIN 1
#> 6 Indianapolis, IN IND 2
#> linkName direction lat lon
#> 1 Chicago_Amtrak_IL empty 41.87859 -87.63985
#> 2 Chicago_IL empty 41.87485 -87.64352
#> 3 Chicago_95th_&_Dan_Ryan_IL empty 41.72233 -87.62473
#> 4 Gary_IN empty 41.60392 -87.33869
#> 5 Lafayette_e_IN empty 40.42098 -86.89420
#> 6 Indianapolis_IN empty 39.76178 -86.16102