Chrome preview differs from download - html

I inspect the following page:
https://www.dm-jobs.com/Germany/search/?searchby=location&createNewAlert=false&q=&locationsearch=&geolocation=&optionsFacetsDD_customfield4=&optionsFacetsDD_customfield3=&optionsFacetsDD_customfield2=
or
https://www.dm-jobs.com/Germany/search/?q=&sortColumn=referencedate&sortDirection=desc&searchby=location&d=15.
As far as I understand, the data can either be fetched via a GET/POST request, be present in the "raw" HTML source, or be generated by JavaScript executed on the page.
But on this page I somehow don't manage to find the source of the data.
The Chrome Network tab indicates that the data (here, the job data on the page) arrives as a document [see the screenshot, "Doc" tab], but when I look at the Preview tab it is empty. If I look at the Response tab, however, the data can be seen.
Desired Output:
The target language is R, but that is not really relevant here. I mainly want to understand how the data is generated, so a Selenium approach or similar is not desired; rather, I want to understand how the data is generated and how it could be extracted via POST/GET, JavaScript, or the raw source.
What I tried:
library(httr)
library(rvest)
url <- "https://www.dm-jobs.com/Germany/search/?searchby=location&createNewAlert=false&q=&locationsearch=&geolocation=&optionsFacetsDD_customfield4=&optionsFacetsDD_customfield3=&optionsFacetsDD_customfield2="
src <- read_html(url)
# Neither of these finds the job titles in the downloaded HTML:
src %>% html_nodes(xpath = "//*[contains(text(), 'Filialmitarbeiter')]")
as.character(src) %>% grep(pattern = "Filialmitarbeiter")
# A plain GET shows the same HTML without the job data:
get <- GET(url)
content(get)
# content(get$content)  # errors: content() expects a response object, not raw bytes
Target Outputs:
e.g.
Filialmitarbeiter (w/m/d) 15-30 Std./Wo. Bad Reichenhall, DE, 83435 30.08.2019
Filialmitarbeiter (w/m/d) 6-8 Std./Wo. Neuenburg am Rhein, DE, 79395 30.08.2019
Führungsnachwuchs Filialleitung (w/m/d) Vechta, DE, 49377 30.08.2019

There are two cookies of importance that must be picked up from the initial landing page. You can use html_session to capture these dynamically and then pass them on in a subsequent request to the page you want results from (at least that works for me). I wrote some notes about session objects here.
The three cookies seen are:
cookies = c(
  'rmk12' = '1',
  'JSESSIONID' = 'some_value',
  'cookie_j2w' = 'some_other_value'
)
You can find these, plus the headers, by using the Network tab to monitor the web traffic when attempting to view the job listings.
If you experiment with removing headers and cookies, you will discover that only the second and third cookies are required, and no headers at all. However, the cookies passed must be captured in a prior request to the URL, as shown below; a session is the traditional way to do this.
R
library(rvest)
library(magrittr)
start_link <- 'https://www.dm-jobs.com/Germany/?locale=de_DE'
next_link <- 'https://www.dm-jobs.com/Germany/search/?searchby=location&createNewAlert=false&q=&locationsearch=&geolocation=&optionsFacetsDD_customfield4=&optionsFacetsDD_customfield3=&optionsFacetsDD_customfield2='
jobs <- html_session(start_link) %>%
  jump_to(., next_link) %>%
  html_nodes('.jobTitle-link') %>%
  html_text()
print(jobs)
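For completeness, the same idea also works with httr by passing the two required cookies explicitly via set_cookies(); a minimal sketch, reusing next_link from above, where the cookie values are placeholders you would copy from the Network tab or capture from a prior request:
library(httr)
# Placeholder cookie values -- substitute the real ones captured in the browser
res <- GET(next_link,
           set_cookies(JSESSIONID = 'some_value', cookie_j2w = 'some_other_value'))
read_html(content(res, as = 'text')) %>%
  html_nodes('.jobTitle-link') %>%
  html_text()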
Py
import requests
from bs4 import BeautifulSoup as bs
with requests.Session() as s:
    r = s.get('https://www.dm-jobs.com/Germany/?locale=de_DE')
    cookies = s.cookies.get_dict()  # just to demo which cookies are captured
    print(cookies)                  # just to demo which cookies are captured
    r = s.get('https://www.dm-jobs.com/Germany/search/?searchby=location&createNewAlert=false&q=&locationsearch=&geolocation=&optionsFacetsDD_customfield4=&optionsFacetsDD_customfield3=&optionsFacetsDD_customfield2=')
    soup = bs(r.content, 'lxml')
    print(len(soup.select('.jobTitle-link')))
Reading:
html_session

Related

rvest emails scraping on page with complex node structure (html nodes)

I am trying to collect names and email addresses from this page: "https://www.gu.se/en/about/find-staff?affiliation_types=Teaching%20staff&hits=2744". I am having a hard time figuring out the correct way to select the nodes. For example, I am doing the following to select people's names, but it selects the wrong nodes.
Thank you in advance for your help
library(rvest)
library(tidyverse)
r <- read_html("https://www.gu.se/en/about/find-staff?affiliation_types=Teaching%20staff&hits=2744")
people_name <- r %>%
  html_nodes("a span") %>%
  html_text()
As @QHarr mentioned in the comments, the data in the webpage is generated dynamically. The HTML you get from read_html does not yet contain the data you need. You could use RSelenium, but in this case I think rvest is the better option.
If you look at the Developer Tools in Chrome (see image below), you will see that when you load the webpage, it makes several subsequent requests. One of them is to the URL @QHarr mentioned, which returns a JSON string with all the data that then populates the website using JavaScript.
So you can make a request directly to this URL, get the JSON string, and parse it so you can access the data directly (this is much lighter than using RSelenium). Sometimes this does not work, because you may need to set state variables in the request or make a complicated POST request, but in this case it is a simple GET request and it works.
The JSON response is a nested list, so you need to look at it and identify where the data you need for each person is.
Here is my code:
library(rvest)
library(dplyr)
url.1 <- 'https://www.gu.se/api/search/rest/apps/gu/searchers/person_en?q=*&sort=relevance&affiliation_types=Teaching+staff&hits=2744'
# get the json string and parse it to a list using jsonlite::fromJSON
json.content <- read_html(url.1) %>% html_node('body') %>% html_text() %>%
  jsonlite::fromJSON(simplifyVector = FALSE)
# the list of people is in json.content$documentList$documents (also a nested list)
# use plyr::ldply to get the info for each person and combine it into a data frame
df.staff <- plyr::ldply(json.content$documentList$documents,
                        .fun = function(x) {
                          name <- x$title
                          aff <- ifelse(length(x$affiliations[[1]]$affiliation_name) > 0,
                                        x$affiliations[[1]]$affiliation_name,
                                        NA)
                          email <- ifelse(length(x$affiliations[[1]]$email[[1]]) > 0,
                                          x$affiliations[[1]]$email[[1]],
                                          NA)
                          dept <- ifelse(length(x$affiliations[[1]]$organization) > 0,
                                         x$affiliations[[1]]$organization,
                                         NA)
                          data.frame(name = name,
                                     affiliation = aff,
                                     email = email,
                                     department = dept)
                        })
head(df.staff)
# name affiliation email department
#1 Zareen Abbas SENIOR LECTURER zareen.abbas@gu.se Department of Chemistry & Molecular Biology
#2 Yehia Abd Alrahman POSTDOCTOR yehia.abd.alrahman@gu.se Formal Methods
#3 Afrah Abdulla SENIOR LECTURER afrah.abdulla@gu.se Unit for General Didactics and Pedagogic Work
#4 Behjat Omer Abdulla LECTURER behjat.o.a@akademinvaland.gu.se The Crafts and Fine Art Unit
#5 Frida Abel Docent frida.abel@gu.se Department of Laboratory Medicine
#6 Andreas Martin Abel SENIOR LECTURER abela@chalmers.se Computer Science (CS)
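As a small aside, jsonlite can also fetch the endpoint directly, which avoids the read_html()/html_text() detour; a minimal sketch reusing url.1 from above:
library(jsonlite)
# fromJSON() accepts a URL and downloads the JSON itself
json.content <- fromJSON(url.1, simplifyVector = FALSE)
# inspect the first person record to see where each field lives
str(json.content$documentList$documents[[1]], max.level = 1)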

rvest - find html-node with last page number

I'm learning web scraping and created a little exercise for myself to scrape all titles of a recipe site: https://pinchofyum.com/recipes?fwp_paged=1. (I got inspired by this post: https://www.kdnuggets.com/2017/06/web-scraping-r-online-food-blogs.html).
I want to scrape the value of the last page number, which is (at the time of writing) 64. You can find the number of pages at the bottom. I see that this is stored as "a.facetwp-page last", but for some reason I cannot access this node. I can see that the page number values are stored as 'data-page' attributes, but I'm unable to get this value through 'html_attrs'.
I believe the parent node is "div.facetwp-pager" and I can access that one as follows:
library(rvest)
pg <- read_html("https://pinchofyum.com/recipes")
html_nodes(pg, "div.facetwp-pager")
But this is as far as I get. I guess I'm missing something small, but cannot figure out what it is. I know about Rselenium, but I would like to know if and how to get that last page value (64) with rvest.
Sometimes scraping with rvest doesn't work, especially when the webpage is dynamically generated with JavaScript (I also wasn't able to scrape this info with rvest). In those cases, you can use the RSelenium package. I was able to scrape your desired element like this:
library(RSelenium)
rD <- rsDriver(browser = c("firefox")) # specify the browser you want Selenium to open
remDr <- rD$client
remDr$navigate("https://pinchofyum.com/recipes?fwp_paged=1") # navigate to the webpage
webElem <- remDr$findElement(using = "css selector", ".last") # find the desired element
txt <- webElem$getElementText() # get the element's text
#> txt
#> [[1]]
#> [1] "64"
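From there, a small follow-up sketch to turn the scraped text into a number and shut the Selenium session down when you are done:
last_page <- as.integer(txt[[1]])
last_page
#> [1] 64
remDr$close()    # close the browser
rD$server$stop() # stop the Selenium server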

How do I webscrape .dpbox table using selectorgadget with R (rvest)?

I've been trying to webscrape data from a specific website using selectorgadget in R. For example, I successfully webscraped from http://www.dotabuff.com/heroes/abaddon/matchups before. Usually, I just click on the tables I want using the selectorgadget Chrome extension and put the CSS Selection result into the code as follows.
urlx <- "http://www.dotabuff.com/heroes/abaddon/matchups"
rawData <- html_text(html_nodes(read_html(urlx),"td:nth-child(4) , td:nth-child(3), .cell-xlarge"))
In this case, the html_nodes function does return a whole bunch of nodes (340)
{xml_nodeset (340)}
However, when I try to scrape http://www.dotapicker.com/heroes/Abaddon using SelectorGadget, which gives me this code:
urlx <- "http://www.dotapicker.com/heroes/abaddon"
rawData <- html_text(html_nodes(read_html(urlx),".ng-scope:nth-child(1) .ng-scope .ng-binding"))
Unfortunately, no nodes actually show up after the html_nodes function is called, and I get the result
{xml_nodeset (0)}
I feel like this has something to do with the table being nested in a dropdown box (whereas previously the table was right on the webpage itself), but I'm not sure how to get around it.
Thank you and I appreciate any help!
It seems like this page loads some data dynamically using XHR. In Chrome you can check this by opening the developer tools (Inspect) and going to the Network tab. If you do, you will see that a number of JSON files are being loaded. You can request those JSON files directly and then parse them to extract the info you need. Here is a quick example:
library(httr)
library(jsonlite)
heroinfo_json <- GET("http://www.dotapicker.com/assets/json/data/heroinfo.json")
heroinfo_flat <- fromJSON(content(heroinfo_json, type = "text"))
#> No encoding supplied: defaulting to UTF-8.
winrates_json <- GET("http://www.dotapicker.com/assets/dynamic/winrates10d.json")
winrates_flat <- fromJSON(content(winrates_json, type = "text"))
#> No encoding supplied: defaulting to UTF-8.
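Both objects come back as nested R structures, so a quick way to see where the relevant data sits before digging further is to inspect the top level:
str(heroinfo_flat, max.level = 1)
str(winrates_flat, max.level = 1)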

Read all html tables from tennis players activity page

I would like to read all html tables containing Federer's results from this website: http://www.atpworldtour.com/en/players/roger-federer/f324/player-activity
and store the data in one single data frame. One way I figured out was using the rvest package, but as you may notice, my code only works for a specific number of tournaments. Is there any way I can read all relevant tables with one command? Thank you for your help!
Url <- "http://www.atpworldtour.com/en/players/roger-federer/f324/player-activity"
x <- vector("list", 4)
for (i in 1:4) {
  results <- Url %>%
    read_html() %>%
    html_nodes(xpath = paste0("//table[@class='mega-table'][", i, "]")) %>%
    html_table()
  results <- results[[1]]
  x[[i]] <- results
}
Your solution above was close to being the final solution. One downside of your code was having the read_html statement inside the for loop; this greatly slows down the processing. In the future, read the page into a variable once and then process the page node by node as necessary.
In this solution, I read the web page into the variable "page" and then extracted the table nodes where class = mega-table. From there, the html_table command returns a list of the tables of interest, and do.call loops rbind over the list to combine the tables.
library(rvest)
url <- "http://www.atpworldtour.com/en/players/roger-federer/f324/player-activity"
page <- read_html(url)
tablenodes <- html_nodes(page, "table.mega-table")
tables <- html_table(tablenodes)
# numoftables <- length(tables)
df <- do.call(rbind, tables)
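If the tables turn out not to share exactly the same column names, dplyr::bind_rows() is a more forgiving alternative to do.call(rbind, ...); a small sketch:
library(dplyr)
# bind_rows() accepts a list of data frames and fills missing columns with NA
df <- bind_rows(tables)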

R - Extracting Tables From Websites Using XML Package

I am trying to replicate the method used in a previous answer here, Scraping html tables into R data frames using the XML package, for my own work, but cannot get the data to extract. The website I am using is:
http://www.footballfanalytics.com/articles/football/euro_super_league_table.html
I just wish to extract a table of each team name and their current rating score. My code is as follows:
library(XML)
theurl <- "http://www.footballfanalytics.com/articles/football/euro_super_league_table.html"
tables <- readHTMLTable(theurl)
n.rows <- unlist(lapply(tables, function(t) dim(t)[1]))
tables[[which.max(n.rows)]]
This produces the error message
Error in tables[[which.max(n.rows)]] :
attempt to select less than one element
Could anyone suggest a solution please? Is there something in this particular site causing this not to work? Or is there a better alternative method I can try? Thanks
It seems as if the data is loaded via JavaScript. Try:
library(XML)
theurl <- "http://www.footballfanalytics.com/xml/esl/esl.xml"
doc <- xmlParse(theurl)
cbind(team = xpathSApply(doc, "/StatsData/Teams/Team/Name", xmlValue),
      points = xpathSApply(doc, "/StatsData/Teams/Team/Points", xmlValue))
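Alternatively, you can let the XML package build a data frame for you; a hedged sketch, assuming the Team nodes keep the Name and Points child elements used in the xpathSApply calls above:
# build one row per Team node, columns named after the child elements
teams <- xmlToDataFrame(nodes = getNodeSet(doc, "/StatsData/Teams/Team"))
head(teams[, c("Name", "Points")])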