Using rvest to return descendants of a table - html

I am having trouble figuring out why the following code isn't returning the information specified by the xpath.
I am trying to select the count data found in the 'Core Questions' section of the page. I wanted to get it working for the table of the first question and then intended to extend it to do the same thing for each question/table on the page. Unfortunately, I can't get it to pull down the section of the table I am interested in. I imagine the answer involves specifying the children of the <tr> node I am interested in, i.e. multiple <td> tags, but my attempts to do this continue to fail. Would anyone be able to help me specify the part of the table I am interested in? (Bonus points if it can be done for all ten tables on the page!)
library(rvest)
detailed <- read_html("https://www.deakin.edu.au/evaluate/results/old/detail-rep.php?schedule_select=1301&faculty_select=01&school_select=0104&unit_select=MIS202&location_select=B")
q1 <- detailed %>%
  html_nodes(xpath = '//*[@id="main"]/div/div/form/fieldset[2]/table[1]/tbody/tr/td[2]/div/table/tbody/tr[5]') %>%
  html_table(header = TRUE, fill = TRUE)
When I go up to the ancestor table it pulls down the information, but the result is extremely messy and difficult to interpret. When I try to specify elements within this table I am unable to extract any info. Is anyone able to explain why the descendants of table[1] are not being extracted? Here is the code to pull down table[1]:
q1 <- detailed %>%
  html_nodes(xpath = '//*[@id="main"]/div/div/form/fieldset[2]/table[1]') %>%
  html_table(header = TRUE, fill = TRUE)

Does this get you where you need to be?
allqs <- detailed %>%
  html_nodes(css = ".result center") %>%
  html_text()

t(matrix(as.numeric(allqs), 5, 10,
         dimnames = list(c("Strongly Disagree", "Disagree", "Neutral", "Agree", "Strongly Agree"),
                         paste0("Q", 1:10))))
Which gives:
Strongly Disagree Disagree Neutral Agree Strongly Agree
Q1 0 4 4 9 1
Q2 1 2 2 11 2
Q3 0 0 2 11 5
Q4 1 3 2 9 3
Q5 0 3 4 10 1
Q6 0 1 5 7 2
Q7 0 3 6 6 3
Q8 1 0 2 7 8
Q9 0 0 5 7 5
Q10 0 1 4 7 5
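For what it's worth, a likely reason the original XPath returned nothing: html_table() only parses <table> nodes, so pointing it at a <tr> (as the tr[5] step does) gives it nothing it can convert. A rough XPath-based sketch, reusing the `detailed` object from the question; the node path here is an assumption and is untested against the live page:

```r
library(rvest)

# html_table() needs <table> nodes, so select the inner tables themselves,
# then pick rows out of the parsed data frames afterwards
tables <- detailed %>%
  html_nodes(xpath = '//*[@id="main"]//table//table') %>%
  html_table(header = TRUE, fill = TRUE)

# e.g. row 5 of the first question's table:
tables[[1]][5, ]
```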


Add percent labels to a stacked bar graph of counts (no y variable in aes)

My data: each row is a participant (let's call it: pID) in my study. They all answered a question which could take response_values (Q_RV) of 1,2,3,4 or 5. Each participant is also labelled by health status (S) (1, 2, or 3).
data looks something like this:
#> # A tibble: 8 x 3
#> pID Q_RV S
#> <fct> <fct> <int>
#> 1 1 1
#> 2 1 1
#> 3 3 1
#> 4 3 2
#> 5 1 2
#> 6 2 1
#> 7 4 3
#> 8 5 1
I've made a stacked bar graph using counts of each response value, and filled each bar by health status:
plot <- ggplot(data, aes(x = Q_RV, fill = S)) + geom_bar() + [other stuff to make the plot look nice]
and I get this:
[plot showing counts for each response value]
Now, I'd love to add a percent label above each bar that shows the percent of responses that had each value. In other words, over the far left bar, it should be roughly 75.5%
How do I do it? Every question I've looked at uses a y argument in aes()....
Edit:
Found the answer here:
Adding percentage labels to a barplot with y-axis count
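The approach in that linked answer can be sketched as follows, using the eight sample rows from the question as stand-in data: geom_text(stat = "count") recomputes the bar counts, and after_stat(count) turns them into percentages without ever needing a y aesthetic.

```r
library(ggplot2)

# stand-in data in the same shape as the question's tibble
df <- data.frame(
  Q_RV = factor(c(1, 1, 3, 3, 1, 2, 4, 5)),
  S    = factor(c(1, 1, 1, 2, 2, 1, 3, 1))
)

p <- ggplot(df, aes(x = Q_RV)) +
  geom_bar(aes(fill = S)) +  # stacked counts, no y aesthetic needed
  geom_text(
    stat = "count",
    aes(label = scales::percent(after_stat(count) / sum(after_stat(count)))),
    vjust = -0.5
  )
p
```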

Post Increment date field in mySQL query using R

I am trying to query a table in our MySQL database using the DBI R package. However, I need to pull the fields from the table by changing the date field on a monthly basis and limiting it to 1.
I'm having trouble with the looping and the SQL query text. I would like to create a loop that changes the date (monthly) and then builds a database query that pulls all the data matching that month's conditions.
This is my code so far:
for (i in seq(0, 12, 1)) {
  results <- dbGetQuery(myDB, paste("SELECT * FROM cost_and_price_period WHERE start_date <=", '01-[[i]]-2019'))
}
The main issue is that R doesn't acknowledge post-increment operators like ++, so I know I could just make 12 individual queries and then rbind them, but I would prefer to do one efficient query. Does anyone have any ideas?
The solution below should give you an idea of how to proceed with your problem.
DummyTable
id names dob
1 1 aa 2018-01-01
2 2 bb 2018-02-01
3 3 cc 2018-03-01
4 4 dd 2018-04-01
5 5 ee 2018-05-01
6 6 ff 2018-06-01
7 7 gg 2018-07-01
8 8 hh 2018-08-01
9 9 ii 2018-09-01
10 10 jj 2018-10-01
11 11 kk 2018-11-01
12 12 ll 2018-12-01
13 13 ll 2018-12-01
Imagine we have the above table in MySQL. We want to fetch the records for the 1st day of every month and store them all in one data frame.
### Using a for loop, as in your question
n <- 12
df <- vector("list", n)
for (i in seq_len(n)) {
  # i is the month number; sprintf() zero-pads it so the date reads e.g. '2018-01-01'
  df[[i]] <- dbGetQuery(pool, sprintf("SELECT * FROM dummyTable WHERE dob = '2018-%02d-01';", i))
}
df <- do.call(rbind, df)

### Using lapply (preferred)
df <- lapply(seq_len(12), function(x) {
  dbGetQuery(pool, sprintf("SELECT * FROM dummyTable WHERE dob = '2018-%02d-01';", x))
})
df <- do.call(rbind, df)
The resulting df data frame holds the matched records from MySQL.
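Since all twelve dates are known up front, the loop can also be collapsed into a single round trip by letting MySQL match the whole set with an IN clause. A sketch; dummyTable and the pool connection are the same assumptions as above, so the final dbGetQuery() line is illustrative:

```r
# build the twelve first-of-month dates, zero-padded
months <- sprintf("2018-%02d-01", 1:12)

# one query matching all of them at once
sql <- paste0(
  "SELECT * FROM dummyTable WHERE dob IN ('",
  paste(months, collapse = "','"),
  "');"
)
sql
# then: df <- dbGetQuery(pool, sql)
```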

web scraping - No records found

I'm trying to rbind a series of HTML tables (from different pages with the same column names), but some pages have "no records". I want to skip such pages or assign NULL to the data frame.
Example Dataframe 1
url="http://stats.espncricinfo.com/ci/engine/player/28081.html?class=2;filter=advanced;floodlit=1;innings_number=1;orderby=start;result=1;template=results;type=batting;view=match"
Batting=readHTMLTable(url)
Batting$"Match by match list"
Batting<-Batting$"Match by match list"
Dataframe 2
url="http://stats.espncricinfo.com/ci/engine/player/625383.html?class=2;filter=advanced;floodlit=1;innings_number=1;orderby=start;result=2;template=results;type=batting;view=match"
Batting=readHTMLTable(url)
Batting$"Match by match list"
Batting<-Batting$"Match by match list"
There are several such data frames; some have records in tabular form and some don't.
When I rbind them, the one with no records causes an error for the final data frame:
final_DF <- rbind(Dataframe1, Dataframe2)
How do I resolve this?
PS: For each url query I'm also adding a certain set of columns (say, 5 additional columns using cbind) to the data frame, based on my requirements.
You can do the following:
require(rvest)
require(tidyverse)

urls <- c(
  "http://stats.espncricinfo.com/ci/engine/player/28081.html?class=2;filter=advanced;floodlit=1;innings_number=1;orderby=start;result=1;template=results;type=batting;view=match",
  "http://stats.espncricinfo.com/ci/engine/player/625383.html?class=2;filter=advanced;floodlit=1;innings_number=1;orderby=start;result=2;template=results;type=batting;view=match"
)

extra_cols <- list(
  tibble("Team" = "IND", "Player" = "MS.Dhoni", "won" = 1, "lost" = 0, "D" = 1, "D/N" = 0, "innings" = 1, "Format" = "ODI"),
  tibble("Team" = "IND", "Player" = "MS.Dhoni", "won" = 1, "lost" = 0, "D" = 1, "D/N" = 0, "innings" = 1, "Format" = "ODI")
)

doc <- map(urls, read_html) %>%
  map(html_node, ".engineTable:nth-child(5)")
keep <- !map_lgl(doc, inherits, "xml_missing")

map(doc[keep], html_table, fill = TRUE) %>%
  map2_df(extra_cols[keep], cbind)
The critical part is the keep vector, which filters out all list elements of class "xml_missing", i.e. the empty ones.
In comparison to your code, I use a CSS selector to specify the html_node that should contain the table. See http://selectorgadget.com/.
Also, your rbind is done internally by map2_df (the last line).
This results in: (using %>% {head(.[,c("Bat1", "Runs", "Team")])})
Bat1 Runs Team
1 0 0 IND
2 3 3 IND
3 148 148 IND
4 56 56 IND
5 38 38 IND
6 20 20 IND
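Another way to sketch the same skip-on-missing behaviour, if you prefer one function per page, is purrr::possibly(), which swallows failures and returns a default instead. This reuses the urls vector and CSS selector from the answer above, so it is untested against the live pages:

```r
library(rvest)
library(purrr)
library(dplyr)

# possibly() returns NULL for any page where scraping fails
# (including pages where the table node is missing entirely)
scrape_table <- possibly(function(url) {
  read_html(url) %>%
    html_node(".engineTable:nth-child(5)") %>%
    html_table(fill = TRUE)
}, otherwise = NULL)

final_DF <- map(urls, scrape_table) %>%
  compact() %>%   # drop the NULLs (pages with no records)
  bind_rows()
```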

Count number of rows when using dplyr to access sql table/query

What is an efficient way to count the number of rows when using dplyr to access an SQL table? The MWE below uses SQLite, but I use PostgreSQL and have the same issue. Basically, dim() is not very consistent: it works for a table referenced directly by name in the database (first case), but not when I create a tbl from an SQL query over the same table (second case), where I get NA or ??. My tables have millions of rows, but I see this even with a small table of 1000 rows. Is there anything I'm missing?
#MWE
test_db <- src_sqlite("test_db.sqlite3", create = TRUE)
library(nycflights13)
flights_sqlite <- copy_to(test_db, flights, temporary = FALSE,
                          indexes = list(c("year", "month", "day"), "carrier", "tailnum"))
flights_postgres <- tbl(test_db, "flights")
First case (table from direct schema)
flights_postgres
> flights_postgres
Source: postgres 9.3.5 []
From: flights [336,776 x 16]
year month day dep_time dep_delay arr_time arr_delay carrier tailnum flight origin dest air_time distance hour minute
1 2013 1 1 517 2 830 11 UA N14228 1545 EWR IAH 227 1400 5 17
2 2013 1 1 533 4 850 20 UA N24211 1714 LGA IAH 227 1416 5 33
#using dim()
> dim(flights_postgres)
[1] 336776 16
The above works and get the count of the number of rows.
Second case (table from SQL query)
## use the flights schema above but can also be used to create other variables (like lag, lead) in run time
flight_postgres_2 <- tbl(test_db, sql("SELECT * FROM flights"))
>flight_postgres_2
Source: postgres 9.3.5 []
From: <derived table> [?? x 16]
year month day dep_time dep_delay arr_time arr_delay carrier tailnum flight origin dest air_time distance hour minute
1 2013 1 1 517 2 830 11 UA N14228 1545 EWR IAH 227 1400 5 17
2 2013 1 1 533 4 850 20 UA N24211 1714 LGA IAH 227 1416 5 33
>
> dim(flight_postgres_2)
[1] NA 16
As you see it either prints as ?? or NA. So not very helpful.
I got around this by either using collect() or converting the output to a data frame with as.data.frame() to check the dimensions. But these two methods may not be ideal solutions, given the time they can take for a larger number of rows.
I think the answer is what @alistaire suggests: do it in the database.
> flight_postgres_2 %>% summarize(n())
Source: sqlite 3.8.6 [test_db.sqlite3]
From: <derived table> [?? x 1]
n()
(int)
1 336776
.. ...
Asking dim to do this would be having your cake (lazy evaluation of SQL with dplyr, keeping data in the database) and eating it too (having full access to the data in R).
Note that this is doing @alistaire's approach underneath:
> flight_postgres_2 %>% summarize(n()) %>% explain()
<SQL>
SELECT "n()"
FROM (SELECT COUNT() AS "n()"
FROM (SELECT * FROM flights) AS "zzz11") AS "zzz13"
<PLAN>
selectid order from detail
1 0 0 0 SCAN TABLE flights USING COVERING INDEX flights_year_month_day
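To get that count back into R as a plain number, the summarize can be collected. A self-contained sketch, using an in-memory SQLite table as a stand-in for the connection in the question:

```r
library(dplyr)
library(dbplyr)

# stand-in table: five rows in an in-memory SQLite database
con <- DBI::dbConnect(RSQLite::SQLite(), ":memory:")
DBI::dbWriteTable(con, "flights", data.frame(year = rep(2013, 5), month = 1:5))

# a tbl built from raw SQL, like flight_postgres_2 in the question
flights_tbl <- tbl(con, sql("SELECT * FROM flights"))

# COUNT(*) runs in the database; collect() brings back just the single number
n_rows <- flights_tbl %>%
  summarise(n = n()) %>%
  collect() %>%
  pull(n)
n_rows  # 5

DBI::dbDisconnect(con)
```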

Extracting data from data frame in R

I am very new to R (and computer programming in general) and am working on a bioinformatics project. I made a MySQL database and, using RMySQL, connected to it from R. From there I issued queries to select a certain field from a table, fetched the data, and made it into a data frame in R, as seen below:
> rs = dbSendQuery(con, "select mastitis_no from experiment")
> data = fetch(rs, n=-1)
> data
mastitis_no
1 5
2 2
3 8
4 6
5 2
....
> rt = dbSendQuery(con, "select BMSCC from experiment")
> datas = fetch(rt, n=-1)
> datas
BMSCC
1 14536
2 10667
3 23455
4 17658
5 14999
....
> ru = dbSendQuery(con, "select cattle_hygiene_score_avg from experiment")
> dat = fetch(ru, n=-1)
> dat
cattle_hygiene_score_avg
1 1.89
2 1.01
3 1.21
4 1.22
5 1.93
....
My first two data frames contain integers and my third is in decimal format. I am able to run a simple correlation test on these data frames, but a more detailed test (or plot) cannot be run, as seen below.
> cor(data, datas)
BMSCC
mastitis_no 0.8303017
> cor.test(data, datas)
Error in cor.test.default(data, datas) : 'x' must be a numeric vector
Therefore I accessed the data inside those data frames using the usual list indexing operator $; however, this did not work for the decimal data frame, as noted below.
> data$mastitis
[1] 5 2 8 6 2 0 5 6 7 3 0 1 0 3 2 2 0 5 2 1
> datas$BMSCC
[1] 14536 10667 23455 17658 14999 5789 18234 22390 19069 13677 13536 11667 13455
[14] 17678 14099 15789 8234 21390 16069 13597
> dat$hygiene
NULL
By doing this I am able to perform a Spearman rank correlation test and scatter plot on the first two data frames, but not the decimal one. Any suggestions on what I need to do? I am sure the answer is quite simple, but I cannot find the code necessary for this simple task. Any help would be much appreciated.
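The NULL result above comes from how $ matches names: on a data frame, $ falls back to partial matching on name *prefixes*, so data$mastitis finds data$mastitis_no, but "hygiene" is not a prefix of cattle_hygiene_score_avg, so dat$hygiene returns NULL. A minimal illustration (the numbers just mirror the first few fetched rows above):

```r
# data frame shaped like the fetched results above
dat <- data.frame(cattle_hygiene_score_avg = c(1.89, 1.01, 1.21, 1.22, 1.93))

dat$hygiene                        # NULL: "hygiene" is not a prefix of the name
dat$cattle_hygiene_score_avg       # works: exact name
dat$cattle                         # also works, via prefix (partial) matching
dat[["cattle_hygiene_score_avg"]]  # safest: exact matching, no surprises
```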