Yahoo Finance Headlines webpage scraping with R - html

I would like to use R to download the HTML code of any Yahoo Finance Headlines webpage, select the "headlines" and collect them in Excel. Unfortunately I cannot find and select the HTML nodes corresponding to the headlines once I download the source file to R.
Let me show the problem with an example.
I started with
source <- "http://finance.yahoo.com/q/h?s=AAPL+Headlines"
file <- "destination/finance_file.cvs"
download.file(url = source, destfile = file)
x = scan(file, what = "", sep = "\n")
producing the file finance_file.csv and, most importantly, the character vector x.
Using x I would like to collect the headlines and write them into a column of a second file, called headlines.csv, that Excel can open.
My problem is the following: I can find any given headline in the HTML source of the webpage itself, but I lose track of it in x, so I do not know how to extract it.
For the extraction I was thinking of
x = x[grep("some string of characters to do the job", x)]
but I am no expert in web scraping.
Any ideas/suggestions?
I thank you very much!

You can use the XML package and write the XPath query needed to extract the headlines.
Since the web page looks like:
...
<ul class="newsheadlines"/>
<ul>
<li>First headline</li>
...
you get the following query.
library(XML)
source <- "http://finance.yahoo.com/q/h?s=AAPL+Headlines"
d <- htmlParse(source)
xpathSApply(d, "//ul[contains(#class,'newsheadlines')]/following::ul/li/a", xmlValue)
free(d)
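To finish the csv step from the question, the vector returned by xpathSApply() can be written straight to a one-column file. A small self-contained sketch, relying on the same page structure and XPath shown above:
library(XML)
d <- htmlParse("http://finance.yahoo.com/q/h?s=AAPL+Headlines")
headlines <- xpathSApply(d,
                         "//ul[contains(@class,'newsheadlines')]/following::ul/li/a",
                         xmlValue)
free(d)
# one headline per row, in a single column that Excel opens directly
write.csv(data.frame(headline = headlines),
          file = "headlines.csv", row.names = FALSE)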

Scraping html header with R

My objective
I'm attempting to use R to scrape text from a web page: https://tidesandcurrents.noaa.gov/stationhome.html?id=8467150. For the purposes of this question, my goal is to access the header text that contains the station number ("Bridgeport, CT - Station ID: 8467150"). That text is clearly visible on the rendered page and also appears in the browser's inspect-element pane.
My old approach was to access the full text of the site with readLines(). A recent update to the website has made the text more difficult to access, and the station name/number is no longer visible to readLines():
url <- "https://tidesandcurrents.noaa.gov/stationhome.html?id=8467150"
stn <- "8467150"
webpage <- readLines(url, warn = FALSE)
### grep indicates that the station number is not present in the scraped text
grep(x = webpage, pattern = stn, value = TRUE)
Potential solutions
I am therefore looking for a new way to access my target text. I have tried using httr, but still cannot get all the html text to be included in what I scrape from the web page. The XML and rvest packages also seem promising, but I am not sure how to identify the relevant CSS selector or XPath expression.
### an attempt using httr
hDat <- httr::RETRY("GET", url, times = 10)
txt <- httr::content(hDat, "text")
### grep indicates that the station number is still not present
grep(x = txt, pattern = stn, value = TRUE)
### a partial attempt using XML
h <- xml2::read_html(url)
h2 <- XML::htmlTreeParse(h, useInternalNodes=TRUE, asText = TRUE)
### this may end up working, but I'm not sure how to identify the correct path
html.parse <- XML::xpathApply(h2, path = "div.span8", XML::xmlValue)
Regardless of the approach, I would welcome any suggestions that can help me access the header text containing the station name/number.
Unless you use Selenium (or some other tool that executes the page's JavaScript), this will be very hard: the header text is added to the DOM when the page renders, which is why it never shows up in the raw HTML that readLines() and httr retrieve.
NOAA encourages you to use its free RESTful JSON APIs and goes to some lengths to discourage HTML scraping.
That said, the following code will get what you want from NOAA's station metadata API and return it as a data frame.
library(tidyverse)
library(jsonlite)
j1 <- fromJSON(txt = 'https://api.tidesandcurrents.noaa.gov/mdapi/prod/webapi/stations/8467150.json', simplifyDataFrame = TRUE, flatten = TRUE)
j1$stations %>% as_tibble() %>% select(name, state, id)
Results
# A tibble: 1 x 3
  name       state id
  <chr>      <chr> <chr>
1 Bridgeport CT    8467150
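If the goal is specifically the header string from the question, it can be rebuilt from the same JSON fields. A small sketch along those lines (the "Station ID" wording is taken from the page header, not from the API):
library(jsonlite)
stn  <- "8467150"
meta <- fromJSON(paste0("https://api.tidesandcurrents.noaa.gov/mdapi/prod/webapi/stations/",
                        stn, ".json"))
st <- meta$stations
# reconstruct "Bridgeport, CT - Station ID: 8467150" from the API fields
paste0(st$name, ", ", st$state, " - Station ID: ", st$id)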

R: Is it possible to insert an "html file" into a MS Word Document/MS Powerpoint (Plotly)

I am using the R programming language. I am interested in seeing if it is somehow possible to take an html file (generated using the "plotly" library) and then insert that file into an MS PowerPoint slideshow.
I was looking at other stackoverflow questions where similar things were attempted: Exporting PNG files from Plotly in R
Suppose I generate this simple, interactive plot using plotly in R:
library(plotly)
x <- rnorm(100, 10, 10)
y <- rnorm(100, 5, 5)      # added so the scatter plot has a y variable (the original used y = " ")
color <- rnorm(100, 2, 1)
frame <- data.frame(x, y, color)
p <- plot_ly(data = frame, type = "scatter", mode = "markers",
             x = ~x, y = ~y, color = ~color)
Using the "htmlwidget" library, I can save the object "p" as an html file. But is there a way to insert this object "p" into a MS Powerpoint presentation? Preferably, an offline way that does not use the internet or requires any additional software to be installed?
I tried using the "insert html" functionality in MS Powerpoint, but this just produces a "grey square" that doesn't load when you play the presentation.
Can someone please tell me if this is possible?
Thanks
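For reference, the saving step the question mentions might look like the following. A minimal sketch that assumes the plot object p defined above; the file name is arbitrary, and selfcontained = TRUE bundles the JavaScript and data into the single html file so it can be opened offline:
library(htmlwidgets)
# write p to a single, self-contained html file (works without an internet connection)
saveWidget(p, "plotly_plot.html", selfcontained = TRUE)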

How to import multiple Factiva html files by loop

I am using the Factiva Package ‘tm.plugin.factiva’ to import html files containing a Factiva search. It has worked beautifully so far, but now I have a problem with importing data and constructing a corpus from several html files (350 in total). I cannot figure out how to write a loop to iterate the simple step-by-step import code I have used before.
Earlier, with a smaller sample, I managed to import the html files in a step-by-step process:
library(R.temis)
library(tm)
library(tm.plugin.factiva)
# Import corpus
source1 <- FactivaSource("Factiva1.html")
source2 <- FactivaSource("Factiva2.html")
source3 <- FactivaSource("Factiva3.html")
corp_source1 <- Corpus(source1, list(language=NA))
corp_source2 <- Corpus(source2, list(language=NA))
corp_source3 <- Corpus(source3, list(language=NA))
full_corpus <- c(corp_source1, corp_source2, corp_source3)
However, this is obviously not an option for the 350 html files. I have tried writing a loop for the import:
# Import corpus
files <- list.files(my_path)
for (i in files){
  source <- FactivaSource(i)
}
tech_corpus <- Corpus(source, list(language=NA))
And:
htmlFiles <- Sys.glob("Factiva*.html")
for (k in 1:lengths(htmlFiles[[k]])){
  source <- FactivaSource(htmlFiles[[k]])
}
But both of these only read the first html file into the source, not the rest.
I have also tried:
for (k in seq_along(htmlFiles)){
  source <- FactivaSource(htmlFiles[1:k], encoding = "UTF-8", format = c("HTML"))
}
But then I get the error that:
Error: x must be a string of length 1.
I have tried converting htmlFiles into a list (html_list <- as.list(htmlFiles)), but the result did not change.
The two loops did run, but only for the first html file.
I got the same result when I tried looping over the corpus construction as well:
for (m in 1:lengths(htmlFiles)){
  corp_source <- Corpus(htmlFiles[[m]], list(language=NA))
}
This also worked, but only for the first html file, and then I get this warning:
In 1:lengths(htmlFiles) :
numerical expression has 5 elements: only the first used
I would highly appreciate any help in understanding how to get around this issue. Ideally, a loop that repeats the step-by-step process I used in the beginning would be super, as it seems to me that neither FactivaSource() nor Corpus() likes the complications I have introduced here - but I am far from an expert. Any help will be highly appreciated!
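For what it's worth, here is a sketch of one way to repeat the working single-file steps over every file, keeping all the sources in a list instead of overwriting a single source object inside the loop (untested against real Factiva exports):
library(tm)
library(tm.plugin.factiva)
htmlFiles <- Sys.glob("Factiva*.html")
# one source and one corpus per file, kept in lists rather than overwritten
sources <- lapply(htmlFiles, FactivaSource)
corpora <- lapply(sources, function(s) Corpus(s, list(language = NA)))
# combine them, as in the step-by-step c() call above
full_corpus <- do.call(c, corpora)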

Why can readHTMLTable not successfully read Premier League tables for the month of May?

The official Premier League website provides data with various statistics for the league's teams over seasons (e.g. this one). I used the readHTMLTable function from the XML R package to retrieve those tables. However, I noticed that the function cannot read the tables for May, while for other months it works well. Here is an example:
april2007.url <- "http://www.premierleague.com/en-gb/matchday/league-table.html?season=2006-2007&month=APRIL&timelineView=date&toDate=1177887600000&tableView=CURRENT_STANDINGS"
april.df <- readHTMLTable(april2007.url, which = 1)
april.df[complete.cases(april.df),] ## correct table
march2014.url <- "http://www.premierleague.com/en-gb/matchday/league-table.html?season=2013-2014&month=APRIL&timelineView=date&toDate=1398639600000&tableView=CURRENT_STANDINGS"
march.df <- readHTMLTable(march2014.url, which = 1)
march.df[complete.cases(march.df), ] ## correct table
may2007.url <- "http://www.premierleague.com/en-gb/matchday/league-table.html?season=2006-2007&month=MAY&timelineView=date&toDate=1179010800000&tableView=CURRENT_STANDINGS"
may.df1 <- readHTMLTable(may2007.url, which = 1)
may.df1 ## Just data for the first team
may2014.url <- "http://www.premierleague.com/en-gb/matchday/league-table.html?season=2013-2014&month=MAY&timelineView=date&toDate=1399762800000&tableView=CURRENT_STANDINGS"
may.df2 <- readHTMLTable(may2014.url, which =1)
may.df2 ## Just data for the first team
As you can see, the function cannot retrieve the data for May.
Please, can someone explain why this happens and how it can be fixed?
EDIT after @zyurnaidi's answer:
Below is the code that can do the job without manual editing.
url <- "http://www.premierleague.com/en-gb/matchday/league-table.html?season=2009-2010&month=MAY&timelineView=date&toDate=1273359600000&tableView=CURRENT_STANDINGS" ## data for the 09-05-2010.
con <- file(url)
raw <- readLines(con)
close(con)
pattern <- '<span class=" cupchampions-league= competitiontooltip= qualifiedforuefachampionsleague=' ## it seems that this part of the webpage source code messes things up
raw <- gsub(pattern = pattern, replacement = '""', x = raw)
df <- readHTMLTable(doc = raw, which = 1)
df[complete.cases(df), ] ## correct table
OK. There are a few hints that point to the problem here:
1. The issue happens consistently in May, the last month of each season, so there must be something unique about these pages.
2. Direct parsing (htmlParse, from both the link and a downloaded file) produces a truncated document: the table and the html file are suddenly closed right after the first team in the table is reported.
The parsed data always differs from the original right after this point:
<span class=" cupchampions-league=
After downloading and carefully checking the html file itself, I found that there are (unencoded?) character issues there. My guess is that this is caused by the little trophy icons shown after the team names.
Anyway, to solve this issue you need to strip out these offending characters. Instead of editing the downloaded html files, my suggestion is:
1. View the page source of the EPL URL for May's league table.
2. Copy everything, paste it into a text editor and save it as an html file.
3. You can now use either htmlParse or readHTMLTable on that file.
There might be a better way to automate this, but I hope it helps.
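Building on the edit above, the gsub() workaround can be wrapped into a small helper so no manual editing is needed. A sketch that assumes the same malformed span attribute appears on every affected May page:
library(XML)
read_may_table <- function(url) {
  raw <- readLines(url, warn = FALSE)
  # strip the malformed attribute block that truncates the parse
  bad <- '<span class=" cupchampions-league= competitiontooltip= qualifiedforuefachampionsleague='
  raw <- gsub(bad, '""', raw, fixed = TRUE)
  df <- readHTMLTable(doc = raw, which = 1)
  df[complete.cases(df), ]
}
may2010.df <- read_may_table(url)  # url as defined in the edit above (the May 2010 table)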

Is there any R package to convert PDF to HTML [duplicate]

Is it possible to parse text data from PDF files in R? There does not appear to be a relevant package for such extraction, but has anyone attempted or seen this done in R?
In Python there is PDFMiner, but I would like to keep this analysis all in R if possible.
Any suggestions?
Linux systems have pdftotext, which I have had reasonable success with. By default, it creates foo.txt from a given foo.pdf.
That said, the text mining packages may have converters. A quick rseek.org search seems to concur with your crantastic search.
This is a very old thread, but for future reference: the pdftools R package extracts text from PDFs.
A colleague turned me on to this handy open-source tool: http://tabula.nerdpower.org/. Install it, upload the PDF, and select the table in the PDF that you want extracted as data. Not a direct solution in R, but certainly better than manual labor.
A purely R solution could be:
library('tm')
file <- 'namefile.pdf'
# readPDF() returns a reader function; "-layout" keeps the original text layout
Rpdf <- readPDF(control = list(text = "-layout"))
corpus <- VCorpus(URISource(file),
                  readerControl = list(reader = Rpdf))
# extract the text of the first (and only) document as a vector of lines
corpus.array <- content(content(corpus)[[1]])
You'll then have the PDF's lines in a character vector.
install.packages("pdftools")
library(pdftools)
download.file("http://www.nfl.com/liveupdate/gamecenter/56901/DEN_Gamebook.pdf",
"56901.DEN.Gamebook", mode = "wb")
txt <- pdf_text("56901.DEN.Gamebook")
cat(txt[1])
The Tabula PDF table extractor app is built around a command-line Java application, the tabula-extractor JAR.
The tabulizer R package provides a wrapper that makes it easy to pass in the path to a PDF file and get the data extracted from its tables out.
Tabula will have a good go at guessing where the tables are, but you can also tell it which part of a page to look at by specifying a target area of the page.
Data can be extracted from multiple pages, and a different area can be specified for each page, if required.
For an example use case, see: When Documents Become Databases – Tabulizer R Wrapper for Tabula PDF Table Extractor.
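A minimal sketch of the tabulizer call itself, assuming the package and its Java dependency are installed; "report.pdf" is a placeholder file name, and extract_tables() returns one matrix per table it detects:
library(tabulizer)
# "report.pdf" is a placeholder; Tabula guesses the table regions unless an area is given
tables <- extract_tables("report.pdf")
# convert the first detected table to a data frame
df <- as.data.frame(tables[[1]], stringsAsFactors = FALSE)
str(df)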
I used an external utility to do the conversion and called it from R. All files had a leading table with the desired information.
Set the path to pdftotext.exe and convert each PDF to text:
exeFile <- "C:/Projects/xpdfbin-win-3.04/bin64/pdftotext.exe"
for(i in 1:length(pdfFracList)){
fileNumber <- str_sub(pdfFracList[i], start = 1, end = -5)
pdfSource <- paste0(reportDir,"/", fileNumber, ".pdf")
txtDestination <- paste0(reportDir,"/", fileNumber, ".txt")
print(paste0("File number ", i, ", Processing file ", pdfSource))
system(paste(exeFile, "-table" , pdfSource, txtDestination, sep = " "), wait = TRUE)
}
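Once the loop has finished, the generated text files can be read back into R in the same session. A short sketch that reuses the reportDir variable assumed above:
txtFiles <- list.files(reportDir, pattern = "\\.txt$", full.names = TRUE)
# one character vector of lines per converted PDF
pages <- lapply(txtFiles, readLines)
names(pages) <- basename(txtFiles)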