I want to try to web-scrape my own Stack Overflow profiles! By this I mean, get the HTML link of every question I have ever asked:
https://stackoverflow.com/users/18181916/antonoyaro8
https://math.stackexchange.com/users/1024449/antonoyaro8
I tried to do this as follows:
library(rvest)
library(httr)
library(XML)
url<-"https://stackoverflow.com/users/18181916/antonoyaro8?tab=questions&sort=newest"
page <-read_html(url)
resource <- GET(url)
parse <- htmlParse(resource)
links <- list(xpathSApply(parse, path="//a", xmlGetAttr, "href"))
I tried to pick up on a pattern and noticed that all question links contain some number, so I tried to write code that checks whether elements in the list contain a number and keeps those links:
rv <- c("1", "2", "3", "4", "5", "6", "7", "8", "9", "0")
final <- unique (grep(paste(rv,collapse="|"),
links, value=TRUE))
But I don't think I am doing this correctly: apart from the messy formatting, the final object contains links that do not have any numbers at all.
Can someone please show me how to scrape these links properly, and then repeat this for all pages (e.g. https://stackoverflow.com/users/18181916/antonoyaro8?tab=questions&sort=newest, https://stackoverflow.com/users/18181916/antonoyaro8?tab=questions&sort=newest&page=2, https://stackoverflow.com/users/18181916/antonoyaro8?tab=questions&sort=newest&page=3)?
Worst comes to worst, if I can do it for one of these pages, I can manually copy/paste the code for each page and proceed that way.
Thank you!
The output is a list of length 1, so we need to extract ([[) the element before applying grep:
unique(grep(paste(rv, collapse = "|"), links[[1]], value = TRUE))
Note that rv includes the digits 0 to 9, so it matches a link if a digit appears anywhere in it. If the intention is to keep only the links where digits follow "questions/", use:
grep("questions/\\d+", links[[1]], value = TRUE)
-output
[1] "/questions/72859976/recognizing-and-keeping-elements-containing-certain-patterns-in-a-list"
[2] "/questions/72843570/combing-two-selections-together"
[3] "/questions/72840913/selecting-rows-from-a-table-based-on-a-list"
[4] "/questions/72840624/even-out-table-in-r"
[5] "/questions/72840548/creating-a-dictionary-reference-table"
[6] "/questions/72837147/sequentially-replacing-factor-variables-with-numerical-values"
[7] "/questions/72822951/scanning-and-replacing-values-of-rows-in-r"
[8] "/questions/72822781/alternative-to-do-callrbind-data-frame-for-combining-a-list-of-data-frames"
[9] "/questions/72738885/referencing-a-query-in-another-query"
[10] "/questions/72725108/defining-cte-common-table-expressions-in-r"
[11] "/questions/72723768/creating-an-id-variable-on-the-spot"
[12] "/questions/72720013/selecting-data-using-conditions-stored-in-a-variable"
[13] "/questions/72717135/effecient-ways-to-append-sql-results-in-r"
...
If there are multiple pages, append the page= parameter with paste or sprintf:
urls <- c(url, sprintf("%s&page=%d", url, 2:3))
out_lst <- lapply(urls, function(url) {
  page <- read_html(url)
  resource <- GET(url)
  parse <- htmlParse(resource)
  links <- list(xpathSApply(parse, path = "//a", xmlGetAttr, "href"))
  grep("questions/\\d+", links[[1]], value = TRUE)
})
-output
> out_lst
[[1]]
[1] "/questions/72859976/recognizing-and-keeping-elements-containing-certain-patterns-in-a-list"
[2] "/questions/72843570/combing-two-selections-together"
[3] "/questions/72840913/selecting-rows-from-a-table-based-on-a-list"
[4] "/questions/72840624/even-out-table-in-r"
[5] "/questions/72840548/creating-a-dictionary-reference-table"
[6] "/questions/72837147/sequentially-replacing-factor-variables-with-numerical-values"
[7] "/questions/72822951/scanning-and-replacing-values-of-rows-in-r"
[8] "/questions/72822781/alternative-to-do-callrbind-data-frame-for-combining-a-list-of-data-frames"
[9] "/questions/72738885/referencing-a-query-in-another-query"
[10] "/questions/72725108/defining-cte-common-table-expressions-in-r"
[11] "/questions/72723768/creating-an-id-variable-on-the-spot"
[12] "/questions/72720013/selecting-data-using-conditions-stored-in-a-variable"
[13] "/questions/72717135/effecient-ways-to-append-sql-results-in-r"
[14] "/questions/72710448/removing-files-from-global-environment-with-a-certain-pattern"
[15] "/questions/72710203/r-sql-is-the-default-option-sampling-with-replacement"
[16] "/questions/72695401/allocating-max-memory-in-r"
[17] "/questions/72681898/randomly-delete-columns-from-datasets"
[18] "/questions/72663516/are-rds-files-more-efficient-than-csv-files"
[19] "/questions/72625690/importing-files-using-list-files"
[20] "/questions/72623856/second-most-common-element-in-each-row"
[21] "/questions/72623744/counting-the-position-where-a-pattern-is-completed"
[22] "/questions/72620501/bulk-import-export-files-from-r"
[23] "/questions/72613413/counting-every-position-where-a-pattern-appears"
[24] "/questions/72612577/counting-the-position-of-the-first-0-in-each-row"
[25] "/questions/72607160/taking-averages-across-lists"
[26] "/questions/72589276/functions-for-finding-out-the-midpoint-interpolation"
[27] "/questions/72587298/sandwiching-values-between-rows"
[28] "/questions/72569338/integration-error-lengthlower-1-is-not-true"
[29] "/questions/72568817/synchronizing-nas-in-r"
[30] "/questions/72568661/finding-the-loser-in-each-row"
[[2]]
[1] "/questions/72566170/making-a-race-between-two-variables"
[2] "/questions/72418723/making-a-list-of-random-numbers"
[3] "/questions/72418364/random-uniform-numbers-without-runif"
[4] "/questions/72353102/integrate-normal-distribution-between-2-values"
[5] "/questions/72174868/placing-commas-between-names"
[6] "/questions/72163297/simulate-flipping-french-fries-in-r"
[7] "/questions/71982286/alternatives-to-the-partition-by-statement-in-sql"
[8] "/questions/71970960/converting-lists-into-data-frames"
[9] "/questions/71970672/random-numbers-are-too-similar-to-each-other"
[10] "/questions/71933753/making-combinations-of-items"
[11] "/questions/71874791/sorting-rows-in-specified-order"
[12] "/questions/71866097/hiding-the-legend-in-this-graph"
[13] "/questions/71866048/understanding-the-median-in-this-graph"
[14] "/questions/71852517/nas-produced-when-number-of-iterations-increase"
[15] "/questions/71791906/assigning-unique-colors-to-multiple-lines-on-a-graph"
[16] "/questions/71787336/finding-identical-rows-in-multiple-datasets"
[17] "/questions/71758983/multiple-replace-lookups"
[18] "/questions/71758648/create-ascending-id-in-a-data-frame"
[19] "/questions/71731208/webscraping-data-which-pokemon-can-learn-which-attacks"
[20] "/questions/71728273/webscraping-pokemon-data"
[21] "/questions/71683045/identifying-smallest-element-in-each-row-of-a-matrix"
[22] "/questions/71671488/connecting-all-nodes-together-on-a-graph"
[23] "/questions/71641774/overriding-colors-in-ggplot2"
[24] "/questions/71641404/applying-a-function-to-a-data-frame-lapply-vs-traditional-way"
[25] "/questions/71624111/sending-emails-from-r"
[26] "/questions/71623019/sql-joining-tables-from-2-different-servers-r-vs-sas"
[27] "/questions/71429265/overriding-sql-errors-during-r-uploads"
[28] "/questions/71429129/splitting-a-dataset-into-uneven-portions"
[29] "/questions/71418533/multiplying-and-adding-values-across-rows"
[30] "/questions/71417489/tricking-an-sql-server-to-accept-a-file-from-r"
[[3]]
[1] "/questions/71417218/splitting-a-dataset-into-arbitrary-sections"
[2] "/questions/71398804/plotting-vector-fields-and-gradient-fields"
[3] "/questions/71387596/animating-the-mandelbrot-set"
[4] "/questions/71358405/repeat-a-set-of-ids-for-every-n-rows"
[5] "/questions/71344822/time-series-graphs-with-different-symbols"
[6] "/questions/71341865/creating-a-data-frame-with-commas"
[7] "/questions/71287944/converting-igraph-to-visnetwork"
[8] "/questions/71282863/fixing-the-first-and-last-numbers-in-a-random-list"
[9] "/questions/71282403/adding-labels-to-graph-nodes"
[10] "/questions/71262761/understanding-list-and-do-call-commands"
[11] "/questions/71261431/adjusting-graph-layouts"
[12] "/questions/71255038/overriding-non-existent-components-in-a-loop"
[13] "/questions/71244872/fixing-cluttered-titles-on-graphs"
[14] "/questions/71243676/directly-adding-titles-and-labels-to-visnetwork"
[15] "/questions/71232353/removing-all-edges-in-igraph"
[16] "/questions/71230273/writing-a-function-that-references-elements-in-a-matrix"
[17] "/questions/71227260/generating-random-graphs-according-to-some-conditions"
[18] "/questions/71087349/adding-combinations-of-numbers-in-a-matrix"
Related
I want to get the list of Phase state values from a site. I wrote this code:
library("rvest")
library("magrittr")
url <- 'https://energybase.ru/en/oil-gas-field/index'
read_html(url) %>%
  html_nodes(".info") %>%
  html_children() %>%
  html_children()
and I got:
[1] <small>City</small>
[2] <div class="value">Игарка</div>
[3] <small>Phase state</small>
[4] <div class="value">нефтегазовое</div>
[5] <small>Извлекаемые запасы A+B1+B2+C1</small>
[6] <div class="value">479.10 mln. tons</div>
[7] <small>City</small>
[8] <div class="value">Тазовский</div>
[9] <small>Phase state</small>
[10] <div class="value">газонефтяное</div>
[11] <small>Извлекаемые запасы A+B1+B2+C1</small>
[12] <div class="value">422.00 mln. tons</div>
[13] <small>City</small>
[14] <div class="value">Лянтор</div>
[15] <small>Phase state</small>
[16] <div class="value">нефтегазоконденсатное</div>
[17] <small>Извлекаемые запасы A+B1+B2+C1</small>
[18] <div class="value">380.00 mln. tons</div>
[19] <small>City</small>
[20] <div class="value">Тобольск</div>
I want to get all the values that follow
<div class="value">
The result should be:
нефтегазовое
газонефтяное
нефтегазоконденсатное
and so on. What function should I use to solve my problem?
You can use
read_html(url) %>%
  html_nodes(".col-md-8:nth-child(2) .value") %>%
  html_text()
to get
[1] "нефтегазовое" "газонефтяное" "нефтегазоконденсатное" "нефтяное"
[5] "нефтяное" "нефтегазовое" "нефтяное" "нефтяное"
[9] "нефтяное" "нефтегазоконденсатное" "нефтегазоконденсатное" "нефтяное"
[13] "нефтегазоконденсатное" "нефтегазоконденсатное" "нефтяное" "нефтяное"
[17] "газонефтяное" "нефтегазоконденсатное" "нефтяное" "нефтегазовое"
A very good tool for finding the right CSS selector (.col-md-8:nth-child(2) .value) is https://selectorgadget.com/.
You could just pull from the dropdown options; then you get the unique list without repeats. It depends on whether you want the full list with repeats or not.
library(rvest)
library(magrittr)
phases <- (read_html('https://energybase.ru/en/oil-gas-field/index') %>%
  html_nodes('#fieldsearch-phase option') %>%
  html_text())[-1]
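If you do want the per-field values with repeats, a small sketch of my own (combining this with the selector from the answer above, and assuming the page structure shown there) would be to reuse the .value selector and tabulate the results:

# Sketch combining the two answers: scrape the per-field phase states from the
# first results page and count how often each one occurs
library(rvest)
library(magrittr)

page_phases <- read_html('https://energybase.ru/en/oil-gas-field/index') %>%
  html_nodes('.col-md-8:nth-child(2) .value') %>%
  html_text()

table(page_phases)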
I've searched through many rvest scraping posts but can't find an example like mine. I'm following the R vignette example (https://blog.rstudio.com/2014/11/24/rvest-easy-web-scraping-with-r/) for SelectorGadget, adapting it to my use case as necessary, but none of SelectorGadget's suggestions get me what I need. I need to extract the name for each review on the page. A sample of what a name looks like under the hood is as follows:
<span itemprop="name" class="sg_selected">This Name</span>
Here's my code to this point. Ideally, this code should get me the individual names on this web page.
library(rvest)
library(dplyr)
dsa_reviews <-
  read_html("https://www.directsalesaid.com/companies/traveling-vineyard#reviews")

review_names <- html_nodes(dsa_reviews, '#reviews span')
df <- bind_rows(lapply(xml_attrs(review_names), function(x)
  data.frame(as.list(x), stringsAsFactors = FALSE)))
Apologies if this is a duplicate question or if it's not formatted correctly. Please feel free to request any necessary edits.
Here it is:
library(rvest)
library(dplyr)
dsa_reviews <-
  read_html("https://www.directsalesaid.com/companies/traveling-vineyard#reviews")

html_nodes(dsa_reviews, '[itemprop=name]') %>%
  html_text()
[1] "Traveling Vineyard" ""
[3] "Kiersten Ray-kuhn" "Miley Sama"
[5] " Nancy Shawtone " "Amanda Moore"
[7] "Matt" "Kathy Barzal"
[9] "Lesa Brinker" "Lori Stryker"
[11] "Jeanette Holtman" "Penny Notarnicola"
[13] "Laura Ann" "Nicole Lafave"
[15] "Gretchen Hess Miller" "Gina Devine"
[17] "Ashley Lawton Converse" "Morgan Williams"
[19] "Angela Baston Mckeone" "Traci Feshler"
[21] "Kisha Marshall Dlugos" "Jody Cole Dvorak"
url <-"http://news.chosun.com/svc/content_view/content_view.html?contid=1999080570392"
hh = read_html(GET(url),encoding = "EUC-KR")
#guess_encoding(hh)
html_text(html_node(hh, 'div.par'))
#html_text(html_nodes(hh ,xpath='//*[#id="news_body_id"]/div[2]/div[3]'))
I'm trying to crawl news data (just for practice) using rvest in R.
When I tried it on the page above, I failed to fetch the text from the page.
(XPath doesn't work either.)
I don't think the problem is finding the node that contains the text I want. But when I try to extract the text from that node with the html_text function, it comes back as "" or blank.
I can't figure out why; I don't have any experience with HTML or crawling.
My guess is that the HTML tag containing the news body has "class" and "data-dzo" attributes (I don't know what that is).
If anyone could tell me how to solve this, or point me to the search keywords I could Google to solve it, I'd appreciate it.
That site builds quite a bit of the page dynamically, so the article text isn't in the HTML that rvest fetches. This should help.
The article content is in an XML file. The URL can be constructed from the contid parameter. Either pass in a full article HTML URL (like the one in your example) or just the contid value to this and it'll return an xml2 xml_document with the parsed XML results:
#' Retrieve article XML from chosun.com
#'
#' @param full_url_or_article_id either a full URL like
#'   `http://news.chosun.com/svc/content_view/content_view.html?contid=1999080570392`
#'   or just the id (e.g. `1999080570392`)
#' @return xml_document
read_chosun_article <- function(full_url_or_article_id) {

  require(rvest)
  require(httr)

  full_url_or_article_id <- full_url_or_article_id[1]

  if (grepl("^http", full_url_or_article_id)) {
    contid <- httr::parse_url(full_url_or_article_id)
    contid <- contid$query$contid
  } else {
    contid <- full_url_or_article_id
  }

  # The target article XML URLs are in the following format:
  #
  # http://news.chosun.com/priv/data/www/news/1999/08/05/1999080570392.xml
  #
  # so we need to construct it from substrings in the 'contid'
  sprintf(
    "http://news.chosun.com/priv/data/www/news/%s/%s/%s/%s.xml",
    substr(contid, 1, 4), # year
    substr(contid, 5, 6), # month
    substr(contid, 7, 8), # day
    contid
  ) -> contid_xml_url

  res <- httr::GET(contid_xml_url)

  httr::content(res)

}
read_chosun_article("http://news.chosun.com/svc/content_view/content_view.html?contid=1999080570392")
## {xml_document}
## <content>
## [1] <id>1999080570392</id>
## [2] <site>\n <id>1</id>\n <name><![CDATA[www]]></name>\n</site>
## [3] <category>\n <id>3N1</id>\n <name><![CDATA[사람들]]></name>\n <path ...
## [4] <type>0</type>
## [5] <template>\n <id>2006120400003</id>\n <fileName>3N.tpl</fileName> ...
## [6] <date>\n <created>19990805192041</created>\n <createdFormated>199 ...
## [7] <editor>\n <id>chosun</id>\n <email><![CDATA[webmaster@chosun.com ...
## [8] <source><![CDATA[0]]></source>
## [9] <title><![CDATA[[동정] 이철승, 순국학생 위령제 지내 등]]></title>
## [10] <subTitle/>
## [11] <indexTitleList/>
## [12] <authorList/>
## [13] <masterId>1999080570392</masterId>
## [14] <keyContentId>1999080570392</keyContentId>
## [15] <imageList count="0"/>
## [16] <mediaList count="0"/>
## [17] <body count="1">\n <page no="0">\n <paragraph no="0">\n <t ...
## [18] <copyright/>
## [19] <status><![CDATA[RL]]></status>
## [20] <commentBbs>N</commentBbs>
## ...
read_chosun_article("1999080570392")
## {xml_document}
## <content>
## [1] <id>1999080570392</id>
## [2] <site>\n <id>1</id>\n <name><![CDATA[www]]></name>\n</site>
## [3] <category>\n <id>3N1</id>\n <name><![CDATA[사람들]]></name>\n <path ...
## [4] <type>0</type>
## [5] <template>\n <id>2006120400003</id>\n <fileName>3N.tpl</fileName> ...
## [6] <date>\n <created>19990805192041</created>\n <createdFormated>199 ...
## [7] <editor>\n <id>chosun</id>\n <email><![CDATA[webmaster@chosun.com ...
## [8] <source><![CDATA[0]]></source>
## [9] <title><![CDATA[[동정] 이철승, 순국학생 위령제 지내 등]]></title>
## [10] <subTitle/>
## [11] <indexTitleList/>
## [12] <authorList/>
## [13] <masterId>1999080570392</masterId>
## [14] <keyContentId>1999080570392</keyContentId>
## [15] <imageList count="0"/>
## [16] <mediaList count="0"/>
## [17] <body count="1">\n <page no="0">\n <paragraph no="0">\n <t ...
## [18] <copyright/>
## [19] <status><![CDATA[RL]]></status>
## [20] <commentBbs>N</commentBbs>
## ...
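To get just the article text out of the returned document, a minimal sketch of my own (the body/page/paragraph node names are inferred from the printed structure above, so treat them as assumptions) could be:

# Sketch, not part of the original answer: collapse the paragraph nodes inside
# <body> into a single text string (node names inferred from the output above)
doc <- read_chosun_article("1999080570392")
paras <- xml2::xml_find_all(doc, "//body//paragraph")
article_text <- paste(xml2::xml_text(paras), collapse = "\n")
cat(article_text)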
NOTE: I poked around that site to see whether this violates their terms of service, and it does not seem to, but I also relied on Google Translate, which may have made that harder to find. It's important to ensure you can legally (and ethically, if you care about ethics) scrape this content for whatever use you intend.
I would love to use the tidyjson package as it seems to have very clear instructions on how to use it.
However, I am having a few issues. Could you please take a look and let me know whether these are user errors or something else?
I am using the world_bank.json data downloaded from http://jsonstudio.com/resources/
worldbank <- fromJSON(file = "world_bank.json")
I do see a list of 50 in RStudio. However, when I try to use read_json, I get the error below.
> read_json(worldbank, format = "json")
Error in file.info(path) : invalid filename argument
> worldbank[[1]] %>% prettify
Error: parse error: trailing garbage
52b213b38594d8a2be17c780
(right here) ------^
Use jsonlite::stream_in, as lizzy suggested, with a streaming unzip connection; the file is line-delimited JSON (one record per line), which is what stream_in is designed to read:
> download.file("http://jsonstudio.com/wp-content/uploads/2014/02/world_bank.zip", "world_bank.zip")
> world_bank <- jsonlite::stream_in(unz("world_bank.zip", "world_bank.json"))
> names(world_bank)
[1] "_id" "approvalfy" "board_approval_month"
[4] "boardapprovaldate" "borrower" "closingdate"
[7] "country_namecode" "countrycode" "countryname"
[10] "countryshortname" "docty" "envassesmentcategorycode"
[13] "grantamt" "ibrdcommamt" "id"
[16] "idacommamt" "impagency" "lendinginstr"
[19] "lendinginstrtype" "lendprojectcost" "majorsector_percent"
[22] "mjsector_namecode" "mjtheme" "mjtheme_namecode"
[25] "mjthemecode" "prodline" "prodlinetext"
[28] "productlinetype" "project_abstract" "project_name"
[31] "projectdocs" "projectfinancialtype" "projectstatusdisplay"
[34] "regionname" "sector" "sector1"
[37] "sector2" "sector3" "sector4"
[40] "sector_namecode" "sectorcode" "source"
[43] "status" "supplementprojectflg" "theme1"
[46] "theme_namecode" "themecode" "totalamt"
[49] "totalcommamt" "url"
First of all, I'm really a beginner at web scraping.
I'm working on this website. I'm trying to get the links to the pages with discussion about the episode. With SelectorGadget I managed to get only the part of the HTML containing the frame with the topics:
library(rvest)

html.s1e01 <- read_html("http://asoiaf.westeros.org/index.php/forum/41-e01-winter-is-coming/")
html.s1e01.page <- html_nodes(html.s1e01, ".ipsBox")
Now I want to get all links to the topics, so I tried
html_attr(html.s1e01.page, "href")
but I get NA. I saw similar examples on the Internet where this works. Any suggestions as to why it does not?
The .ipsBox node is a container div with no href attribute of its own, which is why html_attr() returns NA; select the anchor nodes (the .topic_title links) and then pull their href:

html.s1e01.page <- html_nodes(html.s1e01, ".ipsBox .topic_title")
html.s1e01.topics <- html.s1e01.page %>% html_attr("href")
html.s1e01.topics
## [1] "http://asoiaf.westeros.org/index.php/topic/49408-poll-how-would-you-rate-episode-101/"
## [2] "http://asoiaf.westeros.org/index.php/topic/109202-death-of-john-aryn-season-4-episode-5-spoilers/"
## [3] "http://asoiaf.westeros.org/index.php/topic/49310-book-spoilers-episode-101-take-3/"
## [4] "http://asoiaf.westeros.org/index.php/topic/90902-sir-john-standingjonarryn/"
## [5] "http://asoiaf.westeros.org/index.php/topic/106105-did-anyone-notice-the-color-of-the-feather-in-lyannas-tomb/"
## [6] "http://asoiaf.westeros.org/index.php/topic/49116-book-tv-spoilers-what-was-left-out-and-what-was-left-in/"
## [7] "http://asoiaf.westeros.org/index.php/topic/49070-no-spoilers-ep101-discussion/"
## [8] "http://asoiaf.westeros.org/index.php/topic/49159-book-spoilers-the-book-was-better/"
## [9] "http://asoiaf.westeros.org/index.php/topic/57614-runes-in-agot-spoilers-i-suppose/"
## [10] "http://asoiaf.westeros.org/index.php/topic/49151-book-spoilers-ep101-discussion-mark-ii/"
## [11] "http://asoiaf.westeros.org/index.php/topic/49161-booktv-spoilers-dany-drogo/"
## [12] "http://asoiaf.westeros.org/index.php/topic/49071-book-spoilers-ep101-discussion/"
## [13] "http://asoiaf.westeros.org/index.php/topic/49100-no-spoilers-pre-airing-discussion/"
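If you also want the topic titles alongside the links, a small follow-up sketch of my own, reusing the same .topic_title nodes, would be:

# Sketch: the same nodes carry the topic titles as their text, so pair them up
html.s1e01.titles <- html.s1e01.page %>% html_text() %>% trimws()
data.frame(title = html.s1e01.titles, url = html.s1e01.topics,
           stringsAsFactors = FALSE)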