How to load online JSON data to a shiny app with jsonlite?

I am trying to make a shiny app that takes data from this API: https://www.riigiteenused.ee/api/et/all. I need to use jsonlite::fromJSON because it has a good flatten feature. When I use the following code (a minimal example; in real life I do more with the data):
library(shiny)
library(jsonlite)

data <- fromJSON("https://www.riigiteenused.ee/api/et/all")

server <- function(input, output) {
  output$tekst <- renderText({
    nchar(data)
  })
}

ui <- fluidPage(
  sidebarLayout(
    sidebarPanel(),
    mainPanel(textOutput("tekst"))
  )
)

shinyApp(ui = ui, server = server)
I get the following error message:
Error in open.connection(con, "rb") :
Peer certificate cannot be authenticated with given CA certificates
I tried the following (to work around SSL peer verification):
library(RCurl)
raw <- getURL("https://www.riigiteenused.ee/api/et/all",
              .opts = list(ssl.verifypeer = FALSE), crlf = TRUE)
data <- fromJSON(raw)
It reads in the raw data but mangles the JSON (validate(raw) reports lexical error: invalid character \n inside string, which causes the following error):
Error: lexical error: invalid character inside string.
ressile: laevaregister#vta.ee. Avaldusele soovitatavalt lis
(right here) ------^
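As an aside: if you stay on the RCurl route, one possible workaround, assuming the only offending characters are literal newlines inside strings, is to strip them before parsing (the cleaner fixes are in the answers below):

library(RCurl)
library(jsonlite)

raw <- getURL("https://www.riigiteenused.ee/api/et/all",
              .opts = list(ssl.verifypeer = FALSE), crlf = TRUE)

# JSON forbids raw control characters inside strings, so replacing literal
# newlines with spaces can make the document parseable; whitespace between
# tokens is legal JSON either way
clean <- gsub("[\r\n]+", " ", raw)
data <- fromJSON(clean)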
Another idea I tried was:
data <- fromJSON(readLines("https://www.riigiteenused.ee/api/et/all"))
It works fine on my computer, but when I upload it to shinyapps.io the app doesn't work, and in the logs I see this error:
Error in file(con, "r") : https:// URLs are not supported
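For what it's worth, the curl package provides a connection that supports https everywhere, so a drop-in variant of this attempt (a sketch, assuming the failure is only base R's missing https support on the server) would be:

library(curl)
library(jsonlite)

# curl() returns a connection that handles https even where base R's
# readLines() cannot
data <- fromJSON(readLines(curl("https://www.riigiteenused.ee/api/et/all")))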
Could somebody give me a clue as to whether there is a way to load JSON data over https into a shiny app using jsonlite's fromJSON function?
My session info is as follows:
R version 3.2.2 (2015-08-14)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows 8 x64 (build 9200)
locale:
[1] LC_COLLATE=Estonian_Estonia.1257 LC_CTYPE=Estonian_Estonia.1257
[3] LC_MONETARY=Estonian_Estonia.1257 LC_NUMERIC=C
[5] LC_TIME=Estonian_Estonia.1257
attached base packages:
[1] stats graphics grDevices utils datasets methods base
other attached packages:
[1] jsonlite_0.9.19 httr_1.0.0 RCurl_1.95-4.7 bitops_1.0-6 shiny_0.12.2
loaded via a namespace (and not attached):
[1] Rcpp_0.12.2 digest_0.6.8 mime_0.4 R6_2.1.1
[5] xtable_1.7-4 magrittr_1.5 stringi_1.0-1 curl_0.9.4
[9] tools_3.2.2 stringr_1.0.0 httpuv_1.3.3 rsconnect_0.4.1.4
[13] htmltools_0.2.6

Don't skip SSL verification; try:
fromJSON(content(GET("https://www.riigiteenused.ee/api/et/all"), "text"))
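Fleshing that out a bit (a sketch, assuming the API returns UTF-8 JSON; the flatten step comes from the question's requirements):

library(httr)
library(jsonlite)

# Fetch over HTTPS with httr (curl handles the TLS handshake), pull the
# body out as text, then parse and flatten with jsonlite
resp <- GET("https://www.riigiteenused.ee/api/et/all")
txt <- content(resp, as = "text", encoding = "UTF-8")
data <- fromJSON(txt, flatten = TRUE)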

I tried this solution, which worked fine both on my computer and on the shiny server:

library(jsonlite)

# rjson is not needed here; jsonlite's fromJSON does the parsing and flattening
url <- "https://www.riigiteenused.ee/api/et/all"
data <- fromJSON(url, flatten = TRUE)

Incorporating Sliders into a Leaflet Map

I was following the tutorial here (https://rstudio.github.io/crosstalk/) and tried to run this code:
library(crosstalk)
library(leaflet)
library(DT)

# Wrap data frame in SharedData
sd <- SharedData$new(quakes[sample(nrow(quakes), 100), ])

# Create a filter input
filter_slider("mag", "Magnitude", sd, column = ~mag, step = 0.1, width = 250)

# Use SharedData like a dataframe with Crosstalk-enabled widgets
map <- bscols(
  leaflet(sd) %>% addTiles() %>% addMarkers(),
  datatable(sd, extensions = "Scroller", style = "bootstrap", class = "compact",
            width = "100%",
            options = list(deferRender = TRUE, scrollY = 300, scroller = TRUE))
)
The map seems to render, but the interactive slider does not appear. Also, I cannot seem to save this map:
library(htmlwidgets)
saveWidget(map, "map.html", selfcontained = F, libdir = "lib")
Error in .getNamespace(pkg) :
invalid type/length (symbol/0) in vector allocation
I heard that the slider might require installing some further add-ons, but I have not been able to find out how to do this.
Does anyone know what I can do to resolve this problem?
Thank you!
> sessionInfo()
R version 4.1.3 (2022-03-10)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows 10 x64 (build 22000)
Matrix products: default
attached base packages:
[1] stats graphics grDevices utils datasets methods base
other attached packages:
[1] DT_0.23 leaflet_2.1.1 crosstalk_1.2.0
loaded via a namespace (and not attached):
[1] jquerylib_0.1.4 pillar_1.7.0 compiler_4.1.3 bslib_0.3.1 tools_4.1.3 digest_0.6.29 lubridate_1.8.0
[8] jsonlite_1.8.0 lifecycle_1.0.1 tibble_3.1.6 pkgconfig_2.0.3 rlang_1.0.2 cli_3.3.0 DBI_1.1.2
[15] yaml_2.3.5 xfun_0.30 fastmap_1.1.0 dplyr_1.0.9 stringr_1.4.0 generics_0.1.3 vctrs_0.4.1
[22] htmlwidgets_1.5.4 sass_0.4.1 hms_1.1.1 tidyselect_1.1.2 glue_1.6.2 R6_2.5.1 fansi_1.0.3
[29] purrr_0.3.4 tidyr_1.2.0 readr_2.1.2 tzdb_0.3.0 magrittr_2.0.2 ellipsis_0.3.2 htmltools_0.5.2
[36] assertthat_0.2.1 utf8_1.2.2 tinytex_0.40 stringi_1.7.6 lazyeval_0.2.2 crayon_1.5.1
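One possible workaround, assuming the saveWidget() failure comes from bscols() returning browsable HTML rather than a single htmlwidget: include the slider in the bscols() layout and save the composite page with htmltools::save_html() instead. A sketch:

library(crosstalk)
library(leaflet)
library(DT)
library(htmltools)

sd <- SharedData$new(quakes[sample(nrow(quakes), 100), ])

# Put the filter slider inside the layout so it is part of the saved page;
# widths of 12 stack the three elements as full-width rows
page <- bscols(
  widths = c(12, 12, 12),
  filter_slider("mag", "Magnitude", sd, column = ~mag, step = 0.1),
  leaflet(sd) %>% addTiles() %>% addMarkers(),
  datatable(sd, extensions = "Scroller", style = "bootstrap", class = "compact",
            width = "100%",
            options = list(deferRender = TRUE, scrollY = 300, scroller = TRUE))
)

# bscols() returns browsable HTML, not an htmlwidget, so save the page with
# htmltools::save_html() rather than htmlwidgets::saveWidget()
save_html(page, "map.html")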

Shiny not displaying table with HTML/JSON error message

I'm trying to put together a simple shiny app that will send a search request, return a data frame and display it in the UI. When I run the app, everything appears to function correctly at first, but when I run a query I get an HTML/JSON error.
Here is the code:
ui <- fluidPage(
  # Application title
  titlePanel("My App"),
  sidebarLayout(
    sidebarPanel(
      textInput('dataset_name',
                'Dataset:',
                placeholder = 'Name'),
      br(),
      actionButton("button", "Search")
    ),
    mainPanel(
      tableOutput('userTable')
    ),
    position = c("left"),
    fluid = FALSE
  )
)

server <- function(input, output) {
  ut.df <- eventReactive(input$button, {
    ds <- dataSearch(input$dataset_name)
    return(ds)
  })
  output$userTable <- renderTable({ ut.df() })
}
dataSearch is the function I've created to send the input$dataset_name value to an API call and return a dataframe of the results. I've tested the function and it parses the response JSON and returns the dataframe without issue.
When I run the shiny app the page loads with no problem, but when I submit a query, instead of rendering the data frame as a table I get:
Warning: Error in : lexical error: invalid char in json text.
<!DOCTYPE HTML PUBLIC "-//W3C//
(right here) ------^
Can anyone explain why the table is not being rendered, and why shiny seems to treat the HTML response as JSON?
Session info:
R version 4.1.2 (2021-11-01)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows 10 x64 (build 19042)
Matrix products: default
locale:
[1] LC_COLLATE=English_United States.1252 LC_CTYPE=English_United States.1252
[3] LC_MONETARY=English_United States.1252 LC_NUMERIC=C
[5] LC_TIME=English_United States.1252
attached base packages:
[1] stats graphics grDevices utils datasets methods base
other attached packages:
[1] DT_0.20 jsonlite_1.7.2 httr_1.4.2 shiny_1.7.1
loaded via a namespace (and not attached):
[1] Rcpp_1.0.7 jquerylib_0.1.4 bslib_0.3.1 compiler_4.1.2
[5] pillar_1.6.4 later_1.3.0 neo4r_0.1.1 tools_4.1.2
[9] digest_0.6.28 lattice_0.20-45 lifecycle_1.0.1 tibble_3.1.6
[13] png_0.1-7 pkgconfig_2.0.3 rlang_0.4.12 Matrix_1.3-4
[17] cli_3.1.0 rstudioapi_0.13 crosstalk_1.2.0 yaml_2.2.1
[21] curl_4.3.2 fastmap_1.1.0 withr_2.4.2 dplyr_1.0.7
[25] htmlwidgets_1.5.4 sass_0.4.0 rappdirs_0.3.3 generics_0.1.1
[29] vctrs_0.3.8 rprojroot_2.0.2 grid_4.1.2 attempt_0.3.1
[33] tidyselect_1.1.1 fontawesome_0.2.2 here_1.0.1 reticulate_1.22
[37] glue_1.5.0 data.table_1.14.2 R6_2.5.1 fansi_0.5.0
[41] purrr_0.3.4 tidyr_1.1.4 magrittr_2.0.1 promises_1.2.0.1
[45] ellipsis_0.3.2 htmltools_0.5.2 mime_0.12 xtable_1.8-4
[49] httpuv_1.6.3 utf8_1.2.2 cachem_1.0.6 crayon_1.4.2
This error means that the document you're trying to read with {jsonlite} is not a JSON file, but an HTML file.
For example, you can reproduce this error with:
> jsonlite::read_json("https://google.com")
Error in parse_con(txt, bigint_as_char) :
lexical error: invalid char in json text.
<!DOCTYPE html><html lang="fr"
(right here) ------^
So you need to make sure that the JSON you're reading is correct.
Colin
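A defensive pattern following from that advice (a sketch; the URL is a placeholder for whatever dataSearch() actually calls) is to check the HTTP status and content type before handing the body to fromJSON:

library(httr)
library(jsonlite)

resp <- GET("https://example.com/api/search")  # placeholder for the real API call
stop_for_status(resp)                          # fail early on HTTP errors
if (http_type(resp) != "application/json") {
  stop("API returned ", http_type(resp), " instead of JSON")
}
df <- fromJSON(content(resp, as = "text", encoding = "UTF-8"))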

Trying to find hyperlinks by scraping

So I am fairly new to the topic of web scraping. I am trying to find all the hyperlinks that the HTML code of the following page contains:
https://www.exito.com/mercado/lacteos-huevos-y-refrigerados/leches
So this is what I tried:
library(rvest)

url <- "https://www.exito.com/mercado/lacteos-huevos-y-refrigerados/leches"
webpage <- read_html(url)
html_attr(html_nodes(webpage, "a"), "href")
The result only contains about 6 links, but just by viewing the page you can see that there are many more hyperlinks. For example, the code behind the first image contains something like: <a href="/leche-entera-sixpack-en-bolsa-x-11-litros-cu-807650/p" class="vtex-product-summary-2-x-clearLink h-100 flex flex-column"> ...
What am I doing wrong?
You won't be able to get the a tags you're after, because that part of the website is not visible to HTML/XML parsers: it's a dynamic part of the site that changes as you navigate. The only 'static' part of the website is the top header, which is why you only got 6 a tags: the six a tags from the header.
For this, we need to mimic the behavior of a browser (firefox, chrome, etc...), go into the website (note that we're not entering the website as an html/xml parser but as a 'user' through a browser) and read the html/xml source code from there.
For this we'll need the R package RSelenium. Make sure you install it correctly together with docker, as none of the code below can work without it.
After you install RSelenium and docker, run docker run -d -p 4445:4444 selenium/standalone-firefox:2.53.1 from your terminal (on Linux you can run it straight from the terminal; on Windows you'll have to download a docker terminal and run it there). After that you're all set to reproduce the code below.
Why your approach didn't work
In the browser's DOM inspector we need to access the 5th div tag under the main container. This 5th div shows three dots (...) inside, denoting collapsed code: this is precisely where the whole bottom part of the website lives (including the a tags you're after). If we try to access this 5th tag using rvest or xml2, we find nothing:
library(xml2)
library(dplyr)
#>
#> Attaching package: 'dplyr'
#> The following objects are masked from 'package:stats':
#>
#>     filter, lag
#> The following objects are masked from 'package:base':
#>
#>     intersect, setdiff, setequal, union

lnk <- "https://www.exito.com/mercado/lacteos-huevos-y-refrigerados/leches?page=2"

# Note how the 5th div element is empty even though it should contain the
# lower part of the website
lnk %>%
  read_html() %>%
  xml_find_all("//div[@class='flex flex-grow-1 w-100 flex-column']") %>%
  xml_children()
#> {xml_nodeset (6)}
#> [1] <div class=""></div>\n
#> [2] <div class=""></div>\n
#> [3] <div class=""></div>\n
#> [4] <div class=""></div>\n
#> [5] <div class=""></div>\n
#> [6] <div class=""></div>
Note how the 5th div tag doesn't have any code inside. A simple html/xml parser won't catch it.
How it can work
We need to use RSelenium. After you've installed everything correctly, we need to setup a 'remote driver', open it and navigate to the website. All of these steps are just to make sure that we're coming into the website as a 'normal' user from a browser. This will make sure that we can access the rendered code that we actually see when we enter the website. Below are the detailed steps from entering the website and constructing the links.
# Make sure you install docker correctly: https://docs.ropensci.org/RSelenium/articles/docker.html
library(RSelenium)

# After installing docker and before running the code, make sure you run
# the rselenium docker image: docker run -d -p 4445:4444 selenium/standalone-firefox:2.53.1

# Now, set up your remote driver
remDr <- remoteDriver(
  remoteServerAddr = "localhost",
  port = 4445L,
  browserName = "firefox"
)

# Initiate the driver
remDr$open(silent = TRUE)

# Navigate to the exito.com website
remDr$navigate(lnk)

prod_links <-
  # Get the html source code
  remDr$getPageSource()[[1]] %>%
  read_html() %>%
  # Find all a tags which have a certain class
  # I searched for this tag manually in the website code and saw that all
  # products had an a tag sharing the same class
  xml_find_all("//a[@class='vtex-product-summary-2-x-clearLink h-100 flex flex-column']") %>%
  # Extract the href attribute
  xml_attr("href") %>%
  paste0("https://www.exito.com", .)

prod_links
#> [1] "https://www.exito.com/leche-semidescremada-deslactosada-en-bolsa-x-900-ml-145711/p"
#> [2] "https://www.exito.com/leche-entera-en-bolsa-x-900-ml-145704/p"
#> [3] "https://www.exito.com/leche-entera-sixpack-x-1300-ml-cu-987433/p"
#> [4] "https://www.exito.com/leche-deslactosada-en-caja-x-1-litro-878473/p"
#> [5] "https://www.exito.com/leche-polvo-deslactos-semidesc-764522/p"
#> [6] "https://www.exito.com/leche-slight-sixpack-en-caja-x-1050-ml-cu-663528/p"
#> [7] "https://www.exito.com/leche-semidescremada-sixpack-en-caja-x-1050-ml-cu-663526/p"
#> [8] "https://www.exito.com/leche-descremada-sixpack-x-1300-ml-cu-563046/p"
#> [9] "https://www.exito.com/of-leche-deslact-pag-5-lleve-6-439057/p"
#> [10] "https://www.exito.com/sixpack-de-leche-descremada-x-1100-ml-cu-414454/p"
#> [11] "https://www.exito.com/leche-en-polvo-klim-fortificada-360g-239085/p"
#> [12] "https://www.exito.com/leche-deslactosada-descremada-en-caja-x-1-litro-238291/p"
#> [13] "https://www.exito.com/leche-deslactosada-en-caja-x-1-litro-157334/p"
#> [14] "https://www.exito.com/leche-entera-larga-vida-en-caja-x-1-litro-157332/p"
#> [15] "https://www.exito.com/leche-en-polvo-klim-fortificada-780g-138121/p"
#> [16] "https://www.exito.com/leche-entera-en-bolsa-x-1-litro-125079/p"
#> [17] "https://www.exito.com/leche-entera-en-bolsa-sixpack-x-11-litros-cu-59651/p"
#> [18] "https://www.exito.com/leche-deslactosada-descremada-sixpack-x-11-litros-cu-22049/p"
#> [19] "https://www.exito.com/leche-entera-en-polvo-instantanea-x-760-gr-835923/p"
#> [20] "https://www.exito.com/of-alpin-cja-cho-pag9-llev12/p"
Hope this answers your questions
The data, including the urls, are returned dynamically from a GraphQL query you can observe in the network tab when clicking Mostrar más on the page. This is why the content is not present in your initial query - it has not yet been requested.
XHR for the product info
The relevant XHR is visible in the network tab of dev tools; its query string carries the params described below (screenshots omitted).
You can do away with most of the request info. What you do need is the extensions param. More specifically, you need to provide the sha256Hash and the base64 encoded string value associated with the variables key in the persistedQuery.
The SHA256 Hash
The appropriate hash can be extracted from at least one of the js files which essentially governs the setup. An example file you can use is:
https://exitocol.vtexassets.com/_v/public/assets/v1/published/bundle/public/react/asset.min.js?v=1&files=vtex.store-resources@0.38.0,OrderFormContext,Mutations,Queries,PWAContext&files=exitocol.store-components@0.0.2,common,11,3,SearchBar&files=vtex.responsive-values@0.2.0,common,useResponsiveValues&files=vtex.slider@0.7.3,common,0,Dots,Slide,Slider,SliderContainer&files=exito.components@4.0.7,common,0,1,3,4&workspace=master
The query hash can be regex'd from the response text of an XHR request to this URI. The regex is explained at https://regex101.com/r/VdC27H/5, and the first match is sufficient.
To apply it in R with stringr, you will need some extra escapes, e.g. \s becomes \\s.
The Base64 encoded product query
The base64 encoded string you can generate yourself with an appropriate library, e.g. the base64encode function in the caTools R package.
The encoded string looks like (depending on page/result batch):
eyJ3aXRoRmFjZXRzIjpmYWxzZSwiaGlkZVVuYXZhaWxhYmxlSXRlbXMiOmZhbHNlLCJza3VzRmlsdGVyIjoiQUxMX0FWQUlMQUJMRSIsInF1ZXJ5IjoiMTQ4IiwibWFwIjoicHJvZHVjdENsdXN0ZXJJZHMiLCJvcmRlckJ5IjoiT3JkZXJCeVRvcFNhbGVERVNDIiwiZnJvbSI6MjAsInRvIjozOX0=
Decoded:
{"withFacets":false,"hideUnavailableItems":false,"skusFilter":"ALL_AVAILABLE","query":"148","map":"productClusterIds","orderBy":"OrderByTopSaleDESC","from":20,"to":39}
The from and to params are the offsets for the results batches of products which come in batches of twenty. So, you can write functions which return the appropriate sha256 hash and send a subsequent request for product info where you base64 encode, with the appropriate library, the string above and alter the from and to params as required. Potentially others as well (have a play!).
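As a quick sanity check, the encode/decode round trip in R might look like this (a sketch, assuming caTools; the query string is the decoded example above):

library(caTools)

# Encode the plain-text query, then decode it again and compare
qry <- '{"withFacets":false,"hideUnavailableItems":false,"skusFilter":"ALL_AVAILABLE","query":"148","map":"productClusterIds","orderBy":"OrderByTopSaleDESC","from":20,"to":39}'
enc <- base64encode(qry)
identical(rawToChar(base64decode(enc, what = "raw")), qry)  # should be TRUE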
The XHR response is JSON, so you might need a JSON library (e.g. jsonlite) to handle the result (UPDATE: it seems you don't with R and httr). You can extract the links from a list of dictionaries nested within result['data']['products'], as per the Python example, where result is the JSON object retrieved from the XHR with the from and to params.
Examples using R and Python are shown below (N.B. I am less familiar with R); the description above has been kept fairly language agnostic. Bear in mind that, whilst I am extracting the urls, the returned JSON has a lot more info, including product title, price, image info etc.
TODO:
Add in error handling
Use Session objects to benefit from re-use of underlying tcp connection especially if making multiple requests to get all products
Add in functionality to return total product number and loop structure to retrieve all (Python example might benefit from decorator)
R (a quick first go):
library(purrr)
library(stringr)
library(caTools)
library(httr)

get_links <- function(sha, start, end){
  string <- paste0('{"withFacets":false,"hideUnavailableItems":false,"skusFilter":"ALL_AVAILABLE","query":"148","map":"productClusterIds","orderBy":"OrderByTopSaleDESC","from":', start, ',"to":', end, '}')
  base64encoded <- caTools::base64encode(string)
  params <- list(
    extensions = paste0('{"persistedQuery":{"version":1,"sha256Hash":"', sha, '","sender":"vtex.store-resources@0.x","provider":"vtex.search-graphql@0.x"},"variables":"', base64encoded, '"}')
  )
  product_info <- content(httr::GET(url = 'https://www.exito.com/_v/segment/graphql/v1', query = params))$data$products
  links <- map(product_info, ~ .x$link)
  return(links)
}

start <- '0'
end <- '19'

sha <- httr::GET('https://exitocol.vtexassets.com/_v/public/assets/v1/published/bundle/public/react/asset.min.js?v=1&files=vtex.store-resources@0.38.0,OrderFormContext,Mutations,Queries,PWAContext&files=exitocol.store-components@0.0.2,common,11,3,SearchBar&files=vtex.responsive-values@0.2.0,common,useResponsiveValues&files=vtex.slider@0.7.3,common,0,Dots,Slide,Slider,SliderContainer&files=exito.components@4.0.7,common,0,1,3,4&workspace=master') %>%
  content(as = "text") %>%
  str_match('query\\s+productSearch.*?hash:\\s+"(.*?)"') %>%
  .[[2]]

links <- get_links(sha, start, end)
print(links)
Py:
import requests, base64, re, json

def get_sha():
    r = requests.get('https://exitocol.vtexassets.com/_v/public/assets/v1/published/bundle/public/react/asset.min.js?v=1&files=vtex.store-resources@0.38.0,OrderFormContext,Mutations,Queries,PWAContext&files=exitocol.store-components@0.0.2,common,11,3,SearchBar&files=vtex.responsive-values@0.2.0,common,useResponsiveValues&files=vtex.slider@0.7.3,common,0,Dots,Slide,Slider,SliderContainer&files=exito.components@4.0.7,common,0,1,3,4&workspace=master')
    p = re.compile(r'query\s+productSearch.*?hash:\s+"(.*?)"')  # https://regex101.com/r/VdC27H/5
    sha = p.findall(r.text)[0]
    return sha

def get_json(sha, start, end):
    # these 'from' and 'to' values correspond with page # as pages cover
    # batches of 20 e.g. start 20 end 39
    string = '{"withFacets":false,"hideUnavailableItems":false,"skusFilter":"ALL_AVAILABLE","query":"148","map":"productClusterIds","orderBy":"OrderByTopSaleDESC","from":' + start + ',"to":' + end + '}'
    base64encoded = base64.b64encode(string.encode('utf-8')).decode()
    params = (('extensions', '{"persistedQuery":{"sha256Hash":"' + sha + '","sender":"vtex.store-resources@0.x","provider":"vtex.search-graphql@0.x"},"variables":"' + base64encoded + '"}'),)
    r = requests.get('https://www.exito.com/_v/segment/graphql/v1', params=params)
    return r.json()

def get_links(sha, start, end):
    result = get_json(sha, start, end)
    links = [i['link'] for i in result['data']['products']]
    return links

sha = get_sha()
links = get_links(sha, '0', '19')
# print(len(links))
print(links)

Accessing the internet with knitr

When I run some code to draw a map using ggmap, it runs fine in RStudio. When I run it using knitr it fails with the following error message:
Error in download.file(url, destfile = destfile, quiet = !messaging, mode = "wb") :
cannot open URL 'http://maps.googleapis.com/maps/api/staticmap?center=-40.851253,172.799669&zoom=19&size=%20640x640&scale=%202&maptype=hybrid&sensor=false'
Calls: ... eval -> eval -> get_map -> get_googlemap -> download.file
In addition: Warning message:
In download.file(url, destfile = destfile, quiet = !messaging, mode = "wb") :
unable to connect to 'maps.googleapis.com' on port 80.
Execution halted
I am sure this is due to the way our network is set up, probably around permissions, but can anyone give me any clues as to how knitr tries to access the internet to download a map, so that I can find a way through our firewall?
Code added below; it works fine except through our network.
---
title: "Drawing a map"
author: "Alasdair Noble"
output: word_document
---
To draw a map
```{r echo=TRUE, warning=FALSE , results='markup', comment="",message=FALSE }
library(ggplot2)
library(grid)
library(GGally)
library(plyr)
library(RColorBrewer)
library(ggmap)
library(ggthemes)
```
```{r echo=TRUE, warning=FALSE , results='markup', comment="", message=FALSE }
Btrcup <- get_map(location=c(lon=171.799669, lat=-42.851253),zoom=19, maptype="hybrid")
Btrcupmap <- ggmap(Btrcup)
Btrcupmap
```
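One thing worth trying, assuming the difference is that the R session knitr spawns does not inherit the proxy settings RStudio picks up: set the proxy explicitly in a setup chunk before calling get_map(). The host and port below are placeholders for your site's actual proxy.
```{r echo=FALSE}
# Hypothetical proxy settings -- replace with your network's real values
Sys.setenv(http_proxy  = "http://proxy.example.com:8080",
           https_proxy = "http://proxy.example.com:8080")
```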

Parsing issue with facebook data in fromJSON function (R) - unexpected character error

I am trying to pull Facebook feed data from various pages to compare sentiment, and I am running into trouble when converting the JSON raw text into a list object in R.
require(RCurl)
require(rjson)

access_token <- "XXXXXXXXXXXXXXXX"

FacebookScrape <- function(path = "me", access_token, options){
  if (!missing(options)){
    options <- sprintf("?%s", paste(names(options), "=", unlist(options), collapse = "&", sep = ""))
  } else {
    options <- ""
  }
  data <- getURL(sprintf("https://graph.facebook.com/%s%s&access_token=%s", path, options, access_token),
                 ssl.verifypeer = FALSE)
  fromJSON(data, unexpected.escape = "skip")
}

cb.path <- "24329337724/feed?limit=300&offset=0&__after_id=354707562896&"
cb.feed <- FacebookScrape(path = cb.path, access_token = access_token)
This code returns the following error message:
Error in fromJSON(data, unexpected.escape = "skip") :
unexpected character: c
I'm not very familiar with JSON, but I know the error occurs in the fromJSON call at the end of FacebookScrape. That function calls into C, so using debug() doesn't tell me very much. I'm also not really sure how a simple character "c" could cause an error if the JSON text is formatted properly; it's not as if "c" is an escape character. I also account for escape characters with the unexpected.escape = "skip" option in fromJSON.
I have determined that the error occurs when parsing this post (there is no error if I set limit=261 in cb.path, but there is if I have limit=262). Has anyone run into a similar problem? Any help would be greatly appreciated.
Session Info:
R version 2.15.3 (2013-03-01)
Platform: x86_64-w64-mingw32/x64 (64-bit)
locale:
[1] LC_COLLATE=English_United States.1252 LC_CTYPE=English_United States.1252
[3] LC_MONETARY=English_United States.1252 LC_NUMERIC=C
[5] LC_TIME=English_United States.1252
attached base packages:
[1] stats graphics grDevices utils datasets methods base
other attached packages:
[1] streamR_0.1 wordcloud_2.2 RColorBrewer_1.0-5 Rcpp_0.10.2 stringr_0.6.2
[6] plyr_1.8 tm_0.5-8.3 twitteR_1.1.6 rjson_0.2.12 ROAuth_0.9.3
[11] digest_0.6.2 ggplot2_0.9.3.1 XML_3.95-0.1 RCurl_1.95-4.1 bitops_1.0-5
loaded via a namespace (and not attached):
[1] colorspace_1.2-1 dichromat_2.0-0 grid_2.15.3 gtable_0.1.2 labeling_0.1 MASS_7.3-23
[7] munsell_0.4 proto_0.3-10 reshape2_1.2.2 scales_0.2.3 slam_0.1-27 tools_2.15.3
I had the same issue. Based on callAPI from Rfacebook (https://github.com/pablobarbera/Rfacebook/blob/master/Rfacebook/R/utils.R), use fromJSON(rawToChar(...)) on the raw response content:
facebook <- function(url, token){
  if (class(token) == "config"){
    url.data <- GET(url, config = token)
  }
  if (class(token) == "Token2.0"){
    url.data <- GET(url, config(token = token))
  }
  if (class(token) == "character"){
    url <- paste0(url, "&access_token=", token)
    url <- gsub(" ", "%20", url)
    url.data <- GET(url)
  }
  if (class(token) != "character" & class(token) != "config" & class(token) != "Token2.0"){
    stop("Error in access token. See help for details.")
  }
  # Convert the raw response bytes to a character string before parsing
  content <- fromJSON(rawToChar(url.data$content))
  if (length(content$error) > 0){
    stop(content$error$message)
  }
  return(content)
}
Call the facebook function:

next.path <- "https://graph.facebook.com/29092950651/posts"
facebook(url = next.path, token)

Your access_token will stay active for about 2 hours. I use fb_oauth, based on http://blog.revolutionanalytics.com/2013/11/how-to-analyze-you-facebook-friends-network-with-r.html
Best regards
Robert
I have examined your JSON; the cause is this line:

"message": "true\",

The trailing backslash escapes the closing quote, so the string is never terminated where intended: the \" is parsed as a literal quote inside the string and the real closing quote disappears. The parser then runs on to the next line, which starts with can_comment, and that bare c is what triggers the "unexpected character: c" error.
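A minimal reproduction of that diagnosis (a sketch; the field names are taken from the post above):

library(rjson)

# The backslash before the closing quote escapes it, so the string really
# ends at the *next* quote; the parser then hits the bare token can_comment
# and should report the unexpected character "c", matching the error above
bad <- '{"message": "true\\", "can_comment": true}'
try(fromJSON(bad))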