Incorporating Sliders into a Leaflet Map - html

I was following this tutorial here (https://rstudio.github.io/crosstalk/) and tried to run the code:
library(crosstalk)
library(leaflet)
library(DT)
# Wrap data frame in SharedData
sd <- SharedData$new(quakes[sample(nrow(quakes), 100),])
# Create a filter input
filter_slider("mag", "Magnitude", sd, column=~mag, step=0.1, width=250)
# Use SharedData like a dataframe with Crosstalk-enabled widgets
map <- bscols(
  leaflet(sd) %>% addTiles() %>% addMarkers(),
  datatable(sd, extensions = "Scroller", style = "bootstrap", class = "compact", width = "100%",
            options = list(deferRender = TRUE, scrollY = 300, scroller = TRUE))
)
The map itself seems to render, but the interactive slider does not appear.
Also, I cannot seem to save this map:
library(htmlwidgets)
saveWidget(map, "map.html", selfcontained = F, libdir = "lib")
Error in .getNamespace(pkg) :
invalid type/length (symbol/0) in vector allocation
I heard that the slider might require installing some further add-ons, but I have not been able to find out how to do this.
Does anyone know what I can do to resolve this problem?
Thank you!
> sessionInfo()
R version 4.1.3 (2022-03-10)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows 10 x64 (build 22000)
Matrix products: default
attached base packages:
[1] stats graphics grDevices utils datasets methods base
other attached packages:
[1] DT_0.23 leaflet_2.1.1 crosstalk_1.2.0
loaded via a namespace (and not attached):
[1] jquerylib_0.1.4 pillar_1.7.0 compiler_4.1.3 bslib_0.3.1 tools_4.1.3 digest_0.6.29 lubridate_1.8.0
[8] jsonlite_1.8.0 lifecycle_1.0.1 tibble_3.1.6 pkgconfig_2.0.3 rlang_1.0.2 cli_3.3.0 DBI_1.1.2
[15] yaml_2.3.5 xfun_0.30 fastmap_1.1.0 dplyr_1.0.9 stringr_1.4.0 generics_0.1.3 vctrs_0.4.1
[22] htmlwidgets_1.5.4 sass_0.4.1 hms_1.1.1 tidyselect_1.1.2 glue_1.6.2 R6_2.5.1 fansi_1.0.3
[29] purrr_0.3.4 tidyr_1.2.0 readr_2.1.2 tzdb_0.3.0 magrittr_2.0.2 ellipsis_0.3.2 htmltools_0.5.2
[36] assertthat_0.2.1 utf8_1.2.2 tinytex_0.40 stringi_1.7.6 lazyeval_0.2.2 crayon_1.5.1
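One hedged suggestion (not tested against this exact setup): bscols() returns an htmltools tag list, not a single htmlwidget, which is likely why htmlwidgets::saveWidget() fails on it; and the crosstalk tutorial composes the slider into the same bscols() call, so a slider printed on its own never makes it into the page. A minimal sketch along those lines, saved with htmltools::save_html() instead:
library(crosstalk)
library(leaflet)
library(DT)
library(htmltools)

sd <- SharedData$new(quakes[sample(nrow(quakes), 100), ])

# Compose the slider, map and table into one layout object
map <- bscols(
  filter_slider("mag", "Magnitude", sd, column = ~mag, step = 0.1, width = 250),
  leaflet(sd) %>% addTiles() %>% addMarkers(),
  datatable(sd, extensions = "Scroller", style = "bootstrap", class = "compact", width = "100%",
            options = list(deferRender = TRUE, scrollY = 300, scroller = TRUE))
)

# save_html() accepts arbitrary htmltools content, unlike saveWidget()
save_html(map, "map.html")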

Related

Shiny not displaying table with HTML/JSON error message

I'm trying to put together a simple shiny app that will send a search request, return a data frame, and display it in the UI. When I run the app, everything appears to function correctly at first, but when I run a query I get an html/json error.
Here is the code:
ui <- fluidPage(
  # Application title
  titlePanel("My App"),
  sidebarLayout(
    sidebarPanel(
      textInput('dataset_name',
                'Dataset:',
                placeholder = 'Name'),
      br(),
      actionButton("button", "Search")
    ),
    mainPanel(
      tableOutput('userTable')
    ),
    position = c("left"),
    fluid = FALSE
  )
)

server <- function(input, output) {
  ut.df <- eventReactive(input$button, {
    ds <- dataSearch(input$dataset_name)
    return(ds)
  })
  output$userTable <- renderTable({ut.df()})
}
dataSearch is the function I've created to send the input$dataset_name value to an api call and return a dataframe of the results. I've tested the function and it parses the response JSON and returns the dataframe without issue.
When I run the shiny app, the page loads with no problem, but when I submit a query, instead of rendering the data frame as a table I get:
Warning: Error in : lexical error: invalid char in json text.
<!DOCTYPE HTML PUBLIC "-//W3C//
(right here) ------^
Can anyone explain why the table is not being rendered and why shiny seems to think the html code is a json file?
Session info:
R version 4.1.2 (2021-11-01)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows 10 x64 (build 19042)
Matrix products: default
locale:
[1] LC_COLLATE=English_United States.1252 LC_CTYPE=English_United States.1252
[3] LC_MONETARY=English_United States.1252 LC_NUMERIC=C
[5] LC_TIME=English_United States.1252
attached base packages:
[1] stats graphics grDevices utils datasets methods base
other attached packages:
[1] DT_0.20 jsonlite_1.7.2 httr_1.4.2 shiny_1.7.1
loaded via a namespace (and not attached):
[1] Rcpp_1.0.7 jquerylib_0.1.4 bslib_0.3.1 compiler_4.1.2
[5] pillar_1.6.4 later_1.3.0 neo4r_0.1.1 tools_4.1.2
[9] digest_0.6.28 lattice_0.20-45 lifecycle_1.0.1 tibble_3.1.6
[13] png_0.1-7 pkgconfig_2.0.3 rlang_0.4.12 Matrix_1.3-4
[17] cli_3.1.0 rstudioapi_0.13 crosstalk_1.2.0 yaml_2.2.1
[21] curl_4.3.2 fastmap_1.1.0 withr_2.4.2 dplyr_1.0.7
[25] htmlwidgets_1.5.4 sass_0.4.0 rappdirs_0.3.3 generics_0.1.1
[29] vctrs_0.3.8 rprojroot_2.0.2 grid_4.1.2 attempt_0.3.1
[33] tidyselect_1.1.1 fontawesome_0.2.2 here_1.0.1 reticulate_1.22
[37] glue_1.5.0 data.table_1.14.2 R6_2.5.1 fansi_0.5.0
[41] purrr_0.3.4 tidyr_1.1.4 magrittr_2.0.1 promises_1.2.0.1
[45] ellipsis_0.3.2 htmltools_0.5.2 mime_0.12 xtable_1.8-4
[49] httpuv_1.6.3 utf8_1.2.2 cachem_1.0.6 crayon_1.4.2
This error means that the document you're trying to read with {jsonlite} is not a JSON file, but an HTML file.
For example, you can reproduce this error with:
> jsonlite::read_json("https://google.com")
Error in parse_con(txt, bigint_as_char) :
lexical error: invalid char in json text.
<!DOCTYPE html><html lang="fr"
(right here) ------^
So you need to make sure that the URL you're requesting actually returns JSON rather than an HTML page (an error page, a login page, etc.).
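For instance, a quick way to check what the endpoint really returns before parsing (a hedged sketch using httr; api_url is a hypothetical stand-in for whatever dataSearch() requests):
library(httr)

api_url <- "https://example.com/search?q=test"  # hypothetical stand-in
resp <- GET(api_url)
http_type(resp)  # should be "application/json" before you try to parse

if (http_type(resp) == "application/json") {
  parsed <- jsonlite::fromJSON(content(resp, as = "text", encoding = "UTF-8"))
} else {
  # You received HTML (an error page, a login page, ...) instead of JSON
  substr(content(resp, as = "text", encoding = "UTF-8"), 1, 200)
}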
Colin

Trying to find hyperlinks by scraping

So I am fairly new to the topic of webscraping. I am trying to find all the hyperlinks that the html code of the following page contains:
https://www.exito.com/mercado/lacteos-huevos-y-refrigerados/leches
So this is what I tried:
library(rvest)  # provides read_html(), html_nodes(), html_attr()

url <- "https://www.exito.com/mercado/lacteos-huevos-y-refrigerados/leches"
webpage <- read_html(url)
html_attr(html_nodes(webpage, "a"), "href")
The result only contains about six links, but just by viewing the page you can see there are many more hyperlinks.
For example the code behind the first image has something like: <a href="/leche-entera-sixpack-en-bolsa-x-11-litros-cu-807650/p" class="vtex-product-summary-2-x-clearLink h-100 flex flex-column"> ...
What am I doing wrong?
You won't be able to get the a tags you're after because that part of the site is not visible to an html/xml parser: it's a dynamic part of the page that changes as you navigate, and the only 'static' part is the top header. That is why you only got six a tags: the six from the header.
For this, we need to mimic the behavior of a browser (firefox, chrome, etc.), enter the website as a 'user' through that browser rather than as an html/xml parser, and read the rendered html/xml source from there.
For this we'll need the R package RSelenium. Make sure you install it correctly, together with docker; none of the code below will work without both.
After you install RSelenium and docker, run docker run -d -p 4445:4444 selenium/standalone-firefox:2.53.1 from your terminal (on Linux you can run it directly in the terminal; on Windows you'll have to install a docker terminal and run it there). After that you're all set to reproduce the code below.
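Once the container is up, an optional sanity check (a hedged sketch) is to ask the Selenium server for its status before navigating anywhere:
library(RSelenium)
remDr <- remoteDriver(remoteServerAddr = "localhost", port = 4445L, browserName = "firefox")
remDr$getStatus()  # returns server/build info if the container is reachable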
Why your approach didn't work
We need to access the 5th div tag of the page's main container (the original answer shows a screenshot of the browser inspector here).
This 5th div tag has three dots (...) inside, denoting that there's code within: that is precisely where the whole bottom part of the site lives, including the a tags you're after. If we try to access this 5th tag using rvest or xml2, we won't find anything:
library(xml2)
library(dplyr)
#>
#> Attaching package: 'dplyr'
#> The following objects are masked from 'package:stats':
#>
#> filter, lag
#> The following objects are masked from 'package:base':
#>
#> intersect, setdiff, setequal, union
lnk <- "https://www.exito.com/mercado/lacteos-huevos-y-refrigerados/leches?page=2"

# Note how the 5th div element is empty, even though it should contain the
# lower part of the website
lnk %>%
  read_html() %>%
  xml_find_all("//div[@class='flex flex-grow-1 w-100 flex-column']") %>%
  xml_children()
#> {xml_nodeset (6)}
#> [1] <div class=""></div>\n
#> [2] <div class=""></div>\n
#> [3] <div class=""></div>\n
#> [4] <div class=""></div>\n
#> [5] <div class=""></div>\n
#> [6] <div class=""></div>
Note how the 5th div tag doesn't have any code inside. A simple html/xml parser won't catch it.
How it can work
We need to use RSelenium. After you've installed everything correctly, we need to setup a 'remote driver', open it and navigate to the website. All of these steps are just to make sure that we're coming into the website as a 'normal' user from a browser. This will make sure that we can access the rendered code that we actually see when we enter the website. Below are the detailed steps from entering the website and constructing the links.
# Make sure you install docker correctly: https://docs.ropensci.org/RSelenium/articles/docker.html
library(RSelenium)
# After installing docker and before running the code, make sure you run
# the rselenium docker image: docker run -d -p 4445:4444 selenium/standalone-firefox:2.53.1
# Now, set up your remote driver
remDr <- remoteDriver(
  remoteServerAddr = "localhost",
  port = 4445L,
  browserName = "firefox"
)
# Initiate the driver
remDr$open(silent = TRUE)
# Navigate to the exito.com website
remDr$navigate(lnk)
prod_links <-
  # Get the html source code
  remDr$getPageSource()[[1]] %>%
  read_html() %>%
  # Find all a tags which have a certain class
  # (I searched for this tag manually in the website code and saw that all
  # products have an a tag sharing the same class)
  xml_find_all("//a[@class='vtex-product-summary-2-x-clearLink h-100 flex flex-column']") %>%
  # Extract the href attribute
  xml_attr("href") %>%
  paste0("https://www.exito.com", .)

prod_links
#> [1] "https://www.exito.com/leche-semidescremada-deslactosada-en-bolsa-x-900-ml-145711/p"
#> [2] "https://www.exito.com/leche-entera-en-bolsa-x-900-ml-145704/p"
#> [3] "https://www.exito.com/leche-entera-sixpack-x-1300-ml-cu-987433/p"
#> [4] "https://www.exito.com/leche-deslactosada-en-caja-x-1-litro-878473/p"
#> [5] "https://www.exito.com/leche-polvo-deslactos-semidesc-764522/p"
#> [6] "https://www.exito.com/leche-slight-sixpack-en-caja-x-1050-ml-cu-663528/p"
#> [7] "https://www.exito.com/leche-semidescremada-sixpack-en-caja-x-1050-ml-cu-663526/p"
#> [8] "https://www.exito.com/leche-descremada-sixpack-x-1300-ml-cu-563046/p"
#> [9] "https://www.exito.com/of-leche-deslact-pag-5-lleve-6-439057/p"
#> [10] "https://www.exito.com/sixpack-de-leche-descremada-x-1100-ml-cu-414454/p"
#> [11] "https://www.exito.com/leche-en-polvo-klim-fortificada-360g-239085/p"
#> [12] "https://www.exito.com/leche-deslactosada-descremada-en-caja-x-1-litro-238291/p"
#> [13] "https://www.exito.com/leche-deslactosada-en-caja-x-1-litro-157334/p"
#> [14] "https://www.exito.com/leche-entera-larga-vida-en-caja-x-1-litro-157332/p"
#> [15] "https://www.exito.com/leche-en-polvo-klim-fortificada-780g-138121/p"
#> [16] "https://www.exito.com/leche-entera-en-bolsa-x-1-litro-125079/p"
#> [17] "https://www.exito.com/leche-entera-en-bolsa-sixpack-x-11-litros-cu-59651/p"
#> [18] "https://www.exito.com/leche-deslactosada-descremada-sixpack-x-11-litros-cu-22049/p"
#> [19] "https://www.exito.com/leche-entera-en-polvo-instantanea-x-760-gr-835923/p"
#> [20] "https://www.exito.com/of-alpin-cja-cho-pag9-llev12/p"
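When you're finished scraping, it's good practice to close the browser session (and stop the docker container afterwards):
# Close the remote browser session when finished
remDr$close()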
Hope this answers your questions
The data, including the urls, are returned dynamically from a GraphQL query, which you can observe in the network tab when clicking Mostrar más ("Show more") on the page. This is why the content is not present in your initial request: it has not yet been requested.
XHR for the product info
(The original answer includes screenshots of the relevant XHR in the network tab of dev tools and of the query params in its url query string.)
You can do away with most of the request info. What you do need is the extensions param. More specifically, you need to provide the sha256Hash and the base64 encoded string value associated with the variables key in the persistedQuery.
The SHA256 Hash
The appropriate hash can be extracted from at least one of the js files which essentially govern the setup. An example file you can use is:
https://exitocol.vtexassets.com/_v/public/assets/v1/published/bundle/public/react/asset.min.js?v=1&files=vtex.store-resources#0.38.0,OrderFormContext,Mutations,Queries,PWAContext&files=exitocol.store-components#0.0.2,common,11,3,SearchBar&files=vtex.responsive-values#0.2.0,common,useResponsiveValues&files=vtex.slider#0.7.3,common,0,Dots,Slide,Slider,SliderContainer&files=exito.components#4.0.7,common,0,1,3,4&workspace=master.
The query hash can be regexed from the response text of an xhr request to this uri. The regex is explained at https://regex101.com/r/VdC27H/5, and the first match is sufficient.
To apply it in R with stringr, you will need some extra escapes, e.g. \s becomes \\s.
The Base64 encoded product query
You can generate the base64 encoded string yourself with an appropriate library; for example, there is a base64encode function in the caTools R package.
The encoded string looks like (depending on page/result batch):
eyJ3aXRoRmFjZXRzIjpmYWxzZSwiaGlkZVVuYXZhaWxhYmxlSXRlbXMiOmZhbHNlLCJza3VzRmlsdGVyIjoiQUxMX0FWQUlMQUJMRSIsInF1ZXJ5IjoiMTQ4IiwibWFwIjoicHJvZHVjdENsdXN0ZXJJZHMiLCJvcmRlckJ5IjoiT3JkZXJCeVRvcFNhbGVERVNDIiwiZnJvbSI6MjAsInRvIjozOX0=
Decoded:
{"withFacets":false,"hideUnavailableItems":false,"skusFilter":"ALL_AVAILABLE","query":"148","map":"productClusterIds","orderBy":"OrderByTopSaleDESC","from":20,"to":39}
The from and to params are the offsets for the result batches, which come in sets of twenty. So you can write one function that returns the appropriate sha256 hash and another that sends the subsequent request for product info, base64 encoding (with an appropriate library) the string above with the from and to params altered as required. Potentially other params as well (have a play!).
The xhr response (shown as a screenshot in the original answer) is json, so you might want a json library (e.g. jsonlite) to handle the result (UPDATE: it seems you don't need one with R and httr). You can extract the links from the list of dictionaries nested within result['data']['products'], as in the Python example, where result is the json object retrieved from the xhr with the from and to params.
Examples:
Examples using R and Python are shown below (N.B. I am less familiar with R). The above has been kept fairly language agnostic.
Bear in mind, whilst I am extracting the urls, the json returned has a lot more info including product title, price, image info etc.
(Example output is shown as a screenshot in the original answer.)
TODO:
Add in error handling
Use Session objects to benefit from re-use of underlying tcp connection especially if making multiple requests to get all products
Add in functionality to return total product number and loop structure to retrieve all (Python example might benefit from decorator)
R (a quick first go):
library(purrr)
library(stringr)
library(caTools)
library(httr)
get_links <- function(sha, start, end){
  string <- paste0('{"withFacets":false,"hideUnavailableItems":false,"skusFilter":"ALL_AVAILABLE","query":"148","map":"productClusterIds","orderBy":"OrderByTopSaleDESC","from":', start, ',"to":', end, '}')
  base64encoded <- caTools::base64encode(string)
  params <- list(
    'extensions' = paste0('{"persistedQuery":{"version":1,"sha256Hash":"', sha, '","sender":"vtex.store-resources#0.x","provider":"vtex.search-graphql#0.x"},"variables":"', base64encoded, '"}')
  )
  product_info <- content(httr::GET(url = 'https://www.exito.com/_v/segment/graphql/v1', query = params))$data$products
  links <- map(product_info, ~ .x$link)
  return(links)
}

start <- '0'
end <- '19'

sha <- httr::GET('https://exitocol.vtexassets.com/_v/public/assets/v1/published/bundle/public/react/asset.min.js?v=1&files=vtex.store-resources#0.38.0,OrderFormContext,Mutations,Queries,PWAContext&files=exitocol.store-components#0.0.2,common,11,3,SearchBar&files=vtex.responsive-values#0.2.0,common,useResponsiveValues&files=vtex.slider#0.7.3,common,0,Dots,Slide,Slider,SliderContainer&files=exito.components#4.0.7,common,0,1,3,4&workspace=master') %>%
  content(., as = "text") %>%
  str_match(., 'query\\s+productSearch.*?hash:\\s+"(.*?)"') %>%
  .[[2]]

links <- get_links(sha, start, end)
print(links)
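To pull more than one batch (a hypothetical, untested extension; see the TODO list above), you could map over the offsets in steps of twenty and flatten the result:
# Fetch offsets 0-19, 20-39, ..., 80-99 and combine the links
all_links <- purrr::map(seq(0, 80, by = 20), ~ get_links(sha, .x, .x + 19)) %>%
  purrr::flatten() %>%
  unlist()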
Py:
import requests, base64, re, json

def get_sha():
    r = requests.get('https://exitocol.vtexassets.com/_v/public/assets/v1/published/bundle/public/react/asset.min.js?v=1&files=vtex.store-resources#0.38.0,OrderFormContext,Mutations,Queries,PWAContext&files=exitocol.store-components#0.0.2,common,11,3,SearchBar&files=vtex.responsive-values#0.2.0,common,useResponsiveValues&files=vtex.slider#0.7.3,common,0,Dots,Slide,Slider,SliderContainer&files=exito.components#4.0.7,common,0,1,3,4&workspace=master')
    p = re.compile(r'query\s+productSearch.*?hash:\s+"(.*?)"')  # https://regex101.com/r/VdC27H/5
    sha = p.findall(r.text)[0]
    return sha

def get_json(sha, start, end):
    # these 'from' and 'to' values correspond with page # as pages cover batches of 20 e.g. start 20 end 39
    string = '{"withFacets":false,"hideUnavailableItems":false,"skusFilter":"ALL_AVAILABLE","query":"148","map":"productClusterIds","orderBy":"OrderByTopSaleDESC","from":' + start + ',"to":' + end + '}'
    base64encoded = base64.b64encode(string.encode('utf-8')).decode()
    params = (('extensions', '{"persistedQuery":{"sha256Hash":"' + sha + '","sender":"vtex.store-resources#0.x","provider":"vtex.search-graphql#0.x"},"variables":"' + base64encoded + '"}'),)
    r = requests.get('https://www.exito.com/_v/segment/graphql/v1', params=params)
    return r.json()

def get_links(sha, start, end):
    result = get_json(sha, start, end)
    links = [i['link'] for i in result['data']['products']]
    return links

sha = get_sha()
links = get_links(sha, '0', '19')
# print(len(links))
print(links)

Shiny apps in rmarkdown websites and HTML dependencies

I've recently created an rmarkdown website. I now want to have a page that highlights basic Shiny functionality. This is possible using the runtime: shiny option for normal markdown documents. However, when I use this:
---
title: "Untitled"
runtime: shiny
output: html_document
---
```{r setup, include=FALSE}
knitr::opts_chunk$set(echo = TRUE)
```
```{r}
inputPanel(
  sliderInput("obs", "observations:", min = 10, max = 500, value = 100)
)
renderPlot({hist(rnorm(input$obs), col = 'darkgray', border = 'white')})
```
and try to build the site, I get an error (shown as a screenshot in the original post).
I've made sure all of my packages are up to date.
I get the feeling I may be fundamentally misunderstanding how websites (and Shiny) work, but I can't find an explicit answer to my question in the authoring Shiny document, embedded Shiny page or the rmarkdown website guide.
Is this a case of Shiny apps not being deployable on websites in this fashion at all? Or am I just being dense?
EDIT: output from sessionInfo():
> sessionInfo()
R version 3.3.2 (2016-10-31)
Platform: x86_64-pc-linux-gnu (64-bit)
Running under: Antergos Linux
locale:
[1] LC_CTYPE=en_US.UTF-8 LC_NUMERIC=C
[3] LC_TIME=en_US.UTF-8 LC_COLLATE=en_US.UTF-8
[5] LC_MONETARY=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8
[7] LC_PAPER=en_US.UTF-8 LC_NAME=C
[9] LC_ADDRESS=C LC_TELEPHONE=C
[11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C
attached base packages:
[1] stats graphics grDevices utils datasets methods base
loaded via a namespace (and not attached):
[1] backports_1.0.5 magrittr_1.5 rprojroot_1.2 htmltools_0.3.5
[5] tools_3.3.2 yaml_2.1.14 Rcpp_0.12.9 stringi_1.1.2
[9] rmarkdown_1.3 knitr_1.15.1 stringr_1.1.0 digest_0.6.12
[13] evaluate_0.10
When you use Shiny controls in R Markdown, you do not embed the whole application; you embed pieces of it. For example, to run that Shiny snippet in R Markdown you would use this code:
---
title: "Untitled"
runtime: shiny
output: html_document
---
```{r setup, include=FALSE}
knitr::opts_chunk$set(echo = TRUE)
```
```{r}
inputPanel(
  sliderInput("obs", "observations:", min = 10, max = 500, value = 100)
)
renderPlot({hist(rnorm(input$obs), col = 'darkgray', border = 'white')})
```
When compiled, this yields a working document: the controls work and the plot updates (the original answer shows a screenshot of the result).
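On the original question about rmarkdown websites: a document with runtime: shiny needs a live R session, so it cannot be baked into a static site by rmarkdown::render_site(); it has to be served, for example run locally or hosted on shinyapps.io / Shiny Server. A hedged sketch ("shiny_page.Rmd" is a placeholder filename):
# Serve the interactive document locally
rmarkdown::run("shiny_page.Rmd")
# Or deploy it, e.g. with rsconnect
rsconnect::deployDoc("shiny_page.Rmd")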

trying to Upload .rmd file to wordpress

I'm having trouble uploading an .rmd file to WordPress. I'm not exactly sure what's going on, but the error suggests I don't have privileges to publish remotely to WordPress, even though, as I understand it, WordPress allows remote publishing even for free accounts. I've searched all the WordPress R queries on Stack Overflow and nothing seems to work. Here's my workflow:
devtools::install_github("duncantl/RWordPress", force = T)
library(RWordPress)
# Set login parameters (replace admin,password and blog_url!)
options(WordPressLogin = c(admin = 'password'), WordPressURL = 'blog_url/xmlrpc.php')
library(markdown)
library(knitr)
options(markdown.HTML.options = c(markdownHTMLOptions(default = T),"toc"))
# Upload plots: set knitr options
opts_knit$set(upload.fun = function(file){library(RWordPress);uploadFile(file)$url;})
postThumbnail <- RWordPress::uploadFile("File.rmd",overwrite = TRUE)
That produces the following error:
Error: faultCode: 401 faultString: You do not have permission to upload files.
I also tried the following:
knit2wp('fake.rmd', title = 'TITLE', publish = FALSE)
And that produces the same error.
Here's my session info:
sessionInfo()
R version 3.3.0 (2016-05-03)
Platform: x86_64-apple-darwin13.4.0 (64-bit)
Running under: OS X 10.11.5 (El Capitan)
locale:
[1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8
attached base packages:
[1] stats graphics grDevices utils datasets
[6] methods base
other attached packages:
[1] ggplot2_2.1.0 rmarkdown_1.0 knitr_1.13
[4] markdown_0.7.7 RWordPress_0.2-3
loaded via a namespace (and not attached):
[1] Rcpp_0.12.5 formatR_1.4
[3] plyr_1.8.3 bitops_1.0-6
[5] base64enc_0.1-3 tools_3.3.0
[7] digest_0.6.10 jsonlite_1.0
[9] evaluate_0.9 tibble_1.1
[11] gtable_0.2.0 viridisLite_0.1.3
[13] lattice_0.20-33 png_0.1-7
[15] DBI_0.4-1 mapproj_1.2-4
[17] proto_0.3-10 gridExtra_2.2.1
[19] dplyr_0.5.0 httr_1.2.1
[21] stringr_1.0.0 caTools_1.17.1
[23] RgoogleMaps_1.2.0.7 htmlwidgets_0.7
[25] maps_3.1.0 grid_3.3.0
[27] R6_2.1.2 jpeg_0.1-8
[29] plotly_4.1.0 XML_3.98-1.4
[31] RSelenium_1.4.2 RJSONIO_1.3-0
[33] sp_1.2-3 ggmap_2.6.1
[35] tidyr_0.5.1 reshape2_1.4.1
[37] magrittr_1.5 XMLRPC_0.3-0
[39] scales_0.4.0 htmltools_0.3.5
[41] assertthat_0.1 formattable_0.2
[43] colorspace_1.2-6 geosphere_1.5-1
[45] labeling_0.3 stringi_1.0-1
[47] RCurl_1.95-4.8 lazyeval_0.2.0
[49] munsell_0.4.3 rjson_0.2.15
I'd also like to note that I checked the password and username and they're both correct (if I enter incorrect information I get a different error indicating that). I've also gotten a similar error using user-written functions:
Error: faultCode: 401 faultString: Sorry, you are not allowed to publish posts on this site.
By the way, when I run getUsersBlogs() I get:
$isAdmin
[1] TRUE
$isPrimary
[1] TRUE
$url
[1] "https://blogname.wordpress.com/"
$blogid
[1] "115210981"
$blogName
[1] "Site Title"
$xmlrpc
[1] "https://blogname.wordpress.com/xmlrpc.php"
As implied by @Lloyd Christmas, the problem is with your specification of options. If you change "WordPressURL" to "WordpressURL", you'll probably be fine.
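A sketch of the corrected call (note the lower-case "p" in "Wordpress"; the knitr/RWordPress examples use that spelling for both options, so the login option likely needs the same fix):
options(WordpressLogin = c(admin = 'password'),  # your actual user = 'password'
        WordpressURL = 'https://blogname.wordpress.com/xmlrpc.php')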

How to load online JSON data to shiny app with jsonlite?

I am trying to make a shiny app that takes data from this api: https://www.riigiteenused.ee/api/et/all. I need to use jsonlite::fromJSON, because it has a good flatten function. When I use the following code (a minimal example; in real life I do more with the data):
library(shiny)
library(jsonlite)

data <- fromJSON("https://www.riigiteenused.ee/api/et/all")

server <- function(input, output) {
  output$tekst <- renderText({
    nchar(data)
  })
}

ui <- fluidPage(
  sidebarLayout(
    sidebarPanel(),
    mainPanel(textOutput("tekst"))
  ))

shinyApp(ui = ui, server = server)
I get following error message:
Error in open.connection(con, "rb") :
Peer certificate cannot be authenticated with given CA certificates
I tried the following (to get around SSL peer verification):
library(RCurl)
raw <- getURL("https://www.riigiteenused.ee/api/et/all",
              .opts = list(ssl.verifypeer = FALSE), crlf = TRUE)
data <- fromJSON(raw)
It reads in the raw data but messes up the JSON (validate(raw) shows lexical error: invalid character \n inside string, which causes the following error):
Error: lexical error: invalid character inside string.
ressile: laevaregister#vta.ee. Avaldusele soovitatavalt lis
(right here) ------^
Also one idea I tried was:
data=fromJSON(readLines("https://www.riigiteenused.ee/api/et/all"))
It works fine on my computer, but when I upload it to shinyapps.io the app doesn't work, and in the logs I see the error:
Error in file(con, "r") : https:// URLs are not supported
Could somebody give me a clue whether there is a way to load JSON data over https into a shiny app using jsonlite's fromJSON function?
My session info is the following:
R version 3.2.2 (2015-08-14)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows 8 x64 (build 9200)
locale:
[1] LC_COLLATE=Estonian_Estonia.1257 LC_CTYPE=Estonian_Estonia.1257
[3] LC_MONETARY=Estonian_Estonia.1257 LC_NUMERIC=C
[5] LC_TIME=Estonian_Estonia.1257
attached base packages:
[1] stats graphics grDevices utils datasets methods base
other attached packages:
[1] jsonlite_0.9.19 httr_1.0.0 RCurl_1.95-4.7 bitops_1.0-6 shiny_0.12.2
loaded via a namespace (and not attached):
[1] Rcpp_0.12.2 digest_0.6.8 mime_0.4 R6_2.1.1
[5] xtable_1.7-4 magrittr_1.5 stringi_1.0-1 curl_0.9.4
[9] tools_3.2.2 stringr_1.0.0 httpuv_1.3.3 rsconnect_0.4.1.4
[13] htmltools_0.2.6
Don't skip ssl; try (with httr loaded alongside jsonlite):
library(httr)
fromJSON(content(GET("https://www.riigiteenused.ee/api/et/all"), "text"))
I tried this solution, which worked fine both on my computer and on the shiny server (url is the api address above; note that because jsonlite is loaded after rjson, its fromJSON masks rjson's and provides the flatten argument):
library(rjson)
library(jsonlite)
url <- "https://www.riigiteenused.ee/api/et/all"
fromJSON(url, flatten = T)