Shiny not rendering HTML

I am trying to create a link to an image in a table in a Shiny app, but Shiny displays the HTML as raw text. The links are screenshots of over 100 sites, and I want to add a link for each site to the table. What am I missing? I have tried these approaches.
Example 1:
ref <- paste(l, ".png", sep = "")
link <- as.character(tags$a(href = ref, link_name))
Example 2:
links <- NULL
count <- 1
for (l in all_sites) {
  link_name <- l
  ref <- paste(l, ".png", sep = "")
  links[count] <- HTML('<a href=', ref, ' id="logo" target="_blank" class="btn btn-primary">', link_name, '</a>')
  count <- count + 1
}
Both examples produce the same result in the table:
site.com

All I needed to do was to set escape to FALSE:
output$table <- renderDataTable({ my_df }, options = list(pageLength = 10), escape = FALSE)
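For completeness, here is a minimal sketch that combines the link-building loop with escape = FALSE (assuming all_sites is a character vector of site names and the .png screenshots are served from the app's www/ directory):
library(shiny)
# Build one anchor tag per site; as.character() turns the tag
# object into an HTML string the table can render.
links <- vapply(all_sites, function(l) {
  as.character(tags$a(href = paste0(l, ".png"), target = "_blank", l))
}, character(1))
my_df <- data.frame(site = all_sites, screenshot = links)
# escape = FALSE makes the table render the HTML instead of
# printing it as raw text.
output$table <- renderDataTable({ my_df }, options = list(pageLength = 10), escape = FALSE)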

Related

Search Menus in R Markdown?

Using a loop in R, I generated 100 random datasets and made a plot for each of these 100 datasets:
library(ggplot2)
results = list()
for (i in 1:100)
{
my_data_i = data.frame(var_1 = rnorm(100,10,10), var_2 = rnorm(100,10,10))
plot_i = ggplot(my_data_i, aes(x=var_1, y=var_2)) + geom_point() + ggtitle(paste0("graph", i))
results[[i]] = plot_i
}
list2env(setNames(results, paste0("plot", seq_along(results))), envir = .GlobalEnv)
What I am now trying to do is make an R Markdown/flexdashboard document that can be saved as an HTML file, and that:
Contains all these plots
Allows the user to search for these plots (e.g. type in "plot76")
In a previous question (How to create a dropdown menu in flexdashboard?) I learned how to do something similar.
But I am still trying to figure out how to get something like this to work. I would like a single page with a search bar, where you can type in which graph you want to see.
Can someone please help me out with this? Are there any online tutorials that someone could recommend?
Thank you!
This is close to a solution, depending on how much you need the type-to-search feature. First, in the YAML, specify toc and theme as below. This will create a table of contents and let users click each entry in the list to jump to the corresponding figure. Second, use knit_expand() and knit() to dynamically create HTML blocks in your code. I did 5 plots here, but it should scale to 100.
---
title: "plots"
author: "Michael"
output:
  html_document:
    toc: true
    theme: united
---
```{r setup, include=FALSE}
knitr::opts_chunk$set(echo = TRUE)
library(ggplot2)
library(glue)
library(knitr)
```
```{r,echo=FALSE, results = 'asis'}
my_data_i = data.frame(var_1 = rnorm(100, 10, 10), var_2 = rnorm(100, 10, 10))
out2 = NULL
plot_i = NULL
for (i in 1:5) {
  cat('\n## Plot = ', i, '\n')
  plot_i = ggplot(my_data_i, aes(x = var_1, y = var_2)) + geom_point() + ggtitle(paste0("graph", i))
  out2 = c(out2, knit_expand(text = "{{print(plot_i)}}"))
  cat('\n')
}
```
`r knit(text = out2)`
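If you also want the list of plots to stay visible while scrolling, toc_float is a small YAML tweak (a hedged suggestion; toc_float is a standard html_document option):
output:
  html_document:
    toc: true
    toc_float: true
    theme: united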

Using R code to scrape data from a webpage into an Excel file

I have written code in R which is supposed to retrieve certain information from a website and import it into an Excel file. I have used it for another website and it works, but for this particular website it has an issue: it returns N/A values in Excel, and I don't know why.
library(tidyverse)
library(rvest)
library(stringr)
library(rebus)
library(lubridate)
library(xlsx)
library(readr)
setwd("C:/Users/user/Desktop/Tenders")
getwd()
ran <- seq(300100, 300000, -1)
result <- data.frame(matrix(nrow = length(ran), ncol = 1))
colnames(result) <- c("111")
for (i in ran) {
  url <- paste0("http://tenders.procurement.gov.ge/public/?go=", i)
  download.file(url, destfile = "scrapedpage.html", quiet = TRUE)
  content <- read_html("scrapedpage.html")
  #111
  status <- content %>% html_nodes("#print_area tr:nth-child(1) td + td") %>% html_text()
  status[length(status) == 0] <- NA
  status <- as.data.frame(status)
  status <- (if (nrow(status) > 1) {
    as.matrix(paste(unlist(status), collapse = " "))
  } else {
    as.matrix(status)
  })
  result[i, 1] <- status
}
s <- as.data.frame(ran)
final <- result[-c(1:s[nrow(s), ]), ]
#Excel
write.xlsx(final,"C:/Users/user/Desktop/Tenders.xlsx", sheetName = "111")
I am using the Selector Gadget tool, a Chrome extension for identifying the HTML parts the code should use to gather the information (for example, in the code above it is "#print_area tr:nth-child(1) td + td", which corresponds to the first entry on the page).
Can someone help me find out what the issue might be?
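One quick check (a sketch for narrowing things down, not a diagnosis) is to fetch a single page and see whether the selector matches anything at all:
library(rvest)
url <- "http://tenders.procurement.gov.ge/public/?go=300100"
page <- read_html(url)
# If this prints character(0), the selector matches nothing in the
# HTML that R actually receives -- for example because the content
# is filled in by JavaScript after the page loads -- which would
# explain the N/A values.
page %>%
  html_nodes("#print_area tr:nth-child(1) td + td") %>%
  html_text()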

RShiny integration with Google Sites

I would like to be able to add interactive Shiny elements to a website. My HTML skills are not up to speed for making fancy websites from scratch. Google lets you make nice, slick, well-functioning websites fast, using sites.google.com.
I was wondering if it is possible to add R Shiny elements into a sites.google.com site.
For example, would it be possible to put
library(plotly)
trace_0 <- rnorm(100, mean = 5)
trace_1 <- rnorm(100, mean = 0)
trace_2 <- rnorm(100, mean = -5)
x <- c(1:100)
data <- data.frame(x, trace_0, trace_1, trace_2)
p <- plot_ly(data, x = ~x, y = ~trace_0, name = 'trace 0', type = 'scatter', mode = 'lines') %>%
add_trace(y = ~trace_1, name = 'trace 1', mode = 'lines+markers') %>%
add_trace(y = ~trace_2, name = 'trace 2', mode = 'markers')
into https://sites.google.com/view/shinytest?
EDIT: I read that in Shiny you can build a 'raw' HTML UI instead of a ShinyUI (shiny.rstudio.com/articles/html-ui.html). Would it be possible to extract the HTML from an existing site (e.g. the sites.google site from the example, keeping all its functionality) and use that as a base HTML UI to which Shiny elements can be added (thus using the server part as the back-end)?
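As a side note, the plotly example above does not actually need a Shiny server. A hedged alternative (an assumption about what you need, not a full Shiny integration) is to save the widget as a self-contained HTML file with htmlwidgets and embed that file in the Google Site:
library(htmlwidgets)
# p is the plotly object built above; selfcontained = TRUE bundles
# all JavaScript dependencies into the single HTML file.
saveWidget(p, "widget.html", selfcontained = TRUE)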

Scraping an HTML table and its href links in R

I am trying to download a table that contains text and links. I can successfully download the table with the link text "Pass". However, instead of the text, I would like to capture the actual href URL.
library(dplyr)
library(rvest)
library(XML)
library(httr)
library(stringr)
link <- "http://www.qimedical.com/resources/method-suitability/"
qi_webpage <- read_html(link)
qi_table <- html_nodes(qi_webpage, 'table')
qi <- html_table(qi_table, header = TRUE)[[1]]
qi <- qi[,-1]
Above gives a nice dataframe. However, the last column only contains the text "Pass" when I would like to have the link associated with it. I have tried to use the following to add the links, but they do not correspond to the correct row:
qi_get <- GET("http://www.qimedical.com/resources/method-suitability/")
qi_html <- htmlParse(content(qi_get, as="text"))
qi.urls <- xpathSApply(qi_html, "//*/td[7]/a", xmlAttrs, "href")
qi.urls <- qi.urls[1,]
qi <- mutate(qi, "MSTLink" = (ifelse(qi$`Study Protocol(click to download certification)` == "Pass", (t(qi.urls)), "")))
I know little about HTML, CSS, etc., so I am not sure what I am missing to accomplish this properly.
Thanks!!
You're looking for a elements inside table cells (td), and you want the value of their href attribute. Here's one way, which returns a vector of all the URLs for the PDF downloads:
qi_webpage %>%
html_nodes(xpath = "//td/a") %>%
html_attr("href")

Removing rows with empty elements (NULL) using readHTMLTable()

I am trying to remove rows of an HTML table that have at least one empty element, represented by (NULL), using the readHTMLTable() function from the XML package, without success. My code:
require(httr)
require(XML)
# Function to read the HTML table
readFE <- function(URL = "") {
  FILE <- GET(url = URL)
  tables <- getNodeSet(htmlParse(FILE), "//table")
  FE_tab <- readHTMLTable(tables[[1]],
                          header = c("empresa", "desc_projeto", "desc_regiao",
                                     "cadastrador_por", "cod_talhao", "descricao",
                                     "formiga_area", "qtd_destruido", "latitude",
                                     "longitude", "data_cadastro"),
                          colClasses = rep("character", 11),
                          trim = TRUE, stringsAsFactors = FALSE)
  # drop the first row, then return the table
  FE_tab[-1, ]
}
Example:
tableFE <- readFE(URL = "https://www.dropbox.com/s/mb316ghr4irxipr/TALHOES_AGENTES.htm?dl=1")
tableFE
Could someone help me?
Thanks,
Alexandre
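In case it helps, one possible sketch for the filtering itself (assuming the empty cells come through as the literal string "(NULL)" after readHTMLTable()):
# Keep only rows where no column equals "(NULL)".
tableFE_clean <- tableFE[!apply(tableFE, 1, function(row) any(row == "(NULL)", na.rm = TRUE)), ]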