This might sound easy to many experts, but after spending hours on it I have not come up with a solution yet; I may have overlooked something that is easy to configure.
My question is how to make this Shiny app talk to a cloud relational database, for instance Google Cloud SQL (MySQL), after deploying it to shinyapps.io.
I have successfully launched this Shiny app locally on my Windows 7 64-bit machine, because I specified a User DSN named google_sql with the correct driver (MySQL ODBC 5.3 ANSI Driver), IP, password, etc., so in the odbcConnect call I can simply provide the DSN, username, and password to open a connection. However, when I deploy it to shinyapps.io, it fails, contrary to my expectation. My guess is that my DSN google_sql is not recognized by shinyapps.io. So in order to make it work, what should I do? Should I change some code, or configure something on shinyapps.io?
PS: It's not about how to install RMySQL; someone posted an answer to a similar question here (unless they think RMySQL can do something that RODBC cannot):
connecting shiny app to mysql database on server
server.R
library(shiny)
# library(RODBC)
library(RMySQL)
# ch <- odbcConnect(dsn = "google_sql", uid = "abc", pwd = "def")
ch <- dbConnect(MySQL(), user = 'abc', password = 'def',
                host = 'cloud_rdb_ip_address', dbname = 'my_db')
shinyServer(function(input, output) {
  statement <- reactive({
    if (input$attribute == 'All') {
      sprintf("SELECT * FROM test_db WHERE country = '%s' AND item = '%s' AND year = '%s' AND data_source = '%s'",
              input$country, input$item, input$year, input$data_source)
    } else {
      sprintf("SELECT * FROM test_db WHERE country = '%s' AND item = '%s' AND attribute = '%s' AND year = '%s' AND data_source = '%s'",
              input$country, input$item, input$attribute, input$year, input$data_source)
    }
  })
  output$result <- renderTable({
    res <- dbSendQuery(ch, statement())
    data <- dbFetch(res, n = 1000)
    dbClearResult(res)  # release the result set to avoid leaking it
    data
  })
})
ui.R
library(shiny)
shinyUI(fluidPage(
  # Application title
  headerPanel("Sales Database User Interface"),
  fluidRow(
    column(4,
      selectInput('country', 'Country', c('United States', 'European Union', 'China'), selected = NULL),
      selectInput('item', 'Item', c('Shoes', 'Hat', 'Pants', 'T-Shirt'), selected = NULL),
      selectInput('attribute', 'Attribute', c('All', 'Sales', 'Procurement'), selected = NULL)
    ),
    column(4,
      selectInput('year', 'Calendar Year', c('2014/2015', '2015/2016'), selected = NULL),
      selectInput('data_source', 'Data Source', c('Automation', 'Manual'), selected = NULL)
    )
  ),
  submitButton(text = "Submit", icon = NULL),
  # Main panel showing the query result
  mainPanel(
    tableOutput("result")
  )
))
I think it is worth posting my shinyapps::showLogs() error log so an expert can enlighten me, please:
2015-05-04T06:32:16.143534+00:00 shinyapps[40315]: R version: 3.1.2
2015-05-04T06:32:16.143596+00:00 shinyapps[40315]: shiny version: 0.11.1
2015-05-04T06:32:16.143598+00:00 shinyapps[40315]: rmarkdown version: NA
2015-05-04T06:32:16.143607+00:00 shinyapps[40315]: knitr version: NA
2015-05-04T06:32:16.143608+00:00 shinyapps[40315]: jsonlite version: NA
2015-05-04T06:32:16.143616+00:00 shinyapps[40315]: RJSONIO version: 1.3.0
2015-05-04T06:32:16.143660+00:00 shinyapps[40315]: htmltools version: 0.2.6
2015-05-04T06:32:16.386758+00:00 shinyapps[40315]: Using RJSONIO for JSON processing
2015-05-04T06:32:16.386763+00:00 shinyapps[40315]: Starting R with process ID: '27'
2015-05-04T06:32:16.392185+00:00 shinyapps[40315]: Listening on http://0.0.0.0:51336
2015-05-04T06:32:19.572072+00:00 shinyapps[40315]: Loading required package: DBI
2015-05-04T06:32:19.831544+00:00 shinyapps[40315]: Error in .local(drv, ...) :
2015-05-04T06:32:19.831547+00:00 shinyapps[40315]: Failed to connect to database: Error: Lost connection to MySQL server at 'reading initial communication packet', system error: 0
PS: I think I need to whitelist the shinyapps.io IP addresses in my Google Cloud SQL instance to make the deployed app work.
The list given in pidig89's answer is the right IP list, but rather than trusting some random list found in an SO answer, you can find the most up-to-date list on the RStudio support site: https://support.rstudio.com/hc/en-us/articles/217592507-How-do-I-give-my-application-on-shinyapps-io-access-to-my-remote-database-
(They "formally" announced this as the recommended way of IP filtering in a mailing-list post on July 28, 2016.)
I actually managed to come up with the answer to this question myself, but wanted to share it since it might be relevant to others as well.
Here are the IP addresses you need to whitelist.
54.204.29.251
54.204.34.9
54.204.36.75
54.204.37.78
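If your database is Google Cloud SQL, one way to authorize these addresses is the instance's Authorized networks setting. A sketch using the gcloud CLI (the instance name is a placeholder, the same can be done in the web console, and note this flag replaces the existing list):
# Authorize the shinyapps.io addresses on a hypothetical Cloud SQL instance "my-instance"
gcloud sql instances patch my-instance \
    --authorized-networks=54.204.29.251/32,54.204.34.9/32,54.204.36.75/32,54.204.37.78/32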
If you think it's a shinyapps.io issue with whitelisting your IP (I'm not saying that's the actual issue, but in case it is), then I would post to the shinyapps-users Google group, as the shinyapps.io developers monitor it and answer frequently.
https://groups.google.com/forum/#!forum/shinyapps-users
Related
I have been trying to connect to the MySQL database that I am trying to create. I recently downloaded MySQL, the Workbench, the Connector/ODBC, and the ODBC Manager, but I can't find a way to resolve the connection error.
Do I need to download anything else? I can't find a solution on the internet or YouTube for Mac.
packages_required = c("quantmod", "RSQLite", "data.table", "lubridate", "pbapply", "DBI", "odbc")
install.packages(packages_required)
library("quantmod")
library("RSQLite")
library("data.table")
library("lubridate")
library("pbapply")
library("odbc")
PASS <- new.env()
assign("pwd","My Password",envir=PASS)
library("DBI")
con <- dbConnect(odbc(), Driver = "/usr/local/mysql-connector-odbc-8.0.28-macos11-x86-64bit/lib/libmyodbc8w.so",
Server = "localhost", Database = "data", UID = "root", PWD = PASS$pwd,
Port = 3306)
-----------------------------------------------------------------------------------------
> con <- dbConnect(odbc(), Driver = "/usr/local/mysql-connector-odbc-8.0.28-macos11-x86-64bit/lib/libmyodbc8w.so",
+ Server = "localhost", Database = "data", UID = "root", PWD = PASS$pwd,
+ Port = 3306)
Error: nanodbc/nanodbc.cpp:1021: 00000: [
>
Thank you
Presuming you're on Windows, try creating an ODBC connection using the most recent driver. The ODBC Data Sources tool should already be installed; you just need to open it and create a new connection.
Press the Windows key (or click the search spyglass) and type "ODBC." The "ODBC Data Sources (64-bit)" tool should come up.
How to Create an ODBC Connection in Windows
Open the "ODBC Data Sources (64-bit)" application
Click "Add"
Choose"MySQL ODBC 8.0 Unicode Driver" (or whatever the newest version you
have is). If you don't have it, you can download it here:
https://dev.mysql.com/downloads/connector/odbc/
Enter the following information:
Data source name (the example code below uses "my_odbc_connection"), TCP/IP Server, Port, User and Password
Click "Details" to expand the box.
In the "Connection" tab, you may need to check the "Enable Cleartext
Authentication" box. Could depend on your system configuratoin.
Click "Test" to test the connection. If everything went right you
should get "Connection Successful" message. If you aren't able to get a
successful connection, make sure that you have access and that your
connection information doesn't have any typos.
After making a successful connection, perform these 2 additional steps (the
drop-downs won't populate until you connect successfully):
Click the "Database" drop down to choose the default database that you'll
be writing data to. If you will be writing to more than 1 database
then you may need to create a separate connection for each database
that you'll be writing to, specifying the default database
differently for each one.
Click the "Character Set" drop down and choose utf8.
You should now be able to use the "DBI" and "odbc" packages to read, write, etc. any data directly from R. Specific settings listed above may or may not apply depending on your situation.
See the example code below.
Further reading: https://www.r-bloggers.com/setting-up-an-odbc-connection-with-ms-sql-server-on-windows/
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#~~ Load or install packages
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Load or install librarian
if(require(librarian) == FALSE){
  install.packages("librarian")
  if(require(librarian) == FALSE){ stop("Unable to install and load librarian") }
}
# Load multiple packages using the librarian package
librarian::shelf(tidyverse, readxl, DBI, lubridate, odbc, quiet = TRUE)
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#~~ Read
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Connect to a pre-defined ODBC connection named "my_odbc_connection"
conn <- DBI::dbConnect(odbc::odbc(), "my_odbc_connection")
# Create a query
query <- "
SELECT *
FROM YOUR_SCHEMA.YOUR_TABLE;
"
# Run the query
df_data <- DBI::dbGetQuery(conn,query)
# Close the open connection
try(DBI::dbDisconnect(conn), silent = TRUE)
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#~~ Write
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Define the connection and database you'll be writing to
conn <- DBI::dbConnect(odbc::odbc(), "my_odbc_connection", db ="YOUR_DEFAULT_DB")
# Define variable types for your data frame. As a general rule, it's a good idea to define your data types rather than let the package guess.
field_types <- c("INTEGER","VARCHAR(20)","DATE","DATETIME","VARCHAR(20)","VARCHAR(50)","VARCHAR(20)")
names(field_types) <- names(df_data)
# Record start time
start_time <- Sys.time()
# Example write statement
DBI::dbWriteTable(conn, "YOUR_TABLE_NAME", df_data, overwrite = TRUE, field.types = field_types, row.names = FALSE)
# Print time difference
print("Writing complete.")
print(Sys.time() - start_time)
# Close the open connection
try(DBI::dbDisconnect(conn), silent = TRUE)
So I am trying to use my (continuously updating) MySQL database with some visualizations that I want to put into my Streamlit app. In other words, I want to use the data from my MySQL database in my Streamlit application.
For this purpose I consulted the official streamlit documentation here.
The problem here is that the tutorial tells me to create a file like this: .streamlit/secrets.toml and fill it with the following information (copy-pasting the syntax):
[
mysql
]
host = "localhost"
port = 3306
database = "xxx"
user = "xxx"
password = "xxx"
Everything was going well up until now, but when I paste my secrets.toml info into the SECRET MANAGEMENT widget (it is prompted when creating a new app on Streamlit Cloud), it gives me a syntax error:
Invalid format: please enter valid TOML.
Up until this point I was going by the book (the tutorial). To get past this I tried using only the variable definitions, like the following (since I am not familiar with TOML syntax):
db_user = "root"
db_name = "dbname"
db_password = "123abc"
Am I doing this right? Or am I missing something obvious?
With all of that aside, I also need to know how to declare dependencies for my app on Streamlit Cloud. For example, I need the mysql-connector-python module, but I don't see any console with which I can install it.
NOTE:
This is my first time deploying an app on the cloud
[
mysql
]
It should be [mysql] on one line.
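That is, the whole secrets block should be valid TOML, with the table header on a single line:
[mysql]
host = "localhost"
port = 3306
database = "xxx"
user = "xxx"
password = "xxx"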
In your GitHub repo, add a requirements.txt file with your dependencies.
Streamlit Cloud will install those packages for your app.
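For example, a minimal requirements.txt for this app might contain just the packages mentioned in the question (versions omitted; pin them if you need reproducible builds):
mysql-connector-python
pandas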
I want to point out another way we can use a database within a Streamlit app, rather than the conventional method.
We can refer to this Medium.com article here.
It explains a way in which we can use the pandas library to load a database table, and it also updates in real time. With this knowledge, connecting to a database becomes a "Python" problem, not a "Streamlit" problem.
Assuming we are using MySQL
We can, according to the official tutorial for MySQL Database, create a .streamlit/secrets.toml file in which we will store our information(related to our database) as below:
# .streamlit/secrets.toml
[mysql]
host = "localhost"
port = 3306
database = "xxx"
user = "xxx"
password = "xxx"
Also install mysql-connector-python and import it in your application file. You will also need pandas and toml, of course:
pip install mysql-connector-python pandas toml
Here is what each of them does:
| Library | Its use |
| ---------------------- | -------------------------------------------------------- |
| mysql-connector-python | to connect to our database |
| pandas | to read and convert our database table into a DataFrame |
| toml | to read details from the secrets.toml file |
STEP 1
We read details from secrets.toml
import toml

# Reading credentials from secrets.toml
toml_data = toml.load("secrets.toml")
# Saving each credential into a variable
HOST_NAME = toml_data['mysql']['host']
DATABASE = toml_data['mysql']['database']
PASSWORD = toml_data['mysql']['password']
USER = toml_data['mysql']['user']
PORT = toml_data['mysql']['port']
STEP 2
Connecting to our Database:
import mysql.connector as connection  # mysql.connector exposes connect()

# Using the variables we read from secrets.toml
mydb = connection.connect(host=HOST_NAME, port=PORT, database=DATABASE, user=USER, passwd=PASSWORD, use_pure=True)
STEP 3
Making queries against our database:
import pandas as pd

query = pd.read_sql('SELECT * FROM mytable;', mydb)
The query variable is now a DataFrame that can be displayed in Streamlit or a Jupyter notebook.
Likewise, we can run any MySQL query (with valid MySQL syntax) we want against our database.
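For instance, in the Streamlit app itself the result can be rendered directly (a small sketch; mytable is the table name assumed above):
import streamlit as st

st.dataframe(query)  # render the query result as an interactive table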
This information is based on my own experience.
I have a setup where I need a proxy in front of a server.
Lighttpd 1.4.13 is already used on the embedded platform which should act as the proxy.
Newer lighttpd versions are not easily built due to an old toolchain.
One port (e.g. port 84) of the proxy platform should forward all traffic to port 80 on the server.
Some simple pages are forwarded just fine, but some others fail. The server has a "web_resp.exe"; this is returned as a download option of 0 bytes.
Wireshark dumping
Dumps with Wireshark show that the needed pages are sent to the proxy platform, but 0 bytes are forwarded (this was performed on a similar setup).
Question
Is my configuration wrong?
Is it impossible on lighttpd 1.4.13? (I have seen forum posts saying that lighttpd's mod_proxy has problems in general.)
Reproducibility
I have reproduced the flaw by running lighttpd on a fresh Linux Mint install (same error type).
I get the same error when forwarding to another IP/site (the web config of an Ethernet-to-RS232 unit).
Exactly what triggers the error I do not know; maybe just pages that are too large.
Configuration
#lighttpd configuration file
server.modules = (
"mod_proxy"
)
## a static document-root, for virtual-hosting take look at the
## server.virtual-* options
server.document-root = "/tmp/"
## where to send error-messages to
server.errorlog = "/tmp/lighttpd.error.log"
## bind to port (default: 80)
server.port = 84
#### proxy module
## read proxy.txt for more info
proxy.debug = 1
proxy.server = ( "" =>
(
( "host" => "10.0.0.175", "port" => 80)
)
)
Debug dumps
Functional and non-functional requests seem similar.
However, the non-functional ones read a larger amount of data (still a small size, < 100 kB).
Other tests
lighttpd 1.4.35 compiled for the target, but it seems to fail in the same way.
lighttpd 1.4.35 does not work on Linux Mint either.
1.4.35 + the rewrite trick works worse than directly using a port.
lighttpd 1.5 works out of the box (after installing gthread2) on Linux Mint; however, it will not work for the target hardware.
The issue has been found to be faulty HTTP headers provided by the backend.
The issue was submitted to the lighttpd bug tracker: https://redmine.lighttpd.net/issues/2594#change-8877
lighttpd now has support for web pages sending only LF as opposed to CRLF.
You may argue that the bug is in the target web page; however, in my case I was unable to modify the target site.
I just downloaded the R package sqldf just for fun, but have not been able to run it correctly so far. When I try a query using the iris dataset:
sqldf("select * from iris limit 5")
the error occurred, saying:
Error in mysqlNewConnection(drv, ...) :
  RS-DBI driver: (Failed to connect to database: Error: Access denied for user 'myUserName'@'localhost' (using password: NO)
)
Error in !dbPreExists : invalid argument type
So I opened its help documentation and then ran the following query:
sqldf("select * from iris limit 5", user="myUser")
the error message is the same as the above, which would mean that I failed to specify my user argument correctly, given that the error message doesn't change to Access denied for user 'myUser'@'localhost'.
So how can I fix it and run it correctly?
For your information, when I use RMySQL I use the following arguments to make a connection:
con <- dbConnect(dbDriver("MySQL"),username="myUser",password="myPass",host="myHost",unix.sock="/tmp/mysql.sock",dbname="myDB")
I'm on OS X 10.9.1 and use MySQL 5.6 installed via homebrew, and R version 3.0.2 and sqldf version 0.4-6.
Thanks.
I would start by making sure that sqldf works with SQLite.
head(sqldf("select * from iris",drv='SQLite'))
Next, I would highly recommend that you always use the drv= param or explicitly set the sqldf.driver option. Relying on the order of library(..) calls can cause bugs later.
If you are doing something simple, you can use SQLite because it is fast and has few dependencies (so, for example, if you move your code, you don't have to install MySQL). If you are using dates, SQLite is not great, so you might want to use MySQL.
As mentioned in the comments and in the notes at the bottom of ?sqldf, you need to set up a my.cnf for MySQL with a [client] section so that all clients, including sqldf, use a specific login. Also make sure the dbname is set, like so:
options(sqldf.driver = "RMySQL")
options(RMySQL.dbname = "rtest")
head(sqldf("select * from iris"))
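For reference, a minimal my.cnf [client] section might look like the following (a sketch with placeholder values matching the question's connection call; the socket line is only needed for a non-default socket path):
[client]
user = myUser
password = myPass
host = myHost
socket = /tmp/mysql.sock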
I haven't found a reliable way to use MySQL without a my.cnf. Using an explicit connection for sqldf also doesn't work for me. I think PostgreSQL is better because you can set the username, etc. from inside R per the documentation.
I faced the first error because sqldf was using MySQL by default in my case. Then I switched it to SQLite, and it worked using the following command:
options(sqldf.driver = "SQLite")
If you have loaded the RMySQL package, try detaching both the RMySQL and sqldf packages and then loading sqldf again. It works for me; please see my code below:
detach(package:RMySQL)
detach(package:sqldf)
library(sqldf)
sqldf("select * from iris limit 5",user="sa", password = "root", host = "192.168.200.182", port=3377)
I'm having a bit of trouble successfully using pyodbc on Debian Lenny (5.0.7). Specifically, I appear to be having trouble fetching NVARCHAR values (I'm not a SQL Server expert, so go easy on me :)).
Most traditional queries work OK. For instance, a count of rows in table1 yields
cursor.execute("SELECT count(id) from table1")
<pyodbc.Cursor object at 0xb7b9b170>
>>> cursor.fetchall()
[(27, )]
As does a full dump of ids
>>> cursor.execute("SELECT id FROM table1")
<pyodbc.Cursor object at 0xb7b9b170>
>>> cursor.fetchall()
[(0.0, ), (3.0, ), (4.0, ), (5.0, ), (6.0, ), (7.0, ), (8.0, ), (11.0, ), (12.0, ), (18.0, ), (19.0, ), (20.0, ), (21.0, ), (22.0, ), (23.0, ), (24.0, ), (25.0, ), (26.0, ), (27.0, ), (28.0, ), (29.0, ), (32.0, ), (33.0, ), (34.0, ), (35.0, ), (36.0, ), (37.0, )]
But a dump of names (again, of type NVARCHAR) does not
>>> cursor.execute("SELECT name FROM table1")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
pyodbc.ProgrammingError: ('42000', '[42000] [FreeTDS][SQL Server]Unicode data in a Unicode-only collation or ntext data cannot be sent to clients using DB-Library (such as ISQL) or ODBC version 3.7 or earlier. (4004) (SQLExecDirectW)')
... the critical error being
pyodbc.ProgrammingError: ('42000', '[42000] [FreeTDS][SQL Server]Unicode data in a Unicode-only collation or ntext data cannot be sent to clients using DB-Library (such as ISQL) or ODBC version 3.7 or earlier. (4004) (SQLExecDirectW)')
This is consistent across tables.
I've tried a variety of different versions of each, but now I'm running unixODBC 2.2.11 (from lenny repos), FreeTDS 0.91 (built from source, with ./configure --enable-msdblib --with-tdsver=8.0), and pyodbc 3.0.3 (built from source).
With a similar combination (unixODBC 2.3.0, FreeTDS 0.91, pyodbc 3.0.3), the same code works on Mac OS X 10.7.2.
I've searched high and low, investigating the solutions presented here and here and recompiling different versions of unixODBC and FreeTDS, but still no dice. Relevant configuration files provided below:
user#host:~$ cat /usr/local/etc/freetds.conf
#$Id: freetds.conf,v 1.12 2007/12/25 06:02:36 jklowden Exp $
#
# This file is installed by FreeTDS if no file by the same
# name is found in the installation directory.
#
# For information about the layout of this file and its settings,
# see the freetds.conf manpage "man freetds.conf".
# Global settings are overridden by those in a database
# server specific section
[global]
# TDS protocol version
tds version = 8.0
client charset = UTF-8
# Whether to write a TDSDUMP file for diagnostic purposes
# (setting this to /tmp is insecure on a multi-user system)
; dump file = /tmp/freetds.log
; debug flags = 0xffff
# Command and connection timeouts
; timeout = 10
; connect timeout = 10
# If you get out-of-memory errors, it may mean that your client
# is trying to allocate a huge buffer for a TEXT field.
# Try setting 'text size' to a more reasonable limit
text size = 64512
# A typical Sybase server
[egServer50]
host = symachine.domain.com
port = 5000
tds version = 5.0
# A typical Microsoft server
[egServer70]
host = ntmachine.domain.com
port = 1433
tds version = 8.0
[foo]
host = foo.bar.com
port = 1433
tds version = 8.0
user#host:~$ cat /etc/odbc.ini
[foo]
Description = Foo
Driver = foobar
Trace = No
Database = db
Server = foo.bar.com
Port = 1433
TDS_Version = 8.0
user#host:~$ cat /etc/odbcinst.ini
[foobar]
Description = Description
Driver = /usr/lib/odbc/libtdsodbc.so
Setup = /usr/lib/odbc/libtdsS.so
CPTimeout =
CPReuse =
Any advice or direction would be very much appreciated!
I encountered the same error with Ubuntu. I "solved" it with a workaround.
All you need to do is set the environment variable TDSVER.
import os

# TDSVER must be set before the connection is opened so FreeTDS picks it up
os.environ['TDSVER'] = '8.0'
As I said it is not a real "solution" but it works.
Try to add
TDS_Version=8.0;ClientCharset=UTF-8
in your connection string.
For example,
DRIVER=FreeTDS;SERVER=myserver;DATABASE=mydatabase;UID=me;PWD=pwd;TDS_Version=8.0;ClientCharset=UTF-8
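Putting it together with pyodbc, a connection sketch under those settings might look like this (server, database, and credentials are placeholders):
import pyodbc

# DSN-less FreeTDS connection; TDS_Version and ClientCharset avoid the 4004 Unicode error
conn = pyodbc.connect(
    "DRIVER=FreeTDS;SERVER=myserver;PORT=1433;DATABASE=mydatabase;"
    "UID=me;PWD=pwd;TDS_Version=8.0;ClientCharset=UTF-8"
)
cursor = conn.cursor()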
Can't you just sidestep the issue and either CONVERT or CAST name to something it can handle?
cursor.execute("SELECT CAST(name AS TEXT) FROM table")