So I am trying to use my (continuously updating) MySQL database for some visualizations that I want to put into my Streamlit app. In other words, I want to use the data from a MySQL database in my Streamlit application.
For this purpose, I consulted the official Streamlit documentation here.
The problem here is that the tutorial tells me to create a file like this: .streamlit/secrets.toml and fill it with the following information (copy-pasting the syntax):
[
mysql
]
host = "localhost"
port = 3306
database = "xxx"
user = "xxx"
password = "xxx"
Everything was going well up until now, but when I paste my secrets.toml info into the SECRET MANAGEMENT widget (it is prompted when I am creating a new app on Streamlit Cloud), it gives me a syntax error.
Invalid format: please enter valid TOML.
Up until this point I was going by the book (the tutorial). To get past this, I tried using only the variable definitions like the following (since I am not familiar with the TOML syntax):
db_user = "root"
db_name = "dbname"
db_password = "123abc"
Am I doing this right? Or am I missing something obvious?
With all of that aside, I also need to know how to install dependencies on Streamlit Cloud for my app. For example, I need the mysql-connector-python module, but I don't see any console with which I can do that.
NOTE:
This is my first time deploying an app on the cloud
[
mysql
]
It should be [mysql] on one line.
In your GitHub repo, add a requirements.txt file with your dependencies.
Streamlit Cloud will install those packages for your app.
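For example, for the setup described in the question, a minimal requirements.txt might look like this (the exact package list depends on your app; mysql-connector-python and pandas are the ones used in the answer below):
mysql-connector-python
pandas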
I want to point out another way we can use a database within a Streamlit app, rather than using the conventional method.
We can refer to this Medium.com article here.
It explains a way in which we can use the pandas library to load a database, and it also updates in real time. With this knowledge, connecting to a database becomes a "Python" problem, not a "Streamlit" problem.
Assuming we are using MySQL
We can, according to the official tutorial for a MySQL database, create a .streamlit/secrets.toml file in which we will store our information (related to our database) as below:
# .streamlit/secrets.toml
[mysql]
host = "localhost"
port = 3306
database = "xxx"
user = "xxx"
password = "xxx"
Also install mysql-connector-python for Python and import it in your application file. You will also need pandas and toml, of course:
pip install mysql-connector-python pandas toml
Here is what each of them does:
| Library | Its use |
| -------- | -------------- |
| mysql-connector-python | to connect to our database |
| pandas | to read and convert our database table into a DataFrame |
| toml | to read details from the secrets.toml file |
STEP 1
We read details from secrets.toml
import toml

# Reading credentials from the secrets file
toml_data = toml.load(".streamlit/secrets.toml")
# saving each credential into a variable
HOST_NAME = toml_data['mysql']['host']
DATABASE = toml_data['mysql']['database']
PASSWORD = toml_data['mysql']['password']
USER = toml_data['mysql']['user']
PORT = toml_data['mysql']['port']
STEP 2
Connecting to our Database:
import mysql.connector as connection

# Using the variables we read from secrets.toml
mydb = connection.connect(host=HOST_NAME, port=PORT, database=DATABASE, user=USER, passwd=PASSWORD, use_pure=True)
STEP 3
Making queries from our database:
import pandas as pd

query = pd.read_sql('SELECT * FROM mytable;', mydb)
The query variable is now a displayable table (a pandas DataFrame) in Streamlit or Jupyter notebooks.
Likewise, we can make any MySQL query (with valid syntax) we want against our database.
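For example, in the Streamlit app the result can be rendered directly; a minimal sketch (st.write and st.dataframe both accept DataFrames):
import streamlit as st

st.write(query)  # or st.dataframe(query) for an interactive table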
This information is based on my own experience.
Related
I have been trying to connect to my SQL that I am trying to create. I recently downloaded MySQL, the workbench, the connector ODBC, and the ODBC Manager, but I can't find the solution to solve the error for the connection.
Do I need to download anything else? I can't find a solution on internet or youtube for Mac.
packages_required = c("quantmod", "RSQLite", "data.table", "lubridate", "pbapply", "DBI", "odbc")
install.packages(packages_required)
library("quantmod")
library("RSQLite")
library("data.table")
library("lubridate")
library("pbapply")
library("odbc")
PASS <- new.env()
assign("pwd","My Password",envir=PASS)
library("DBI")
con <- dbConnect(odbc(), Driver = "/usr/local/mysql-connector-odbc-8.0.28-macos11-x86-64bit/lib/libmyodbc8w.so",
Server = "localhost", Database = "data", UID = "root", PWD = PASS$pwd,
Port = 3306)
-----------------------------------------------------------------------------------------
> con <- dbConnect(odbc(), Driver = "/usr/local/mysql-connector-odbc-8.0.28-macos11-x86-64bit/lib/libmyodbc8w.so",
+ Server = "localhost", Database = "data", UID = "root", PWD = PASS$pwd,
+ Port = 3306)
Error: nanodbc/nanodbc.cpp:1021: 00000: [
>
Thank you
Presuming you're on Windows, try creating an ODBC connection using the most recent driver. The ODBC data sources tool should already be installed; you just need to open it and create a new one.
Press the Windows key (or click the search spyglass) and type in "ODBC." The "ODBC Data Sources (64-bit)" tool should come up.
How to Create an ODBC Connection in Windows
Open the "ODBC Data Sources (64-bit)" application
Click "Add"
Choose"MySQL ODBC 8.0 Unicode Driver" (or whatever the newest version you
have is). If you don't have it, you can download it here:
https://dev.mysql.com/downloads/connector/odbc/
Enter the following information:
Data source name (the example code below uses "my_odbc_connection"), TCP/IP Server, Port, User and Password
Click "Details" to expand the box.
In the "Connection" tab, you may need to check the "Enable Cleartext
Authentication" box. Could depend on your system configuratoin.
Click "Test" to test the connection. If everything went right you
should get "Connection Successful" message. If you aren't able to get a
successful connection, make sure that you have access and that your
connection information doesn't have any typos.
After making a successful connection, perform these 2 additional steps (the
drop-downs won't populate until you connect successfully):
Click the "Database" drop down to choose the default database that you'll
be writing data to. If you will be writing to more than 1 database
then you may need to create a separate connection for each database
that you'll be writing to, specifying the default database
differently for each one.
Click the "Character Set" drop down and choose utf8.
You should now be able to use the "DBI" and "odbc" packages to read, write, etc. any data directly from R. The specific settings listed above may or may not apply depending on your situation.
See the example code below.
Further reading: https://www.r-bloggers.com/setting-up-an-odbc-connection-with-ms-sql-server-on-windows/
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#~~ Load or install packages
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Load or install librarian
if(require(librarian) == FALSE){
install.packages("librarian")
if(require(librarian)== FALSE){stop("Unable to install and load librarian")}
}
# Load multiple packages using the librarian package
librarian::shelf(tidyverse, readxl, DBI, lubridate, odbc, quiet = TRUE)
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#~~ Read
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Connect to a pre-defined ODBC connection named "my_odbc_connection"
conn <- DBI::dbConnect(odbc::odbc(), "my_odbc_connection")
# Create a query
query <- "
SELECT *
FROM YOUR_SCHEMA.YOUR_TABLE;
"
# Run the query
df_data <- DBI::dbGetQuery(conn,query)
# Close the open connection
try(DBI::dbDisconnect(conn), silent = TRUE)
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#~~ Write
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Define the connection and database you'll be writing to
conn <- DBI::dbConnect(odbc::odbc(), "my_odbc_connection", db ="YOUR_DEFAULT_DB")
# Define variable types for your data frame. As a general rule, it's a good idea to define your data types rather than let the package guess.
field_types <- c("INTEGER","VARCHAR(20)","DATE","DATETIME","VARCHAR(20)","VARCHAR(50)","VARCHAR(20)")
names(field_types) <- names(df_data)
# Record start time
start_time <- Sys.time()
# Example write statement
DBI::dbWriteTable(conn,"YOUR_TABLE_NAME",YOUR_DATA_FRAME,overwrite=TRUE,field.types=field_types, row.names = FALSE)
# Print time difference
print("Writing complete.")
print(Sys.time() - start_time)
# Close the open connection
try(DBI::dbDisconnect(conn), silent = TRUE)
This is the error:
reverie-pc@reveriepc-Latitude-3400:~/VasanthkumarV/prisma$ sudo npm install -g prisma
[sudo] password for reverie-pc:
npm WARN deprecated request@2.88.2: request has been deprecated, see
https://github.com/request/request/issues/3142
/usr/bin/prisma -> /usr/lib/node_modules/prisma/dist/index.js
+ prisma@1.34.10
updated 1 package in 29.734s
(base) reverie-pc@reveriepc-Latitude-3400:~/VasanthkumarV/prisma$ prisma init test
? Set up a new Prisma server or deploy to an existing server? Use existing database
? What kind of database do you want to deploy to? MySQL
? Does your database contain existing data? Yes
? Enter database host localhost
? Enter database port 3306
? Enter database user root
? Enter database password [hidden]
? Please select the schema you want to introspect database_test
Introspecting database database_test 435ms
Created datamodel definition based on 24 tables.
? Select the programming language for the generated Prisma client Prisma JavaScript Client
Created 3 new files:
prisma.yml Prisma service definition
datamodel.prisma GraphQL SDL-based datamodel (foundation for database)
docker-compose.yml Docker configuration file
Next steps:
1. Open folder: cd test
2. Start your Prisma server: docker-compose up -d
3. Deploy your Prisma service: prisma deploy
4. Read more about introspection:url
▸ Syntax Error: Expected Name, found Int "1"
Get in touch if you need help: https://slack.prisma.io/
To get more detailed output, run $ export DEBUG="*"
(node:14055) [DEP0066] DeprecationWarning: OutgoingMessage.prototype._headers is deprecated
Generating schema... !
How do I resolve this error, and what is the procedure for connecting the Prisma server with a local database (MySQL)? And what about the Prisma deployment?
How do I connect Prisma with an existing DB?
It looks like you are using Prisma 1, which is currently in maintenance mode.
Given that this looks like a new project, I'd suggest you take a look at Prisma 2, which includes many improvements and a simpler mental model.
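For reference, here is a rough sketch of the same introspection flow in Prisma 2, assuming the database_test schema from the question (the commands and schema syntax below are Prisma 2's, not part of the original Prisma 1 output):
npx prisma init
# .env (created by prisma init) - point it at the existing MySQL database
DATABASE_URL="mysql://root:yourpassword@localhost:3306/database_test"
# schema.prisma - the datasource block
datasource db {
  provider = "mysql"
  url      = env("DATABASE_URL")
}
npx prisma db pull    # introspect the existing tables into schema.prisma
npx prisma generate   # generate the Prisma Client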
default: on
# description: mysqlchk
service mysqlchk
{
# this is a config for xinetd, place it in /etc/xinetd.d/
disable = no
flags = REUSE
socket_type = stream
type = UNLISTED
port = 9200
wait = no
user = root
server = /usr/bin/mysqlclustercheck
log_on_failure += USERID
only_from = 0.0.0.0/0
#
# Passing arguments to clustercheck
# <user> <pass> <available_when_donor=0|1> <log_file> <available_when_readonly=0|1> <defaults_extra_file>"
# Recommended: server_args = user pass 1 /var/log/log-file 0 /etc/my.cnf.local"
# Compatibility: server_args = user pass 1 /var/log/log-file 1 /etc/my.cnf.local"
# 55-to-56 upgrade: server_args = user pass 1 /var/log/log-file 0 /etc/my.cnf.extra"
#
# recommended to put the IPs that need
# to connect exclusively (security purposes)
per_source = UNLIMITED
}
/etc/xinetd.d #
It is kind of strange that the script works fine when run manually, but when it runs via /etc/xinetd.d/ it does not work as expected.
In the mysqlclustercheck script, instead of using the --user= and --password= syntax, I am using the --login-path= syntax.
The script runs fine when I run it from the command line, but the status for xinetd was showing signal 13. After debugging, I have found that even a simple command like this is not working:
mysql_config_editor print --all >>/tmp/test.txt
We don't see any output generated when it is run via xinetd (mysqlclustercheck).
Have you tried the following instead of /usr/bin/mysqlclustercheck?
server = /usr/bin/clustercheck
I am wondering if you could test your binary location with the Linux which command.
It has been a long time since this question was asked, but it just came to my attention.
First of all, as mentioned, the Percona cluster check script is called clustercheck, so make sure you are using the correct name and the correct path.
Secondly, since the server script runs fine from the command line, it seems to me that the path of the mysql client command is not known by xinetd when it runs the cluster check script.
Since the mysqlclustercheck script, as offered by Percona, uses only the binary name mysql without specifying the absolute path, I suggest you do the following:
Find where the mysql client command is located on your system:
ccloud@gal1:~> sudo -i
gal1:~ # which mysql
/usr/local/mysql/bin/mysql
gal1:~ #
Then edit the script /usr/bin/mysqlclustercheck, and in the following line:
MYSQL_CMDLINE="mysql --defaults-extra-file=$DEFAULTS_EXTRA_FILE -nNE --connect-timeout=$TIMEOUT \
place the exact path of the mysql client command you found in the previous step, as shown below.
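For example, using the path found above (yours may differ), that line would become:
MYSQL_CMDLINE="/usr/local/mysql/bin/mysql --defaults-extra-file=$DEFAULTS_EXTRA_FILE -nNE --connect-timeout=$TIMEOUT \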
I also see that you are not using MySQL connection credentials for connecting to the MySQL server. The mysqlclustercheck script, as offered by Percona, uses a user/password in order to connect to the MySQL server.
So normally, you should execute the script on the command line like:
gal1:~ # /usr/sbin/clustercheck haproxy haproxyMySQLpass
HTTP/1.1 200 OK
Content-Type: text/plain
where haproxy/haproxyMySQLpass is the MySQL connection user/pass for the HAProxy monitoring user.
Additionally, you should specify them in your script's xinetd settings like:
server = /usr/bin/mysqlclustercheck
server_args = haproxy haproxyMySQLpass
Last but not least, the signal 13 you are getting is because you are trying to write something in a script run by xinetd. If, for example, in your mysqlclustercheck you try to add a statement like
echo "debug message"
you are probably going to see the broken pipe signal (13 in POSIX).
Finally, I had issues with this script on SLES 12.3, and I eventually managed to run it not as 'nobody' but as 'root'.
Hope it helps
When I simply run the following code, I always get this error.
import uuid
import boto3

s3 = boto3.resource('s3')
bucket_name = "python-sdk-sample-%s" % uuid.uuid4()
print("Creating new bucket with name:", bucket_name)
s3.create_bucket(Bucket=bucket_name)
I have saved my credential file in
C:\Users\myname\.aws\credentials, from where Boto should read my credentials.
Is my setting wrong?
Here is the output from boto3.set_stream_logger('botocore', level='DEBUG').
2015-10-24 14:22:28,761 botocore.credentials [DEBUG] Skipping environment variable credential check because profile name was explicitly set.
2015-10-24 14:22:28,761 botocore.credentials [DEBUG] Looking for credentials via: env
2015-10-24 14:22:28,773 botocore.credentials [DEBUG] Looking for credentials via: shared-credentials-file
2015-10-24 14:22:28,774 botocore.credentials [DEBUG] Looking for credentials via: config-file
2015-10-24 14:22:28,774 botocore.credentials [DEBUG] Looking for credentials via: ec2-credentials-file
2015-10-24 14:22:28,774 botocore.credentials [DEBUG] Looking for credentials via: boto-config
2015-10-24 14:22:28,774 botocore.credentials [DEBUG] Looking for credentials via: iam-role
Try specifying the keys manually:
s3 = boto3.resource('s3',
aws_access_key_id=ACCESS_ID,
aws_secret_access_key= ACCESS_KEY)
Make sure you don't include your ACCESS_ID and ACCESS_KEY in the code directly, for security reasons.
Consider using environment configs and injecting them into the code, as suggested by @Tiger_Mike.
For Prod environments consider using rotating access keys:
https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html#Using_RotateAccessKey
I had the same issue and found out that the format of my ~/.aws/credentials file was wrong.
It worked with a file containing:
[default]
aws_access_key_id=XXXXXXXXXXXXXX
aws_secret_access_key=YYYYYYYYYYYYYYYYYYYYYYYYYYY
Note that there must be a profile named "[default]". Some official documentation makes reference to a profile named "[credentials]", which did not work for me.
If you are looking for an alternative way, try adding your credentials using the Amazon CLI (AWS CLI).
From the terminal, type:
aws configure
then fill in your keys and region.
Make sure your ~/.aws/credentials file in Unix looks like this:
[MyProfile1]
aws_access_key_id = yourAccessId
aws_secret_access_key = yourSecretKey
[MyProfile2]
aws_access_key_id = yourAccessId
aws_secret_access_key = yourSecretKey
Your Python script should look like this, and it'll work:
from __future__ import print_function
import boto3
import os
os.environ['AWS_PROFILE'] = "MyProfile1"
os.environ['AWS_DEFAULT_REGION'] = "us-east-1"
ec2 = boto3.client('ec2')
# Retrieves all regions/endpoints that work with EC2
response = ec2.describe_regions()
print('Regions:', response['Regions'])
Source: https://boto3.readthedocs.io/en/latest/guide/configuration.html#interactive-configuration.
I also had the same issue; it can be solved by creating config and credentials files in the home directory. Below are the steps I took to solve this issue.
Create a config file:
touch ~/.aws/config
And in that file I entered the region
[default]
region = us-west-2
Then create the credential file:
touch ~/.aws/credentials
Then enter your credentials
[Profile1]
aws_access_key_id = XXXXXXXXXXXXXXXXXXXX
aws_secret_access_key = YYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY
After setting all of these, here is my Python file to connect to the bucket. Running this file will list all the contents.
import boto3
import os
os.environ['AWS_PROFILE'] = "Profile1"
os.environ['AWS_DEFAULT_REGION'] = "us-west-2"
s3 = boto3.client('s3', region_name='us-west-2')
print("[INFO:] Connecting to cloud")
# Retrieves all regions/endpoints that work with S3
response = s3.list_buckets()
print('Regions:', response)
You can also refer to the links below:
Amazon S3 with Python Boto3 Library
Boto 3 documentation
Boto3: Amazon S3 as Python Object Store
From the terminal, type:
aws configure
Then fill in your keys and region.
After this, do the next step and use any environment. You can have multiple keys depending on your account, and you can manage multiple environments or keys.
import boto3
aws_session = boto3.Session(profile_name="prod")
# Create an S3 client
s3 = aws_session.client('s3')
Create an S3 client object with your credentials
AWS_S3_CREDS = {
"aws_access_key_id":"your access key", # os.getenv("AWS_ACCESS_KEY")
"aws_secret_access_key":"your aws secret key" # os.getenv("AWS_SECRET_KEY")
}
s3_client = boto3.client('s3',**AWS_S3_CREDS)
It is always good to get credentials from the OS environment.
To set the environment variables, run the following commands in the terminal.
If Linux or Mac:
$ export AWS_ACCESS_KEY="aws_access_key"
$ export AWS_SECRET_KEY="aws_secret_key"
If Windows:
c:System\> set AWS_ACCESS_KEY="aws_access_key"
c:System\> set AWS_SECRET_KEY="aws_secret_key"
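To tie those environment variables back to the client shown above, a minimal sketch (the variable names are the ones set in the commands above, not boto3's built-in AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY):
import os
import boto3

s3_client = boto3.client(
    's3',
    aws_access_key_id=os.getenv("AWS_ACCESS_KEY"),
    aws_secret_access_key=os.getenv("AWS_SECRET_KEY"),
)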
Exporting the credentials also works. In Linux:
export AWS_SECRET_ACCESS_KEY="XXXXXXXXXXXX"
export AWS_ACCESS_KEY_ID="XXXXXXXXXXX"
These instructions are for a Windows machine with a single user profile for AWS. Make sure your ~/.aws/credentials file looks like this:
[profile_name]
aws_access_key_id = yourAccessId
aws_secret_access_key = yourSecretKey
I had to set the AWS_DEFAULT_PROFILE environment variable to the profile_name found in your credentials.
Then my Python was able to connect, e.g. from here:
import boto3
# Let's use Amazon S3
s3 = boto3.resource('s3')
# Print out bucket names
for bucket in s3.buckets.all():
print(bucket.name)
I work for a large corporation and encountered this same error, but needed a different workaround. My issue was related to proxy settings. I had my proxy set up, so I needed to set my no_proxy to whitelist AWS before I was able to get everything to work. You can set it in your bash script as well if you don't want to muddy up your Python code with OS settings.
Python:
import os
os.environ["NO_PROXY"] = "s3.amazonaws.com"
Bash:
export no_proxy="s3.amazonaws.com"
Edit: The above assumes the US East S3 region. For other regions, use s3.[region].amazonaws.com, where region is something like us-east-1 or us-west-2.
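For example, following that pattern, the us-west-2 entry would be:
export no_proxy="s3.us-west-2.amazonaws.com"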
If you have multiple aws profiles in ~/.aws/credentials like...
[Profile 1]
aws_access_key_id = *******************
aws_secret_access_key = ******************************************
[Profile 2]
aws_access_key_id = *******************
aws_secret_access_key = ******************************************
Follow two steps:
Make the one you want to use the default by running the export AWS_DEFAULT_PROFILE="Profile 1" command in the terminal (the quotes are needed because of the space in the profile name).
Make sure to run the above command in the same terminal from which you use boto3 or open an editor. [Understand the following scenario.]
Scenario:
If you have two terminals open, called t1 and t2, and you run the export command in t1 but open JupyterLab or anything else from t2, you will get the NoCredentialsError: Unable to locate credentials error.
Solution:
Run the export command in t1 and then open JupyterLab or anything else from the same terminal, t1.
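For example (a sketch of the working scenario, assuming a bash shell and JupyterLab installed):
# in terminal t1
export AWS_DEFAULT_PROFILE="Profile 1"
jupyter lab  # JupyterLab inherits the profile because it was started from t1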
In the case of MLflow, a call to mlflow.log_artifact() will raise this error if you cannot write to the AWS S3/MinIO data lake.
The reason is not setting up credentials in your Python env (as these two env vars):
os.environ['DATA_AWS_ACCESS_KEY_ID'] = 'login'
os.environ['DATA_AWS_SECRET_ACCESS_KEY'] = 'password'
Note that you may also access MLflow artifacts directly using the MinIO client (which requires a separate connection to the data lake, apart from MLflow's connection). This client can be started like this:
import os
import minio

minio_client_mlflow = minio.Minio(os.environ['MLFLOW_S3_ENDPOINT_URL'].split('://')[1],
                                  access_key=os.environ['AWS_ACCESS_KEY_ID'],
                                  secret_key=os.environ['AWS_SECRET_ACCESS_KEY'],
                                  secure=False)
I have solved the problem like this:
aws configure
Afterwards I manually entered:
AWS Access Key ID [None]: xxxxxxxxxx
AWS Secret Access Key [None]: xxxxxxxxxx
Default region name [None]: us-east-1
Default output format [None]: just hit enter
After that it worked for me
Boto3 is looking for the credentials in a folder like:
C:\ProgramData\Anaconda3\envs\tensorflow\Lib\site-packages\botocore\.aws
You should save two files in this folder: credentials and config.
You may want to check out the general order in which boto3 searches for credentials in this link. Look under the Configuring Credentials subheading.
If you're sure you configured your AWS correctly, just make sure the user of the project can read from ~/.aws, or just run your project as root.
I just had this problem. This is what worked for me:
pip install botocore==1.13.20
Source: https://github.com/boto/botocore/issues/1892
In case of using AWS:
In my case, I had to add the following policy to the IAM role to allow EC2 tags to be read by the EC2 instances. That eliminates the Unable to locate credentials error:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "ec2:DescribeTags",
            "Resource": "*"
        }
    ]
}
I'm working on the front end of a web app, and my co-developer is using Pyramid and SQLAlchemy. We've just moved from SQLite to MySQL. I installed MySQL 5.6.15 (via Homebrew) on my OS X machine to get the Python MySQLdb install to work (via pip in a virtualenv).
Because in MySQL >= 5.6.5 secure_auth is now ON by default, I can only connect to the remote database (pre 5.6.5) with the --skip-secure-auth flag, which works fine in a terminal.
However, in the Python Pyramid code, it only seems possible to add this flag as an argument to create_engine(), but I can't find create_engine() in my co-dev's code, only the connection string below in an initialisation config file. He's not available, this isn't my area of expertise, and we launch next week :(
sqlalchemy.url = mysql+mysqldb://gooddeeds:deeds808letme1now@146.227.24.38/gooddeeds_development?charset=utf8
I've tried appending various "secure auth" strings to the above with no success. Am I looking in the wrong place? Has MySQLdb set secure_auth to ON because I'm running MySQL 5.6.15? If so, how can I change that?
If you are forced to use the old passwords (bah!) when using MySQL 5.6, and you are using MySQLdb with SQLAlchemy, you'll have to add --skip-secure-auth to an option file and use URL:
from sqlalchemy import create_engine
from sqlalchemy.engine.url import URL
..
dialect_options = {
'read_default_file': '/path/to/your/mysql.cnf',
}
engine = create_engine(URL(
'mysql',
username='..', password='..',
host='..', database='..',
query=dialect_options
))
The mysql.cnf would contain:
[client]
skip-secure-auth
For Pyramid, you can do the following. Add a line in your configuration ini-file that holds the connection arguments:
sqlalchemy.url = mysql://scott:tiger@localhost/test
sqlalchemy.connect_args = { 'read_default_file': '/path/to/foo' }
Now you need to change slightly the way the settings are read and used. In the file that launches your Pyramid app, do the following:
from sqlalchemy import engine_from_config

def main(global_config, **settings):
    try:
        settings['sqlalchemy.connect_args'] = eval(settings['sqlalchemy.connect_args'])
    except KeyError:
        settings['sqlalchemy.connect_args'] = {}

    engine = engine_from_config(settings, 'sqlalchemy.')
    # rest of code..
The trick is to evaluate the string in the ini file which contains a dictionary with the extra options for the SQLAlchemy dialect.
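If you would rather not call eval on configuration values, ast.literal_eval from the standard library is a stricter alternative for this particular case, since the value is just a dictionary literal (a sketch, not part of the original recipe):
import ast

settings['sqlalchemy.connect_args'] = ast.literal_eval(settings['sqlalchemy.connect_args'])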