web2py doesn't connect to MySQL

I installed web2py as source and wanted to use DAL without the rest of the framework.
But DAL does not connect to MySQL:
>>> DAL('mysql://user1:user1@localhost/test_rma')
...
RuntimeError: Failure to connect, tried 5 times:
'NoneType' object has no attribute 'connect'
Whereas MySQLdb can connect to the database with the same credentials:
>>> import MySQLdb
>>> db = MySQLdb.connect(host='localhost', user='user1', passwd='user1', db='test_rma')
A similar problem with MSSQL was solved by explicitly setting the driver object. I tried the same solution:
>>> from gluon.dal import MySQLAdapter
>>> print MySQLAdapter.driver
None
>>> driver = globals().get('MySQLdb',None)
>>> print MySQLAdapter.driver
None
But still the driver is None.

OK, I found the solution to the problem. I had to write:
MySQLAdapter.driver = globals().get('MySQLdb', None)
instead of
driver = globals().get('MySQLdb', None)
The first form assigns the class attribute, while the second only binds a local name and leaves MySQLAdapter.driver untouched. I had misread that line in the original question.
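The distinction is easy to demonstrate with a toy class (Adapter here is a hypothetical stand-in, not the real web2py class):

```python
# Toy stand-in for MySQLAdapter: assigning to a bare name only creates a
# new variable, while attribute assignment actually updates the class.
class Adapter:
    driver = None

driver = "MySQLdb"            # binds a new name; Adapter.driver is untouched
print(Adapter.driver)         # None

Adapter.driver = "MySQLdb"    # attribute assignment; the class now sees it
print(Adapter.driver)         # MySQLdb
```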

Related

connect prestodb through sqlalchemy

I'd like to connect to prestodb with the SQLAlchemy interface. I'm running prestodb==0.7.0 and SQLAlchemy==1.4.20, and SQLAlchemy doesn't seem to have prestodb baked in:
NoSuchModuleError: Can't load plugin: sqlalchemy.dialects:presto
Not much luck with registering the prestodb either:
import os
from sqlalchemy.dialects import registry
import prestodb
from prestodb.dbapi import Connection
registry.register('presto', 'prestodb.dbapi', 'Connection')
from sqlalchemy.engine import create_engine
port = 8889
user = os.environ["USER"]
engine = create_engine(f'presto://{user}@presto:{port}/hive',
                       connect_args={'protocol': 'https', 'requests_kwargs': {'verify': False}})
db = engine.raw_connection()
# AttributeError: type object 'Connection' has no attribute 'get_dialect_cls'
Any ideas?
If you have a look at the Dialects docs you will see that Presto is an external dialect and needs to be installed separately. The Presto dialect is supported through PyHive and can be installed using pip install 'pyhive[presto]'.
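Once installed, a dialect registered via entry points is selected by the URL scheme alone, so no registry.register() call is needed. A minimal sketch of the URL (host, port, and catalog below are hypothetical; the create_engine call is left commented out since it needs a live Presto server):

```python
import os

# With PyHive installed, "presto://" selects the dialect automatically.
user = os.environ.get("USER", "presto_user")   # hypothetical fallback user
url = f"presto://{user}@presto:8889/hive"      # note "@", not "#"
# from sqlalchemy import create_engine
# engine = create_engine(url, connect_args={"protocol": "https"})
print(url)
```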

sqlalchemy.exc.InterfaceError: (pyodbc.InterfaceError) ('28000', "[28000] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Login failed for user

Trying to connect to an MSSQL DB with pyodbc and SQLAlchemy, using code (from another SO post) like
import json
import typing as ty
import urllib.parse
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
CONF = json.load(open(f"{PROJECT_HOME}/configs/configs.json"))  # PROJECT_HOME defined elsewhere
assert isinstance(CONF, ty.Dict)
params = urllib.parse.quote_plus(f"DRIVER={{ODBC Driver 17 for SQL Server}};"
                                 f"SERVER={CONF['db']['db_ip']};"
                                 f"DATABASE={CONF['db']['dest_db']};"
                                 f"UID={CONF['db']['username']};"
                                 f"PWD={CONF['db']['password']}")
engine = create_engine(f"mssql+pyodbc:///?odbc_connect={params}")
print(params)
sql_conn = engine.raw_connection()
cursor = sql_conn.cursor()
and getting error
sqlalchemy.exc.InterfaceError: (pyodbc.InterfaceError) ('28000', "[28000] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Login failed for user 'myuser'. (18456) (SQLDriverConnect)")
Looking at the URL that gets generated, it looks like
mssql+pyodbc:///?odbc_connect=DRIVER%3D%7BODBC+Driver+17+for+SQL+Server%7D%3BSERVER%3D172.12.3.45%3BDATABASE%3Dmy_db%3BUID%3Dmyuser%3BPWD%3Dpass%2Bword10
even though 1) I can connect to the same DB via mssql-tools (e.g. via bcp) using the same credentials I'm using here, and 2) this same code snippet does work in another script that accesses a different DB (though with a slightly older version of SQLAlchemy, v1.3.15, but the same ODBC DSN).
Any ideas what could be going wrong in this case?
The configs I'm using in the code above look like
{
    "db":
    {
        "db_dsn": "MyMSSQLServer",
        "db_ip": "172.12.3.45",
        "dest_db": "my_db",
        "dest_table": "my_table",
        "username": "myuser",
        "password": "pass+word10"
    }
}
Note that my password here is "pass+word10", with a plus sign. When I look at the docs for urllib.parse.quote_plus, it seems like the plus sign should not be an issue, but I may be misinterpreting that, and I can't think of what else it could be (since, again, this general snippet has already worked in the past).
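A quick check confirms that reading of the docs (the string below is the password from the question):

```python
import urllib.parse

# quote_plus percent-encodes a literal "+" as "%2B" (it only *produces* "+"
# for spaces), so a plus sign in the password survives the round trip:
encoded = urllib.parse.quote_plus("pass+word10")
print(encoded)                                        # pass%2Bword10
assert urllib.parse.unquote_plus(encoded) == "pass+word10"
```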
My various package versions are...
(venv) $ pip freeze | grep odbc
pyodbc==4.0.30
(venv) $ pip freeze | grep SQL
Flask-SQLAlchemy==2.5.1
SQLAlchemy==1.4.7
SQLAlchemy-JSONField==1.0.0
SQLAlchemy-Utils==0.37.0
Have also tried
engine = create_engine("mssql+pyodbc:///?odbc_connect="
                       "DRIVER={ODBC Driver 17 for SQL Server};"
                       f"SERVER={CONF['db']['db_ip']};"
                       f"DATABASE={CONF['db']['dest_db']};"
                       f"UID={CONF['db']['username']};"
                       f"PWD={CONF['db']['password']}")
(with and without "+"s between the driver words), which shows the engine string as
Engine(mssql+pyodbc:///?DATABASE=mydb&PWD=mypassword&SERVER=172.12.3.45&UID=myuser&odbc_connect=DRIVER%3D%7BODBC+Driver+17+for+SQL+Server%7D)
and got the error
pyodbc.OperationalError: ('08001', '[08001] [Microsoft][ODBC Driver 17 for SQL Server]Neither DSN nor SERVER keyword supplied (0) (SQLDriverConnect)')
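The Engine repr above hints at the cause: in this second attempt the ODBC string was never passed through quote_plus, so its ";" and "=" separators leak into the URL and get split off as separate query parameters (older URL parsers treat ";" like "&"), leaving odbc_connect with no SERVER keyword. A small sketch of the difference (values below are hypothetical):

```python
import urllib.parse

raw = "DRIVER={ODBC Driver 17 for SQL Server};SERVER=172.12.3.45;UID=myuser"

# After quote_plus, the ";" and "=" separators are escaped (%3B / %3D),
# so the whole string stays attached to odbc_connect as one value:
quoted = urllib.parse.quote_plus(raw)
print(quoted)
assert ";" not in quoted and "=" not in quoted
```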
Even though I can see a DSN in my odbc.ini like...
$cat /etc/odbc.ini
[MyMSSQLServer]
Driver=ODBC Driver 17 for SQL Server
Description=My MS SQL Server
Trace=No
Server=172.12.3.45
Anyone with more experience with this know what could be going on here? Any debugging steps I could try, or info that could help this post?
* I know this post's title is not super informative for what the problem seems to be, but I don't really have a good idea of what the problem is even related to at the moment, so I'd need more info before I could edit in a more relevant title.

Can't connect to MySQL database from pyspark, getting JDBC error

I am learning pyspark and trying to connect to a MySQL database.
But I am getting a java.lang.ClassNotFoundException: com.mysql.jdbc.Driver exception while running the code. I have spent a whole day trying to fix it; any help would be appreciated :)
I am using PyCharm Community Edition with Anaconda and Python 3.6.3.
Here is my code:
from pyspark import SparkContext
from pyspark.sql import SQLContext

sc = SparkContext()
sqlContext = SQLContext(sc)
df = sqlContext.read.format("jdbc").options(
    url="jdbc:mysql://192.168.0.11:3306/my_db_name",
    driver="com.mysql.jdbc.Driver",
    dbtable="billing",
    user="root",
    password="root").load()
Here is the error:
py4j.protocol.Py4JJavaError: An error occurred while calling o27.load.
: java.lang.ClassNotFoundException: com.mysql.jdbc.Driver
This was asked 9 months ago at the time of writing, but since there's no answer, here it goes. I was in the same situation, searched Stack Overflow over and over, and tried different suggestions, but the answer is absurdly simple: you just have to COPY the MySQL driver into the "jars" folder of Spark!
Download it here: https://dev.mysql.com/downloads/connector/j/5.1.html
I'm using the 5.1 version, although 8.0 exists, because I had some other problems when running the latest version with Spark 2.3.2 (and also other problems running Spark 2.4 on Windows 10).
Once downloaded you can just copy it into your Spark folder
E:\spark232_hadoop27\jars\ (use your own drive:\folder_name -- this is just an example)
You should have two files:
E:\spark232_hadoop27\jars\mysql-connector-java-5.1.47-bin.jar
E:\spark232_hadoop27\jars\mysql-connector-java-5.1.47.jar
After that the following code launched through pyCharm or jupyter notebook should work (as long as you have a MySQL database set up, that is):
import findspark
findspark.init()
import pyspark # only run after findspark.init()
from pyspark.sql import SparkSession
spark = SparkSession.builder.getOrCreate()
dataframe_mysql = spark.read.format("jdbc").options(
    url="jdbc:mysql://localhost:3306/uoc2",
    driver="com.mysql.jdbc.Driver",
    dbtable="company",
    user="root",
    password="password").load()
dataframe_mysql.show()
Bear in mind, I'm currently working locally with my Spark setup, so there are no real clusters involved, and also no "production" kind of code that gets submitted to such a cluster. For something more elaborate, this answer could help: MySQL read with PySpark
On my computer, @Kondado's solution works only if I change the driver in the options:
driver = 'com.mysql.cj.jdbc.Driver'
I am using Spark 2.4.0 on Windows. I downloaded mysql-connector-java-8.0.15.jar, Platform Independent version, from here, and copied it to 'C:\spark-2.4.0-bin-hadoop2.7\jars\'
My code in Pycharm looks like this:
# import findspark  # not necessary
# findspark.init()  # not necessary
from pyspark import SparkConf, SparkContext, sql
from pyspark.sql import SparkSession

sc = SparkSession.builder.getOrCreate()
sqlContext = sql.SQLContext(sc)
source_df = sqlContext.read.format('jdbc').options(
    url='jdbc:mysql://localhost:3306/database1',
    driver='com.mysql.cj.jdbc.Driver',  # not com.mysql.jdbc.Driver
    dbtable='table1',
    user='root',
    password='****').load()
print(source_df)
source_df.show()
I don't know how to add the jar file to the classpath (can someone tell me how?), so I put it in the SparkSession config and it works fine.
spark = SparkSession \
    .builder \
    .appName('test') \
    .master('local[*]') \
    .enableHiveSupport() \
    .config("spark.driver.extraClassPath", "<path to mysql-connector-java-5.1.49-bin.jar>") \
    .getOrCreate()
df = spark.read.format("jdbc") \
    .option("url", "jdbc:mysql://localhost/<database_name>") \
    .option("driver", "com.mysql.jdbc.Driver") \
    .option("dbtable", "<table_name>") \
    .option("user", "<user>") \
    .option("password", "<password>") \
    .load()
df.show()
This worked for me: pyspark with MSSQL.
Java version is 1.7.0_191; pyspark version is 2.1.2.
Download the below jar files:
sqljdbc41.jar
mssql-jdbc-6.2.2.jre7.jar
Paste the above jars inside the jars folder in the virtual environment:
test_env/lib/python3.6/site-packages/pyspark/jars
from pyspark.sql import SparkSession
spark = SparkSession.builder.appName('Practise').getOrCreate()
url = 'jdbc:sqlserver://your_host_name:your_port;databaseName=YOUR_DATABASE_NAME;useNTLMV2=true;'
df = spark.read.format('jdbc') \
    .option('url', url) \
    .option('user', 'your_db_username') \
    .option('password', 'your_db_password') \
    .option('dbtable', 'YOUR_TABLE_NAME') \
    .option('driver', 'com.microsoft.sqlserver.jdbc.SQLServerDriver') \
    .load()

Celery + SQLAlchemy : DatabaseError: (DatabaseError) SSL error: decryption failed or bad record mac

The error in the title triggers sometimes when using Celery with more than one worker on a PostgreSQL DB with SSL turned on.
I'm in a Flask + SQLAlchemy configuration.
As mentioned here: https://github.com/celery/celery/issues/634
the solution in the django-celery plugin was to simply dispose of all DB connections at the start of the task.
In a Flask + SQLAlchemy configuration, doing this worked for me:
from celery.signals import task_prerun

@task_prerun.connect
def on_task_init(*args, **kwargs):
    engine.dispose()
In case you don't know what "engine" is and how to get it, see here: http://flask.pocoo.org/docs/patterns/sqlalchemy/
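The handler's job is just to call dispose() before each task runs, so every worker opens fresh connections instead of reusing ones inherited across fork. A toy stand-in shows the shape without needing Celery installed (FakeEngine is hypothetical, standing in for the real SQLAlchemy engine):

```python
# FakeEngine is a hypothetical stand-in for a SQLAlchemy engine; dispose()
# closes all pooled connections so the next use opens fresh ones.
class FakeEngine:
    def __init__(self):
        self.dispose_calls = 0

    def dispose(self):
        self.dispose_calls += 1

engine = FakeEngine()

def on_task_init(*args, **kwargs):   # in the real setup this function is
    engine.dispose()                 # decorated with @task_prerun.connect

for _ in range(3):                   # simulate three task executions
    on_task_init()
print(engine.dispose_calls)          # 3
```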

SQLAlchemy and adodbapi Database connection error

I'm attempting to connect to an MSSQL SQLExpress 2012 database using SQLAlchemy 0.7.8 and adodbapi 2.4.2.2 on IronPython 2.7.3.
I am able to create a SQLAlchemy engine; however, when a query is made I get:
"TypeError: 'NoneType' object is unsubscriptable"
TraceBack:
Traceback (most recent call last):
File "C:\Program Files (x86)\IronPython 2.7\Lib\site-packages\SQLAlchemy-0.7.8-py2.7.egg\sqlalchemy\engine\base.py", line 878, in __init__
File "C:\Program Files (x86)\IronPython 2.7\Lib\site-packages\SQLAlchemy-0.7.8-py2.7.egg\sqlalchemy\engine\base.py", line 2558, in raw_connection
File "C:\Program Files (x86)\IronPython 2.7\Lib\site-packages\SQLAlchemy-0.7.8-py2.7.egg\sqlalchemy\pool.py", line 183, in unique_connection
File "<string>", line 9, in <module>
File "C:\Program Files (x86)\IronPython 2.7\Lib\site-packages\SQLAlchemy-0.7.8-py2.7.egg\sqlalchemy\engine\base.py", line 2472, in connect
TypeError: 'NoneType' object is unsubscriptable
Code being used:
def conn():
    return adodbapi.connect('Provider=SQLOLEDB; Data Source=SERVER\SQLEXPRESS; '
                            'Initial Catalog=db; User ID=user; Password=pass;')
engine = create_engine('mssql+adodbapi:///', creator=conn,
                       echo=True, module=adodbapi)
adodbapi seems to work fine on its own, i.e. I can create a connection and then use a cursor to query without any problems; it seems to be something in SQLAlchemy.
Anyone got any ideas?
And we have a workaround:
import adodbapi
from sqlalchemy.engine import create_engine
from sqlalchemy.orm import sessionmaker
import sqlalchemy.pool as pool
def connect():
    return adodbapi.connect('Provider=SQLOLEDB.1;Data Source=mypcname\SQLEXPRESS;'
                            'Initial Catalog=dbname;User ID=user; Password=pass;')
mypool = pool.QueuePool(connect)
conn = mypool.connect()
curs = conn.cursor()
curs.execute('select 1') #anything that forces open the connection
engine = create_engine('mssql+adodbapi://', module=adodbapi, pool = mypool)
Session = sessionmaker()
Session.configure(bind=engine)
sess = Session()
With this my session object works as normal.
I'm probably not using the adodbapi dialect as intended by whoever made it, but I can't find any documentation, so this is what I've gone with for now.
Pretty sure adodbapi doesn't work with SQLAlchemy:
"The adodbapi dialect is not implemented for 0.6 at this time."
Scroll to the very bottom (this is the 0.7.x documentation); I also checked the 0.8 documentation and it says the same thing.
Sounds like you'll have to change which driver you're using.
I use SQLAlchemy to connect to a PostgreSQL database using psycopg2. I am not sure, but from reading the documentation, I think you need to install pyodbc; it seems to be better supported than adodbapi. Once you have installed it, try the following statement to create the engine:
engine = create_engine('mssql+pyodbc://user:pass@host/db')
Or you can check out different ways of writing the connection string here.