How to implement a context manager for MySQL, either with MySQL Connector or PyMySQL

Trying to implement a context manager for a MySQL connection, I get an error message.
I have tried both the mysql.connector module (including its connection options) and the pymysql module to connect to the database, always with the same result.
import pymysql

class MySQLConnector:
    def __init__(self, con_dict):
        self.cnx = None
        self.con_dict = con_dict

    def __enter__(self):
        self.cnx = pymysql.connect(**self.con_dict)
        return self.cnx

    def __exit__(self):
        self.cnx.close()

class ReceiveBroke(QDialog):
    def __init__(self, db, config):
        super().__init__()
        with MySQLConnector(config) as self.cnx:
            cur = self.cnx.cursor()
            qry = "SELECT * FROM horses"
            cur.execute(qry)
            result = cur.fetchall()
            print(result)
        self.setUI()
        self.conTest()

    def conTest(self):
        if self.cnx.open:
            print("y")
I hope to obtain a working context manager that closes the database connection when the with block finishes.
Result: an error message, always after executing the last line within the with block: "TypeError: __exit__() takes 1 positional argument but 4 were given", at which point the program crashes.

I hope you have found the error already. If not, I think the error message is quite clear:
TypeError: __exit__() takes 1 positional argument but 4 were given
The __exit__ method needs four arguments: self, type, value, and traceback. The last three relate to any exception that may happen inside the with block.
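A minimal corrected sketch of the context manager from the question (everything else unchanged):

import pymysql

class MySQLConnector:
    def __init__(self, con_dict):
        self.cnx = None
        self.con_dict = con_dict

    def __enter__(self):
        self.cnx = pymysql.connect(**self.con_dict)
        return self.cnx

    def __exit__(self, exc_type, exc_value, traceback):
        # Receives the exception details (or None, None, None) when the
        # with block ends; close the connection either way.
        self.cnx.close()
        # Returning a falsy value lets any exception propagate.

With that signature, the with block in ReceiveBroke exits cleanly and the connection is closed whether or not the query raised.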

Inserting to MySQL with mysql.connector - good practice/efficiency

I am working on a personal project and was wondering if my solution for inserting data into a MySQL database would be considered "pythonic" and efficient.
I have written a separate class for this, which is called from an object holding a dataframe. From there I call my save() function to write the dataframe to the database.
The script will be running once a day where I scrape some data from some websites and save it to my database. So it is important that it really runs through completely even when I have bad data or temporary connection issues (script and database run on different machines).
import mysql.connector
# custom logger
from myLog import logger
# custom class for formatting the data, a lot of potential errors are handled here
from myFormat import myFormat
# insert strings to mysql are stored and referenced here
import sqlStrings

class saveSQL:
    def __init__(self):
        self.frmt = myFormat()
        self.host = 'XXX.XXX.XXX.XXX'
        self.user = 'XXXXXXXX'
        self.password = 'XXXXXXXX'
        self.database = 'XXXXXXXX'

    def save(self, payload, type):
        match type:
            case 'First':
                return self.__first(payload)
            case 'Second':
                ...
            case _:
                logger.error('Undefined Input for Type!')

    def __first(self, payload):
        try:
            self.mydb = mysql.connector.connect(host=self.host, user=self.user,
                                                password=self.password, database=self.database)
            mycursor = self.mydb.cursor()
        except mysql.connector.Error as err:
            logger.error('Couldn\'t establish connection to DB!')
        try:
            tmpList = payload.values.tolist()
        except ValueError:
            logger.error('Value error in converting dataframe to list: %s' % payload)
        try:
            mycursor.executemany(sqlStrings.First, tmpList)
            self.mydb.commit()
            dbWrite = mycursor.rowcount
        except mysql.connector.Error as err:
            logger.error('Error in writing to database: %s' % err)
            # batch insert failed: retry row by row so a single bad row is skipped
            dbWrite = 0
            for ele in tmpList:
                try:
                    mycursor.execute(sqlStrings.First, ele)
                    self.mydb.commit()
                    dbWrite = dbWrite + mycursor.rowcount
                except mysql.connector.Error as err:
                    logger.error('Error in writing to database: %s \n ele: %s' % (err, ele))
                    continue
                pass
        mycursor.close()
        return dbWrite
Things I am wondering about:
Is the match case a good option to distinguish between writing to different tables depending on the data?
Are the different try/except blocks really necessary or are there easier ways of handling potential errors?
Do I really need the pass command at the end of the for-loop?
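For what it's worth, here is a minimal sketch of one way the error handling could be consolidated, reusing the (assumed) myLog and sqlStrings helpers from the question. A single try/finally guarantees the cursor is closed, and the trailing pass is not needed: a for loop body can simply end with its last real statement.

import mysql.connector
from myLog import logger      # assumed custom logger from the question
import sqlStrings             # assumed insert strings from the question

def write_rows(mydb, rows):
    # Try the fast batch insert first; fall back to row-by-row on failure.
    mycursor = mydb.cursor()
    try:
        try:
            mycursor.executemany(sqlStrings.First, rows)
            mydb.commit()
            return mycursor.rowcount
        except mysql.connector.Error as err:
            logger.error('Batch insert failed, retrying row by row: %s' % err)
        written = 0
        for ele in rows:
            try:
                mycursor.execute(sqlStrings.First, ele)
                mydb.commit()
                written += mycursor.rowcount
            except mysql.connector.Error as err:
                logger.error('Error in writing to database: %s, ele: %s' % (err, ele))
        return written
    finally:
        mycursor.close()   # runs no matter what; no trailing pass required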

Trying to understand Django select_for_update() with MySQL and Processes

I have a Django application that is backed by a MySQL database. I recently moved a section of code out of the request flow and into a Process. The code uses select_for_update() to lock the affected rows in the DB, but now I occasionally see the Process updating a record while it should still be locked by the main thread. If I switch my Executor from a ProcessPoolExecutor to a ThreadPoolExecutor, the locking works as expected. I thought select_for_update() operated at the database level, so it shouldn't make any difference whether the code runs in threads, processes, or even on another machine. What am I missing?
I've boiled my code down to a sample that exhibits the same behaviour:
from concurrent import futures
import logging
from time import sleep

from django.db import transaction

from myapp.main.models import CompoundBase

logger = logging.getLogger()

executor = futures.ProcessPoolExecutor()
# executor = futures.ThreadPoolExecutor()

def test() -> None:
    pk = setup()
    f1 = executor.submit(select_and_sleep, pk)
    f2 = executor.submit(sleep_and_update, pk)
    futures.wait([f1, f2])

def setup() -> int:
    cb = CompoundBase.objects.first()
    cb.corporate_id = 'foo'
    cb.save()
    return cb.pk

def select_and_sleep(pk: int) -> None:
    try:
        with transaction.atomic():
            cb = CompoundBase.objects.select_for_update().get(pk=pk)
            print('Locking')
            sleep(5)
            cb.corporate_id = 'baz'
            cb.save()
            print('Updated after sleep')
    except Exception:
        logger.exception('select_and_sleep')

def sleep_and_update(pk: int) -> None:
    try:
        sleep(2)
        print('Updating')
        with transaction.atomic():
            cb = CompoundBase.objects.select_for_update().get(pk=pk)
            cb.corporate_id = 'bar'
            cb.save()
            print('Updated without sleep')
    except Exception:
        logger.exception('sleep_and_update')

test()
When run as shown I get:
Locking
Updating
Updated without sleep
Updated after sleep
But if I change to the ThreadPoolExecutor I get:
Locking
Updating
Updated after sleep
Updated without sleep
The good news is that it's mostly there. I did some reading around, and based on an answer I found here,
I am assuming that you are running on Linux, as that seems to be the behaviour on that platform.
It looks like under Linux the default process start method is fork, which is usually what you want. However, in this exact circumstance it appears that resources (such as DB connections) are shared with the forked children, so the DB operations are treated as part of the same transaction and thus are not blocked. To get the behaviour you want, each process needs its own resources and must not share them with its parent process (and, consequently, with any other children of the parent).
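As an aside, a pattern I believe also works, though I did not test it here: have each forked task drop the inherited connections up front, so Django lazily opens a fresh connection, and hence a separate transaction, per process. A minimal sketch:

from django.db import connections, transaction
from myapp.main.models import CompoundBase

def locked_update(pk: int) -> None:
    # Close connections inherited from the parent process; Django then
    # reconnects lazily on the next query, giving this process its own
    # connection and therefore its own transaction.
    connections.close_all()
    with transaction.atomic():
        cb = CompoundBase.objects.select_for_update().get(pk=pk)
        cb.corporate_id = 'bar'
        cb.save()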
It is possible to get the behaviour you want using the following code; be aware, however, that I had to split the code into two files.
fn.py
from time import sleep
import logging

from django.db import transaction

import django
django.setup()

from myapp.main.models import CompoundBase

# fn.py runs inside the worker processes, so it needs its own logger
logger = logging.getLogger()

def setup() -> int:
    cb = CompoundBase.objects.first()
    cb.corporate_id = 'foo'
    cb.save()
    return cb.pk

def select_and_sleep(pk: int) -> None:
    try:
        with transaction.atomic():
            cb = CompoundBase.objects.select_for_update().get(pk=pk)
            print('Locking')
            sleep(5)
            cb.corporate_id = 'baz'
            cb.save()
            print('Updated after sleep')
    except Exception:
        logger.exception('select_and_sleep')

def sleep_and_update(pk: int) -> None:
    try:
        sleep(2)
        print('Updating')
        with transaction.atomic():
            cb = CompoundBase.objects.select_for_update().get(pk=pk)
            cb.corporate_id = 'bar'
            cb.save()
            print('Updated without sleep')
    except Exception:
        logger.exception('sleep_and_update')
proc_test.py
from concurrent import futures
from multiprocessing import get_context
import logging

import fn

logger = logging.getLogger()

executor = futures.ProcessPoolExecutor(mp_context=get_context("forkserver"))
# executor = futures.ThreadPoolExecutor()

def test() -> None:
    pk = fn.setup()
    f1 = executor.submit(fn.select_and_sleep, pk)
    f2 = executor.submit(fn.sleep_and_update, pk)
    futures.wait([f1, f2])

test()
There are three strategies for starting a process: fork, spawn, and forkserver. Using either spawn or forkserver appears to get you the behaviour that you are looking for.
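If you prefer not to pass an mp_context to the executor, the same thing can be done globally with the standard library; a minimal sketch:

import multiprocessing
from concurrent import futures

if __name__ == '__main__':
    # Must be called once, before any worker processes are created.
    multiprocessing.set_start_method('forkserver')
    executor = futures.ProcessPoolExecutor()  # now uses the forkserver method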
References:
Multiprocessing Locks
Connections

Python script DB connection as Pool not working, but simple connection is working

I am writing a script in Python 3 that listens to a tunnel and saves or updates data in MySQL depending on the message received.
I ran into weird behaviour: I made a simple connection to MySQL using the pymysql module and everything worked fine, but after some time this simple connection closes.
So I decided to implement a connection pool to MySQL, and here the problem arises. No errors are raised, but the issue is the following:
my cursor line is
cursor = yield self._pool.execute(query, list(filters.values()))
and the cursor result is a tornado_mysql.pools.Pool object at 0x0000019DE5D71F98,
and it gets stuck like that, not doing anything more.
If I remove the yield from the cursor line, it gets past that line, but the next line throws an error:
response = yield c.fetchall()
AttributeError: 'Future' object has no attribute 'fetchall'
How can I fix the MySQL pool connection to work properly?
What I tried:
I used a few modules for the pool connection; all ran into the same issue.
I went back to a simple connection with pymysql and it worked again.
Below is my code:
python script file
import pika
from tornado.gen import coroutine

from model import SyncModel

_model = SyncModel(conf, _server_id)

@coroutine
def main():
    credentials = pika.PlainCredentials('user', 'password')
    try:
        cp = pika.ConnectionParameters(
            host='127.0.0.1',
            port=5671,
            credentials=credentials,
            ssl=False,
        )
        connection = pika.BlockingConnection(cp)
        channel = connection.channel()

        @coroutine
        def callback(ch, method, properties, body):
            if 'messageType' in properties.headers:
                message_type = properties.headers['messageType']
                if message_type in allowed_message_types:
                    result = ptoto_file._reflection.ParseMessage(descriptors[message_type], body)
                    if result:
                        result = protobuf_to_dict(result)
                        if message_type == 'MyMessage':
                            yield _model.message_event(data=result)
                else:
                    print('Message type not in allowed list = ' + str(message_type))
                    print('continue listening...')

        channel.basic_consume(callback, queue='queue', no_ack=True)
        print(' [*] Waiting for messages. To exit press CTRL+C')
        channel.start_consuming()
    except Exception as e:
        print('Could not connect to host 127.0.0.1 on port 5671')
        print(str(e))

if __name__ == '__main__':
    main()
SyncModel
from tornado_mysql import pools
from tornado.gen import coroutine, Return
from tornado_mysql.cursors import DictCursor

class SyncModel(object):
    def __init__(self, conf, server_id):
        self.conf = conf
        servers = [i for i in conf.mysql.servers]
        for s in servers:
            if s['server_id'] == server_id:
                # s holds all connection data: host, user, port, autocommit, charset, db, password
                s['cursorclass'] = DictCursor
                self._pool = pools.Pool(s, max_idle_connections=1, max_recycle_sec=3)

    @coroutine
    def message_event(self, data):
        table_name = 'table_name'
        query = ''
        data = data['message']
        filters = {
            'id': data['id']
        }
        # here the connection fails as described above
        response = yield self.query_select(table_name, self._pool, filters=filters)

    @coroutine
    def query_select(self, table_name, _pool, filters=None):
        if filters is None:
            filters = {}
        combined_filters = ['`%s` = %%s' % i for i in filters.keys()]
        where = 'WHERE ' + ' AND '.join(combined_filters) if combined_filters else ''
        query = """SELECT * FROM `%s` %s""" % (table_name, where)
        c = self._pool.execute(query, list(filters.values()))
        response = yield c.fetchall()
        raise Return({response})
All the code was working with just a simple connection to the database; after I started to use the pool, the example stopped working. I would appreciate any help with this issue.
This is a standalone script.
The pool connection was not working, so I switched back to pymysql with a double check on the connection.
I would like to post the answer that worked; only this solution worked for me:
before connecting to MySQL, check whether the connection is open, and if not, reconnect:
if not self.mysql.open:
    self.mysql.ping(reconnect=True)
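Expanded into a minimal sketch (the Db class and query helper here are hypothetical; only the .open check and ping(reconnect=True) come from the fix above):

import pymysql

class Db:
    def __init__(self, **conn_kwargs):
        # Plain pymysql connection; conn_kwargs are host, user, password, db, ...
        self.mysql = pymysql.connect(**conn_kwargs)

    def _ensure_connected(self):
        # pymysql exposes .open on the connection; ping(reconnect=True)
        # re-establishes it if the server dropped it in the meantime.
        if not self.mysql.open:
            self.mysql.ping(reconnect=True)

    def query(self, sql, params=None):
        self._ensure_connected()
        with self.mysql.cursor() as cur:
            cur.execute(sql, params or ())
            return cur.fetchall()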

How does SQLAlchemy InvalidRequestError('Transaction XXX is not on the active transaction list') happen?

Recently I got a SQLAlchemy InvalidRequestError.
The error log shows:
InvalidRequestError: Transaction <sqlalchemy.orm.session.SessionTransaction object at
0x106830dd0> is not on the active transaction list
In what circumstances will this error be raised?
----- Edit -----
# the following two lines are actually in my decorator
s = Session()
s.add(model1)
# refer to <http://techspot.zzzeek.org/2012/01/11/django-style-database-routers-in-sqlalchemy/>
s2 = Session().using_bind('master')
model2 = s2.query(Model2).with_lockmode('update').get(1)
model2.somecolumn = 'new'
s2.commit()
This exception is then raised.
----- Edit 2 -----
s = Session().using_bind('master')
model = Model(user_id=123456)
s.add(model)
s.flush()
# here, raise the exception.
# I add log in get_bind() of RoutingSession. when doing 'flush', the _name is None, and it returns engines['slave'].
#If I use commit() instead of flush(), then it commits successfully
I changed the using_bind method to the following and it works well:
def using_bind(self, name):
    self._name = name
    return self
The previous RoutingSession:
class RoutingSession(Session):
    _name = None

    def get_bind(self, mapper=None, clause=None):
        logger.info(self._name)
        if self._name:
            return engines[self._name]
        elif self._flushing:
            logger.info('master')
            return engines['master']
        else:
            logger.info('slave')
            return engines['slave']

    def using_bind(self, name):
        s = RoutingSession()
        vars(s).update(vars(self))
        s._name = name
        return s
That's an internal assertion which should never occur. There's no way to answer this question without at least a full stack trace; perhaps you are improperly using the Session in a concurrent fashion, or manipulating its internals. I can only get that exception to raise if I manipulate private methods or state pertaining to the Session object.
Here's that:
from sqlalchemy.orm import Session

s = Session()
s2 = Session()

t = s.transaction
t2 = s2.transaction

# Nonsensical assignment of the SessionTransaction from one Session to
# also be referred to by another; this corrupts the transaction chain by
# leaving out "t2". ".transaction" should never be assigned to on the
# outside.
s2.transaction = t

t2.rollback()  # triggers the assertion case
Basically, the above should never happen, since you're not supposed to assign to .transaction; that's a read-only attribute.

WSGI application middleware to handle SQLAlchemy session

My WSGI application uses SQLAlchemy. I want to start a session when a request starts, commit it if it's dirty and request processing finished successfully, and roll it back otherwise. So, I need to implement the behaviour of Django's TransactionMiddleware.
So I suppose I should create a WSGI middleware that does the following:
Create and add DB session to environ on pre-processing.
Get DB session from environ and call commit() on post-processing, if no errors occurred.
Get DB session from environ and call rollback() on post-processing, if some errors occurred.
Step 1 is obvious to me:
class DbSessionMiddleware:
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        environ['db_session'] = create_session()
        return self.app(environ, start_response)
Steps 2 and 3 are not. I found this example of a post-processing task:
class Caseless:
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        for chunk in self.app(environ, start_response):
            yield chunk.lower()
It is accompanied by this comment:
Note that the __call__ function is a Python generator, which is typical for this sort of “post-processing” task.
Could you please clarify how this works, and how I can solve my issue similarly.
Thanks,
Boris.
For step 1 I use SQLAlchemy scoped sessions:
engine = create_engine(settings.DB_URL, echo=settings.DEBUG, client_encoding='utf8')
Base = declarative_base()
sm = sessionmaker(bind=engine)
get_session = scoped_session(sm)
They return the same thread-local session for each get_session() call.
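For illustration, within one thread the registry always hands back the same object:

s1 = get_session()
s2 = get_session()
assert s1 is s2  # the same thread-local Session instance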
Steps 2 and 3 currently look like this:
class DbSessionMiddleware:
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        try:
            db.get_session().begin_nested()
            return self.app(environ, start_response)
        except BaseException:
            db.get_session().rollback()
            raise
        finally:
            db.get_session().commit()
As you can see, I start a nested transaction on the session so that I can roll back even queries that were already committed in the views.
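One caveat with the middleware above: return self.app(environ, start_response) hands the response iterable back before it has been consumed, so the commit in the finally block can run before a lazily evaluated view body has finished. A generator variant in the spirit of the Caseless example, which commits or rolls back only after the response has been fully iterated, might look like this (a sketch, assuming the get_session registry from above):

class DbSessionMiddleware:
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        session = get_session()
        try:
            # Pass the body through unchanged; an exception raised while
            # the server iterates the response propagates through here.
            for chunk in self.app(environ, start_response):
                yield chunk
        except BaseException:
            session.rollback()
            raise
        else:
            session.commit()
        finally:
            get_session.remove()  # discard the thread-local session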