I am currently using Flask-uWSGI-Websockets to provide websocket functionality for my application. I use Flask-SQLAlchemy to connect to my MySQL database.
Flask-uWSGI-Websockets uses gevent to manage websocket connections.
The problem I am currently having is that when a websocket connection ends, the database connection set up by Flask-SQLAlchemy keeps on living.
I have tried calling db.session.close() and db.engine.dispose() after every websocket connection, but this had no effect.
Calling gevent.monkey.patch_all() at the beginning of my app does not make a difference.
A simple representation of what I am doing is this:
from gevent.monkey import patch_all
patch_all()

from flask import Flask
from flask_uwsgi_websocket import GeventWebSocket
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
ws = GeventWebSocket()
db = SQLAlchemy()
db.init_app(app)
ws.init_app(app)

@ws.route('/ws')
def websocket(client):
    """ handle messages """
    while client.connected is True:
        msg = client.recv()
        # do some db stuff with the message

    # The following part is executed when the connection is broken.
    # I tried this for removing the connection, but the actual
    # connection will stay open (I can see this on the MySQL server).
    db.session.close()
    db.engine.dispose()
I had the same situation, and the solution for me was in the MySQL configuration file my.cnf:
[mysqld]
interactive_timeout=180
wait_timeout=180
You must restart the MySQL service after saving my.cnf.
If you don't want to restart the MySQL service, you can use these SQL queries instead:
SET GLOBAL interactive_timeout = 180;
SET GLOBAL wait_timeout = 180;
See also wait_timeout and interactive_timeout on mysql.com
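On the client side, a complementary option (a minimal sketch, assuming Flask-SQLAlchemy's SQLALCHEMY_ENGINE_OPTIONS is available; the 170-second value is an assumption) is to recycle pooled connections just under the server-side timeout:

# Sketch only: recycle pooled connections before the 180-second
# wait_timeout configured above, and probe connections before reuse.
app.config['SQLALCHEMY_ENGINE_OPTIONS'] = {
    'pool_recycle': 170,    # seconds; keep below wait_timeout
    'pool_pre_ping': True,  # test a connection before handing it out
}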
I'm making a Flask application that uses SQLAlchemy as a layer between the application and a Postgres database. Currently I'm using a 'config.py' file that fetches the sensitive connection info from system variables. But my IT admin says it's not sufficiently safe, as we will be hosting the server ourselves rather than using PaaS. What would be the smoothest and most efficient way to provide the db connection to SQLAlchemy without exposing the sensitive connection info to anybody who has access to the server (and is thereby able to read the system variables)?
I'm using VisualStudio as IDE, so dev environment is windows, but would like to be able to deploy on linux if needed.
This is my 'runserver.py' file:
...
from config import DevelopmentConfig, ProductionConfig, TestingConfig

app = create_app(ProductionConfig)

if __name__ == '__main__':
    HOST = environ.get('SERVER_HOST', 'localhost')
    try:
        PORT = int(environ.get('SERVER_PORT', '6388'))
    except ValueError:
        PORT = 6388
    app.run(HOST, PORT)
And this is my '__init__.py' file:
def create_app(config=DevelopmentConfig):
    app = Flask(__name__)
    app.config.from_object(config)
    db.init_app(app)
    ...
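For reference, a minimal sketch of the kind of config.py assumed here, with the sensitive values read from system variables (the APP_DB_URI name and the defaults are hypothetical, not the asker's actual setup):

# config.py -- sketch only; variable names are assumptions
from os import environ

class Config:
    SQLALCHEMY_TRACK_MODIFICATIONS = False

class DevelopmentConfig(Config):
    DEBUG = True
    SQLALCHEMY_DATABASE_URI = environ.get('APP_DB_URI', 'sqlite:///dev.db')

class TestingConfig(Config):
    TESTING = True
    SQLALCHEMY_DATABASE_URI = 'sqlite:///:memory:'

class ProductionConfig(Config):
    DEBUG = False
    SQLALCHEMY_DATABASE_URI = environ.get('APP_DB_URI')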
I'm a newbie running a Flask app connected to a remote MySQL server with Flask-SQLAlchemy.
The app has very little traffic and usually stays idle for more than 8 hours; after that I get disconnected from the MySQL server.
This is my app code:
from flask import Flask, render_template, request, redirect, jsonify, make_response
from flask_sqlalchemy import SQLAlchemy
from sqlalchemy import or_
app = Flask(__name__)
app.config['DEBUG'] = True
app.config['SQLALCHEMY_ENGINE_OPTIONS'] = {'pool_pre_ping': True, 'pool_recycle': 300, 'echo': 'debug'}
app.config['SQLALCHEMY_DATABASE_URI'] = 'mysql+pymysql://user:pass@myipcode/db'
db = SQLAlchemy(app)
Everything works OK until no queries are performed for 8 hours; then I lose the db connection and the logs show this error:
sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (2006, "MySQL server has gone away (BrokenPipeError(32, 'Broken pipe'))")
I did some research and was advised to set 'SQLALCHEMY_ENGINE_OPTIONS' as written in the code sample, but it behaves the same with or without those engine options: the connection is not recycled every 300 seconds as it should be, and pool_pre_ping doesn't seem to make any difference. The 'echo': 'debug' option works as intended, since I get every transaction logged.
What should I do to prevent the connection from being dropped even after a long period of inactivity?
EDIT:
To add some additional info:
The database is hosted in Cloud SQL from GCP.
I'm lost...
Any help would be greatly appreciated.
Finally I figured it out myself.
It had to do with the fact that my app was running on a mount point like http://ServerIP/app instead of http://ServerIP/, because it was initially intended as a staging server.
I was using uWSGI, and in order to make the app work under the aforementioned path I had to specify a mount parameter in the [uwsgi] block of the app.ini file.
When the uWSGI server started, it showed that it was mounting two apps, one at '' and the other at '/app', and I guess that created a conflict that left the app unable to manage its connections to the MySQL server.
Mounting the app at http://ServerIP/ worked like a charm.
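For illustration, a hedged sketch of the two app.ini variants (the wsgi.py file name and the module:callable are assumptions, not the original config):

; app.ini -- sketch only
[uwsgi]
; the staging variant that produced the double mount at '' and '/app':
; mount = /app=wsgi.py
; manage-script-name = true
; the variant that worked, serving the app at the root:
module = wsgi:app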
I have SQLAlchemy Core 1.0.9 with the Pyramid framework 1.7, and I am using the following configuration for connecting to a Postgres 9.4 database:
# file __init__.py
from .factories import root_factory
from pyramid.config import Configurator
from sqlalchemy import engine_from_config

def main(global_config, **settings):
    """ This function returns a Pyramid WSGI application."""
    config = Configurator(settings=settings, root_factory=root_factory)
    engine = engine_from_config(settings, prefix='sqlalchemy.')

    # Retrieves a database connection for the current request
    def get_db(request):
        connection = engine.connect()
        def disconnect(request):
            connection.close()
        request.add_finished_callback(disconnect)
        return connection

    config.add_request_method(get_db, 'db', reify=True)
    config.scan()
    return config.make_wsgi_app()
After a few hours using the app I start getting the following error:
sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) FATAL: remaining connection slots are reserved for non-replication superuser connections
Apparently I have reached the maximum number of connections. It seems like connection.close() doesn't really close the connection, it just returns the connection to the pool. I know I could use the NullPool to disable pooling, but that would probably have a huge impact on performance.
Does somebody know the right way to configure SQLAlchemy Core to get a good performance and close the connections properly?
Please abstain from sending links to pyramid tutorials. I am NOT interested in SQLAlchemy ORM setups. Only SQLAlchemy Core please.
Actually everything was fine in the previous setup. The problem was caused by Celery workers not closing connections.
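For anyone who hits the same thing, here is a minimal sketch of having each Celery task hand its connections back (the signal choice and the myapp.db module are assumptions, not the original setup):

# Sketch: dispose the engine's pool after every Celery task so worker
# processes do not accumulate idle Postgres connections.
from celery.signals import task_postrun

from myapp.db import engine  # hypothetical module exposing the engine

@task_postrun.connect
def release_db_connections(**kwargs):
    engine.dispose()  # closes all pooled connections held by this worker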
I've inherited an application making use of python & sqlalchemy to interact with a mysql database. When I issue:
mysql_engine = sqlalchemy.create_engine('mysql://uname:pwd@192.168.xx.xx:3306/testdb', connect_args={'use_unicode': True, 'charset': 'utf8', 'init_command': 'SET NAMES UTF8'}, poolclass=NullPool)
, at startup, an exception is thrown:
cmd = unicode("USE testdb")
with mysql_engine.begin() as conn:
    conn.execute(cmd)
sqlalchemy.exc.OperationalError: (OperationalError) (2003, "Can't connect to MySQL server on '192.168.xx.xx' (101)") None None
However, using IDLE I can do:
>>> import MySQLdb
>>> Con = MySQLdb.Connect(host="192.168.xx.xx", port=3306, user="uname", passwd="pwd", db="testdb")
>>> Cursor = Con.cursor()
>>> sql = "USE testdb"
>>> Cursor.execute(sql)
The application at this point defaults to using an onboard sqlite database. After this I can quite happily switch to the MySQL database using the create_engine statement above. However, on reboot the MySQL database connection will fail again, defaulting to the onboard sqlite db, etc, etc.
Has anyone got any suggestions as to how this could be happening?
Just thought I would update this - the problem still occurs exactly as described above. I've updated the app so that the user can manually connect to the MySQL db by selecting a menu option. This calls the identical code, which raises an exception while the app is starting but works just fine once the app is up and running.
The MySQL instance is completely separate from the app and running throughout, so it should be available to receive connections at all times.
I guess the fundamental question I'm grappling with is: how can the same connection code work when the app is up and running, but throw an exception when it is starting?
Is there any artifact of SQLAlchemy that can cause it to fail to create usable connections that isn't dependent on the connection parameters or the remote database?
Ahhh, it all seems so obvious now...
The reason for the exception on startup was because the network interface hadn't finished configuring when the application would make its first request to the remote database. (Which is why the same thing would be successful when attempted at a later time).
As communication with the remote database is a prerequisite for the application, I now do something like this:
if grep -Fxq "mysql" /path/to/my/db/config.config
then
    while ! ip a | grep inet.*wlan0 ; do sleep 1; echo "waiting for network..."; done;
fi
... in the startup script for my application - ensuring that the network interface has finished configuring before the application can run.
Of course, the application will never run if the interface doesn't configure, so it still needs some finessing to allow it to timeout and default to using a local database...
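That finessing might look something like the following Python sketch (the one-minute budget and the function name are assumptions, not code from the app):

# Sketch: retry the remote MySQL server until a deadline, then default
# to the local sqlite database, mirroring the startup-script logic above.
import time

from sqlalchemy import create_engine
from sqlalchemy.exc import OperationalError
from sqlalchemy.pool import NullPool

def connect_with_fallback(mysql_url, sqlite_url, timeout=60):
    deadline = time.time() + timeout
    while time.time() < deadline:
        engine = create_engine(mysql_url, poolclass=NullPool)
        try:
            with engine.connect():
                return engine  # network is up and MySQL answered
        except OperationalError:
            time.sleep(1)  # the interface may still be configuring
    return create_engine(sqlite_url)  # default to the local database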
Bottom line first: How do you refresh the MySQL connection in django?
Following a MySQL server has gone away error I found that MySQL documentation and other sources (here) suggest increasing the wait_timeout MySQL parameter. To me this seems like a workaround rather than a solution. I'd rather keep a reasonable wait_timeout and refresh the connection in the code.
The error:
File "C:\my_proj\db_conduit.py", line 147, in load_some_model
  SomeModel.objects.update()
File "C:\Python26\lib\site-packages\django-1.3-py2.6.egg\django\db\models\manager.py", line 177, in update
  return self.get_query_set().update(*args, **kwargs)
File "C:\Python26\lib\site-packages\django-1.3-py2.6.egg\django\db\models\query.py", line 469, in update
  transaction.commit(using=self.db)
File "C:\Python26\lib\site-packages\django-1.3-py2.6.egg\django\db\transaction.py", line 142, in commit
  connection.commit()
File "C:\Python26\lib\site-packages\django-1.3-py2.6.egg\django\db\backends\__init__.py", line 201, in commit
  self._commit()
File "C:\Python26\lib\site-packages\django-1.3-py2.6.egg\django\db\backends\__init__.py", line 46, in _commit
  return self.connection.commit()
OperationalError: (2006, 'MySQL server has gone away')
Setup: Django 1.3.0 , MySQL 5.5.14 , innodb 1.1.8 , Python 2.6.6, Win7 64bit
The idea of the solution is clear: reconnect to mysql if the current connection is broken.
Please check this out:
def make_sure_mysql_usable():
    from django.db import connection, connections
    # mysql is lazily connected to in django.
    # connection.connection is None means
    # you have not connected to mysql before
    if connection.connection and not connection.is_usable():
        # destroy the default mysql connection
        # after this line, when you use ORM methods
        # django will reconnect to the default mysql
        del connections._connections.default
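A hypothetical usage, guarding the ORM call from the traceback above before querying after a long idle period:

# Sketch: check the connection first; Django reconnects lazily on the
# next ORM access once the stale connection has been dropped.
make_sure_mysql_usable()
SomeModel.objects.update()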
I'm having the same issue.
I needed a way to check the connection state of a MySQLdb connection in Django.
I guess it can be achieved with:
try:
    cursor.execute(sql)
except OperationalError:
    reconnect()  # pseudocode: recreate the connection here
Does anybody have a better idea?
UPDATE
My decision: check the state of the MySQLdb connection, and recreate the connection if there is an error:
self.connection.stat()
if self.connection.errno() != 0:
    self.refresh_connection()  # hypothetical helper; its body is shown below
UPDATE AGAIN
You also need to handle the case where the connection is closed:
if self.connection.open:
    self.connection.stat()
Refreshing the connection is just recreating it:
db_settings = settings.DATABASES['mysql_db']
try:
    self.connection = MySQLdb.connect(host=db_settings['HOST'], port=int(db_settings['PORT']), db=db_settings['NAME'], user=db_settings['USER'], passwd=db_settings['PASSWORD'])
except MySQLdb.OperationalError, e:
    self.connection = None
Since Django 1.6, you can use
import django.db
django.db.close_old_connections()
This does basically the same thing as adamsmith's answer except that it handles multiple databases and also honors the CONN_MAX_AGE setting. Django calls close_old_connections() automatically before and after each request, so you normally don't have to worry about it unless you have some long-running code outside of the normal request/response cycle.
The main reason that leads to this exception is usually that the client has been idle longer than wait_timeout on the MySQL server.
To prevent that kind of error, Django supports an option named CONN_MAX_AGE, which allows Django to recreate a connection if the old one has been idle too long.
So you should make sure that the CONN_MAX_AGE value is smaller than the wait_timeout value.
One important thing is that Django with WSGI checks CONN_MAX_AGE on every request by calling close_old_connections, so you mostly don't need to care about it. However, if you are using Django in a standalone application, there is no trigger to run that function, so you have to call it manually. Call close_old_connections in your code base.
Note: close_old_connections will keep old connections if they have not expired yet, so your connections are still reused in the case of high-frequency queries.
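A minimal settings sketch of that relationship (the database name and the 60-second value are illustrative assumptions; keep CONN_MAX_AGE below the server's wait_timeout):

# settings.py -- sketch only
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'mydb',       # hypothetical database name
        'CONN_MAX_AGE': 60,   # recycle connections older than 60 seconds
    }
}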
This can also close idle connections and keep things working.
So when you need to make a query after a long idle period, running the lines below will work:
from django.db import close_old_connections

# To prevent the error if possible.
close_old_connections()

# Then the following query should always be OK.
YourModel.objects.all()