gevent + concurrent.futures and SQLAlchemy

I'm running a Pyramid app inside a gunicorn container with gevent async workers;
one of the endpoints is a long-poll endpoint polling AMQP via kombu.
If the long poll returns some data from AMQP within the 30s timeout, I need to save it to Postgres before returning. Now the question is:
Is it OK to start a concurrent.futures.ThreadPoolExecutor in the context of an app running in a gevent loop, and deal with SQLAlchemy sessions and data persistence inside a future submitted to the executor?
Or am I completely wrong in my way of thinking?
PS The DB driver is psycopg2.

After using ThreadPoolExecutor with gevent in my highly concurrent app, I can confirm that this works fine.
But it is unnecessary if the DB driver is already gevent-"friendly", e.g. psycopg2 patched with psycogreen.
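For reference, a minimal sketch of the psycogreen route: it installs a wait callback so psycopg2's blocking calls yield to the gevent hub instead of blocking the worker (patch_psycopg is psycogreen's documented entry point; run it before any connections are opened):

from gevent import monkey
monkey.patch_all()  # patch the stdlib before anything else imports it

from psycogreen.gevent import patch_psycopg
patch_psycopg()  # psycopg2 now cooperates with the gevent event loop

# From here on, SQLAlchemy sessions backed by psycopg2 can run directly
# in greenlets, with no ThreadPoolExecutor needed for DB writes.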

Related

How to protect db when using npm mysql library?

If there are many requests hitting the DB server at the same time, say the QPS is 100, and the DB server has a connection limit of, say, 1000, and the requests are slow queries that will eventually hit an inactivity timeout, what should I do to prevent the npm package mysql from creating new connections?
Because the npm package mysql removes a connection object from the connection pool on a fatal error like an inactivity timeout, leaving space for a new connection to be created.
For high load, you should use connection pools with persistent connections. Those are usually available in high-level query builders and ORMs like knex and sequelize.
But if you don't want to use them, you can also try native pools.

Every request from a Django app increases the number of MySQL connections

I have a project built using Django 1.11, and I am sending a request from my admin view; it creates a new DB connection on every request (using the Django development server, runserver).
But the same thing using gunicorn as the server does not increase the number of connections in the DB; it reuses the connection that was created on the first request.
In my database settings, CONN_MAX_AGE is set to 300, which is 5 minutes. I am sending the second request within 5 minutes, so it is supposed to reuse the connection created on the first request.
Any idea why, with runserver, Django creates a new DB connection on every request instead of following Django's persistent-connections behavior?
From the docs:
The development server creates a new thread for each request it
handles, negating the effect of persistent connections. Don’t enable
them during development.
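For context, persistent connections are configured per database alias in settings.py; a minimal sketch (the engine and database name are placeholders), which only takes effect under a real application server such as gunicorn, never under runserver:

# settings.py
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'mydb',
        # Keep each worker's connection open for up to 300 seconds
        # instead of reconnecting on every request.
        'CONN_MAX_AGE': 300,
    }
}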

SQLAlchemy core + Pyramid not closing connections

I have SQLAlchemy Core 1.0.9 with the Pyramid framework 1.7, and I am using the following configuration for connecting to a PostgreSQL 9.4 database:
# file __init__.py
from .factories import root_factory
from pyramid.config import Configurator
from sqlalchemy import engine_from_config

def main(global_config, **settings):
    """This function returns a Pyramid WSGI application."""
    config = Configurator(settings=settings, root_factory=root_factory)
    engine = engine_from_config(settings, prefix='sqlalchemy.')

    # Retrieves a database connection for the current request
    def get_db(request):
        connection = engine.connect()

        def disconnect(request):
            connection.close()

        request.add_finished_callback(disconnect)
        return connection

    config.add_request_method(get_db, 'db', reify=True)
    config.scan()
    return config.make_wsgi_app()
After a few hours of using the app, I start getting the following error:
sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) FATAL: remaining connection slots are reserved for non-replication superuser connections
Apparently I have reached the maximum number of connections. It seems like connection.close() doesn't really close the connection; it just returns the connection to the pool. I know I could use NullPool to disable pooling, but that would probably have a huge impact on performance.
Does somebody know the right way to configure SQLAlchemy Core to get good performance and close connections properly?
Please abstain from sending links to Pyramid tutorials. I am NOT interested in SQLAlchemy ORM setups. Only SQLAlchemy Core, please.
Actually, everything was fine in the previous setup. The problem was caused by Celery workers not closing their connections.
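For anyone hitting the same symptom: one hedged fix is to dispose of the engine's inherited pool when each forked Celery worker starts, so workers stop accumulating the parent's connections (worker_process_init is Celery's documented signal; the engine import path here is hypothetical):

from celery.signals import worker_process_init
from myapp.db import engine  # hypothetical module holding the shared engine

@worker_process_init.connect
def reset_engine_pool(**kwargs):
    # A forked worker inherits the parent's pooled connections; dispose()
    # discards them so each process opens and tracks its own connections.
    engine.dispose()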

Django: Pooling MySQL DB Connections

I have a Django App with a pretty standard server stack
DB Backend : MySQL
WSGI Server : Gunicorn
Async worker class : Gevent
I want Django to pool MySQL connections rather than creating connections on every request.
Starting with 1.6, Django has introduced persistent connections, but there are issues with async workers.
Hence, either a different MySQL backend or app-level connection pooling is required. I've read about several options, some in very old articles. Following are some:
Django MySQL backends
django-mysqlpool
App level Connection pool
with SQL Alchemy
another with SQL Alchemy
Some Patches are also available
Django Patch
Some other approaches
MySQL DB Connector
I'm really confused as to which of these approaches is the best way to pool connections. Any help is highly appreciated.
This project still works on Django 1.9, and worked well for us.
https://github.com/djangonauts/djorm-ext-pool
Your requirement:
to pool MySQL connections rather than creating connections on every request.
My suggestions:
At the DB level:
Your application is IO-intensive, so the proposal is to use a MySQL connection pool; you may be able to use a third-party MySQL pool.
At the app level:
Don't pool connections at the app level. Instead, mostly use a cache (maybe a Redis cache, etc.); this can reduce the number of connections.
At the web-server level:
Your so-called WSGI server is lightweight, so it does not implement pooling; you can refactor to use a queue to improve connection reuse, or build an event queue on top of gevent.
Hope this can give you some help.
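As an illustration of the DB-level suggestion, SQLAlchemy's pool module can be used on its own, without the ORM, to pool raw MySQLdb connections (the host and credentials below are placeholders):

import MySQLdb
import sqlalchemy.pool as pool

def getconn():
    # Placeholder credentials; substitute your own settings.
    return MySQLdb.connect(host="127.0.0.1", user="uname", passwd="pwd", db="mydb")

# QueuePool keeps up to pool_size idle connections open and hands them
# out again instead of reconnecting on every request.
mypool = pool.QueuePool(getconn, pool_size=5, max_overflow=10)

conn = mypool.connect()   # checks a connection out of the pool
cur = conn.cursor()
cur.execute("SELECT 1")
conn.close()              # returns the connection to the pool rather than closing it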

create_engine problems with MySQL 5.7 and SQLAlchemy

I've inherited an application making use of Python & SQLAlchemy to interact with a MySQL database. When I issue:
mysql_engine = sqlalchemy.create_engine('mysql://uname:pwd@192.168.xx.xx:3306/testdb', connect_args={'use_unicode': True, 'charset': 'utf8', 'init_command': 'SET NAMES UTF8'}, poolclass=NullPool)
at startup, an exception is thrown:
cmd = unicode("USE testdb")
with mysql_engine.begin() as conn:
    conn.execute(cmd)
sqlalchemy.exc.OperationalError: (OperationalError) (2003, "Can't connect to MySQL server on '192.168.xx.xx' (101)") None None
However, using IDLE I can do:
>>> import MySQLdb
>>> Con = MySQLdb.Connect(host="192.168.xx.xx", port=3306, user="uname", passwd="pwd", db="testdb")
>>> Cursor = Con.cursor()
>>> sql = "USE testdb"
>>> Cursor.execute(sql)
The application at this point defaults to using an onboard sqlite database. After this I can quite happily switch to the MySQL database using the create_engine statement above. However, on reboot the MySQL database connection will fail again, defaulting to the onboard sqlite db, etc, etc.
Has anyone got any suggestions as to how this could be happening?
Just thought I would update this - the problem still occurs exactly as described above. I've updated the app so that the user can manually connect to the MySQL db by selecting a menu option. This calls the identical code that throws an exception when the app is starting, but works just fine once the app is up and running.
The MySQL instance is completely separate from the app and running throughout, so it should be available to receive connections at all times.
I guess the fundamental question I'm grappling with is: how can the same connect code work when the app is up and running, but throw an exception when it is starting?
Is there any artifact of SQLAlchemy that can cause it to fail to create usable connections that isn't dependent on the connection parameters or the remote database?
Ahhh, it all seems so obvious now...
The reason for the exception on startup was that the network interface hadn't finished configuring when the application made its first request to the remote database. (Which is why the same attempt would succeed when made later.)
As communication with the remote database is a prerequisite for the application, I now do something like this:
if grep -Fxq "mysql" /path/to/my/db/config.config
then
    while ! ip a | grep inet.*wlan0 ; do sleep 1; echo "waiting for network..."; done;
fi
... in the startup script for my application - ensuring that the network interface has finished configuring before the application can run.
Of course, the application will never run if the interface doesn't configure, so it still needs some finessing to allow it to timeout and default to using a local database...
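A hedged alternative to the shell loop is to handle the timeout and local-database fallback in Python itself (the URL, attempt count, and SQLite path below are illustrative):

import time
from sqlalchemy import create_engine
from sqlalchemy.exc import OperationalError

def engine_with_fallback(url, attempts=30, delay=1.0):
    # Retry while the network interface comes up; if the remote DB never
    # answers, fall back to the onboard SQLite database.
    engine = create_engine(url)
    for _ in range(attempts):
        try:
            with engine.connect():
                return engine
        except OperationalError:
            time.sleep(delay)
    return create_engine("sqlite:///local.db")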