Correct usage of SQLAlchemy pools with PgBouncer - sqlalchemy

Hi there!
I use PgBouncer with SQLAlchemy to obtain connections.
To improve my understanding, I would like to clarify the following points:
I use SQLAlchemy's default pool (QueuePool) and take connections from my PgBouncer. After a transaction completes, is the connection returned to the "lazy" connection storage on my side (SQLAlchemy) or directly to the PgBouncer pool?
If I use QueuePool with pool_size=5 and create an engine pointed at PgBouncer, does PgBouncer allocate these 5 connections at once, or are connections opened on demand?
If I instead remove client-side pooling (using NullPool) and create the engine against PgBouncer, does that mean that on exiting the transaction context the connection is closed, and a new one will be created inside PgBouncer on the next request?
Which of these approaches is more correct in the context of using sqlalchemy + pgbouncer?
With connections I work like this:
async with async_session() as connect:
    yield connect
    await connect.commit()

I think you don't need an in-app connection pool if you use PgBouncer; you can either use NullPool + PgBouncer, or QueuePool with the use_lifo flag (pool_use_lifo=True when passed to create_engine), which uses a LIFO queue so recently used connections are reused first. I am not entirely sure whether using both PgBouncer and an in-app connection pool would be beneficial or harmful, though.
Here's a relevant documentation link: https://docs.sqlalchemy.org/en/20/core/pooling.html
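To make the two options concrete, here is a minimal sketch of both engine configurations. The URLs are illustrative: an in-memory SQLite URL stands in for the PgBouncer DSN so the snippet runs anywhere; against PgBouncer you would instead point something like "postgresql+asyncpg://user:pass@pgbouncer-host:6432/db" at create_async_engine.

```python
from sqlalchemy import create_engine, text
from sqlalchemy.pool import NullPool, QueuePool

# Option 1: NullPool - no client-side pooling; every checkout opens a
# fresh connection that PgBouncer multiplexes onto its server-side pool.
engine_null = create_engine("sqlite://", poolclass=NullPool)

# Option 2: keep QueuePool but hand out the most recently used idle
# connection first (pool_use_lifo is the create_engine spelling of the
# pool's use_lifo flag), so long-idle connections can be aged out.
engine_lifo = create_engine(
    "sqlite://",              # stand-in for the PgBouncer DSN
    poolclass=QueuePool,
    pool_size=5,
    pool_use_lifo=True,
)

with engine_null.connect() as conn:
    assert conn.execute(text("select 1")).scalar() == 1
```

Either way, PgBouncer only opens server connections on demand; pool_size controls only the client side.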
Also, for automatic transaction commit you could use sessionmaker.begin():
async with async_sessionmaker.begin() as session:
    ...
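A runnable sketch of the begin() pattern, shown with the synchronous sessionmaker and an in-memory SQLite stand-in so it runs anywhere; async_sessionmaker.begin() behaves the same under async with:

```python
from sqlalchemy import create_engine, text
from sqlalchemy.orm import sessionmaker

engine = create_engine("sqlite://")  # stand-in DSN for illustration
factory = sessionmaker(engine)

# begin() opens a transaction, commits on clean exit,
# and rolls back if the block raises.
with factory.begin() as session:
    session.execute(text("create table t (x integer)"))
    session.execute(text("insert into t (x) values (1)"))

# The row is visible afterwards because the block committed on exit.
with factory() as session:
    assert session.execute(text("select x from t")).scalar() == 1
```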

Related

Return connection to knex db pool

I'm using knex version 3.10.10, in my node app, connecting to MySQL DB.
My configuration of knex in the app is using the pool option configuration.
1) Is there a need to EXPLICITLY return a connection to the pool after I fired a query? If yes - how
2) Is there a need to EXPLICITLY perform a check on a pool's connection, before firing the query?
Thanks in advance
No. There is no need to do either.
Knex handles a connection pool for you. You can adjust the pool size if you need to by using the setting: pool: { min: 0, max: 7 } within your connection setup, and the documentation also includes a link to the library that Knex uses for pool handling if you care about the gory details.
The knex documentation has a little info on this here: link
Each connection will be used by Knex for the duration of a query or a transaction, then released back to the pool.
BUT, if you implement transactions (i.e. multiple SQL statements to be saved or cancelled as a unit) without using Promises, then you will need to explicitly commit/rollback the transaction to properly complete the transaction, which will also release the connection back to the pool when the transaction is complete. (see more on Knex Transactions: here).
There is no such info in the documentation but based on the source code you can access knex pool like this
const knex = require('knex')(config);
const pool = knex.client.pool;
console.log(pool);
knex uses the tarn pool under the hood, so you can check out its methods there.
P.S. I don't know where you got that knex version (3 point something), but the current version at the time of this answer is 0.14.4.

How to get number of unused/used connection in nodejs mysql connection pool?

I am using nodejs connection pooling, with npm's "mysql" module.
While creating a pool I have specified the connectionLimit as 100.
I would like to know how many of my connections are used/unused from the pool at runtime.
By looking at the source code here, it appears that you can look at:
pool.config.connectionLimit // passed in max size of the pool
pool._freeConnections.length // number of free connections awaiting use
pool._allConnections.length // number of connections currently created, including ones in use
pool._acquiringConnections.length // number of connections in the process of being acquired
Note: new connections are created as needed up to the max size of the pool, so _freeConnections.length can be zero while the pool is still under its limit; in that case the next .getConnection() call will create a new connection.

MySQL connection pool on Nodejs

If Node is single-threaded, what is the advantage of using a pool to connect with MySQL?
If it is, when should I release a connection?
Sharing the same, persistent, connection with the whole application isn't enough?
Node.js is single-threaded, right. But it is also asynchronous, meaning the single thread fires multiple SQL queries without waiting for the results; the results are only processed via callbacks. Therefore it makes sense to use a connection pool with more than one connection. The database is likely multi-threaded, which makes it possible to parallelize the queries even though they were fired consecutively. There is no guarantee, however, about the order in which the results are processed unless you take extra care.
Addendum about connection release
If you use a connection pool, then you should acquire/release each connection from the pool for each query. There is no big overhead here, since the pool manages the underlying connections.
Get a connection from the pool
Run the query
In the callback, release the connection back to the pool

How to Prevent "MySql has gone away" when using TIdHTTPServer

I have written a web server using Delphi and the Indy TIdHttpServer component. I am managing a pool of TAdoConnection connections to a MySql database. When a request comes in I query my pool for available database connections. If one is not available then a new TAdoConnection is created and added to the pool.
Problems occur when a connection becomes "stale" (i.e. it has not been used in quite some time). I think in this instance the query results in the "MySql has gone away" error.
Does anyone have a method for getting around this? Or would I have to manage it myself by one of the following:
Writing a thread that will periodically "refresh" all connections.
Keeping track of the last active query, and if too old pass up using the connection and instead free it.
Three suggestions:
store a 'last used' timestamp with every pooled connection; when a connection is requested, check whether it is too old, and in that case create a new one
add a validateObject() method which issues a no-op SQL query (such as SELECT 1) to detect whether the connection is still healthy
run a background thread which cleans up the pool at regular intervals: removing idle connections allows the pool size to shrink back to a minimum after peak usage
For some suggestions, see this article about the Apache Commons Pool Framework: http://www.javaworld.com/article/2071834/build-ci-sdlc/pool-resources-using-apache-s-commons-pool-framework.html
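The first two suggestions are language-agnostic, so here is a minimal sketch in Python; the class names, the threshold, and the always-true validation are illustrative, and a real pool would hold TAdoConnection objects and actually run SELECT 1 in the validation step:

```python
import time

MAX_IDLE_SECONDS = 300  # hypothetical staleness threshold

class PooledConn:
    """Wraps a raw connection together with a 'last used' timestamp."""
    def __init__(self, raw):
        self.raw = raw
        self.last_used = time.monotonic()

class Pool:
    def __init__(self, connect):
        self._connect = connect  # factory that opens a real connection
        self._idle = []

    def acquire(self):
        # Suggestion 1: discard connections that have sat idle too long.
        while self._idle:
            conn = self._idle.pop()
            if time.monotonic() - conn.last_used > MAX_IDLE_SECONDS:
                continue  # stale: drop it and try the next one
            # Suggestion 2: validate with a no-op query before handing out.
            if self._validate(conn):
                return conn
        return PooledConn(self._connect())

    def release(self, conn):
        conn.last_used = time.monotonic()
        self._idle.append(conn)

    def _validate(self, conn):
        # A real implementation would issue SELECT 1 and treat any
        # exception as unhealthy; assumed healthy in this sketch.
        return True
```

Fresh connections get handed back out; a connection whose timestamp is too old is silently dropped and replaced on the next acquire().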

Session management with sqlalchemy and pyro

I'm currently using SQLAlchemy with MySQL and Pyro to build a server program. Many clients connect to this server to make requests. The program only provides information from the MySQL database and sometimes performs some calculations.
Is it better to create a session for each client or to use the same session for all clients?
What you want is a scoped_session.
The benefits are (compared to a single shared session between clients):
No locking needed
Transactions supported
Connection pool to database (implicit done by SQLAlchemy)
How to use it
You just create the scoped_session:
Session = scoped_session(some_factory)
and access it in your Pyro methods:
class MyPyroObject():
    def remote_method(self):
        Session.query(MyModel).filter...
Behind the scenes
The code above guarantees that the Session is created and closed as needed. The session object is created as soon as you access it the first time in a thread and will be removed/closed after the thread is finished (ref). As each Pyro client connection has its own thread on the default setting (don't change it!), you will have one session per client.
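A runnable sketch of that thread-local behavior; an in-memory SQLite URL stands in for the MySQL DSN, and the model is omitted since only session identity is being demonstrated:

```python
import threading

from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker

engine = create_engine("sqlite://")  # stand-in for the MySQL DSN
Session = scoped_session(sessionmaker(engine))

# Within one thread, every call returns the same Session object...
assert Session() is Session()

sessions = []

def worker():
    # ...but each thread gets its own, created on first access.
    sessions.append(Session())
    # A real server would call Session.remove() when the request ends.

t1 = threading.Thread(target=worker)
t2 = threading.Thread(target=worker)
t1.start(); t2.start()
t1.join(); t2.join()

assert sessions[0] is not sessions[1]   # one session per thread
```

Since each Pyro client connection runs in its own thread by default, this is exactly the one-session-per-client behavior described above.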
The best I can do is create a new Session for every client request. I hope there is no performance penalty.