Rails: Does every HTTP request create a new connection pool? - mysql

I am reading this article https://polycrystal.org/posts/2012-05-25-active-record-connection-pool-fairness.html and it states that every HTTP request creates a new connection pool. Is that true?
If it is true, then what if an HTTP request creates two threads that need to access the database? Will those two threads create two separate connection pools again, or will they use the connection pool created by the HTTP request?
Thanks,

Not every request, but every worker process. The whole point of connection pooling is to eliminate the need to establish a DB connection on every request.
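A quick sketch of that second part of the question (assuming it runs inside a Rails app where ActiveRecord is already configured): threads spawned within one worker process all draw from that process's single pool rather than creating pools of their own.

# Both threads share the worker process's one ActiveRecord pool
# (its size comes from the `pool:` setting in database.yml).
threads = 2.times.map do
  Thread.new do
    ActiveRecord::Base.connection_pool.with_connection do |conn|
      conn.execute("SELECT 1")  # each thread holds its own checked-out connection
    end                          # ...and checks it back in to the shared pool here
  end
end
threads.each(&:join)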

Related

Every request from a Django app increasing MySQL's number of connections

I have a project built using Django 1.11, and a request sent from my admin view creates a new DB connection on every request (using the Django development server, runserver).
But the same thing using Gunicorn as the server does not increase the number of connections in the DB; it reuses the connection that was created on the first request.
In my database settings, CONN_MAX_AGE is set to 300, which is 5 minutes. I am sending the second request within 5 minutes, so it is supposed to reuse the connection that was created on the first request.
Any idea why, with runserver, Django is creating a new DB connection on every request and not following Django's persistent connections behavior?
From the docs:
The development server creates a new thread for each request it
handles, negating the effect of persistent connections. Don’t enable
them during development.

When to call close on CloseableHttpClient instances

Following the documentation at https://hc.apache.org/httpcomponents-client-ga/tutorial/html/connmgmt.html
2.3.4. Connection manager shutdown
When an HttpClient instance is no longer needed and is about to go out of scope it is important to shut down its connection manager to ensure that all connections kept alive by the manager get closed and system resources allocated by those connections are released.
CloseableHttpClient httpClient = <...>
httpClient.close();
My confusion comes from conflating the instance going out of scope with needing to shut down the connection manager.
In my use case, I am using the PoolingConnection, so I want to keep the connections open but, of course, return them to the pool.
In my client code I have
ResponseHandler<Integer> rh = new ResponseHandler<Integer>()
.... elided ....
CloseableHttpClient httpclient = this.httpClientBuilder.build();
Integer statusCode = httpclient.execute(httpPost, rh);
My understanding from the docs is that the use of ResponseHandler takes care of returning the lease:
When using a ResponseHandler, HttpClient will automatically take care of ensuring release of the connection back to the connection manager
Your understanding is correct. One needs to shut down the connection manager and the underlying connection pool only once it is no longer needed, in order to ensure immediate shutdown and deallocation of the persistent connections kept alive in the pool.
ResponseHandler ensures that the connection leased from the pool gets released back to the manager no matter the outcome of the request execution, but it is up to the manager either to close the connection or to keep it alive for reuse by subsequent requests.

Node.js + MySQL connection pooling required?

With the node-mysql module, there are two connection options: a single connection and a connection pool. What is the best way to set up a connection to a MySQL database: using a single global connection for all requests, or creating a pool of connections and taking one from the pool for each request? Or is there a better way to do this? Will I run into problems using just a single shared connection for all requests?
Maintaining a single connection for the whole app might be a little bit tricky.
Normally, you want to open a connection to your MySQL instance and wait for it to be established.
From this point you can start using the database (maybe start a HTTP(S) server, process the requests and query the database as needed.)
The problem is when the connection gets destroyed (e.g. due to a network error).
Since you're using one connection for the whole application, you must reconnect to MySQL and somehow queue all queries while the connection is being re-established. It's relatively hard to implement such functionality properly.
node-mysql has a built-in pooler. A pooler creates a few connections and keeps them in a pool. Whenever you want to close a connection obtained from the pool, the pooler returns it to the pool instead of actually closing it. Connections in the pool can be reused on subsequent open calls.
IMO, using a connection pool is obviously simpler and shouldn't affect performance much.

MySQL - Persistent connection vs connection pooling

In order to avoid the overhead of establishing a new connection each time a query needs to be fired against MySQL, there are two options available:
Persistent connections, whereby when a new connection is requested, a check is made to see if an 'identical' connection is already open and, if so, it is used.
Connection pooling, whereby the client maintains a pool of connections, so that each thread that needs a connection checks one out from the pool and returns it to the pool when done.
So, if I have a multi-threaded server application expected to handle thousands of requests per second, and each thread needs to fire a query against the database, which is the better option?
From my understanding, with persistent connections, all the threads in my application will try to use the same persistent connection to the database because they are all requesting identical connections. So it is one connection shared across multiple application threads; as a result, the requests will soon block on the database side.
If I use a connection pooling mechanism, all application threads will share a pool of connections, so there is less possibility of a blocking request. However, with connection pooling, should an application thread wait to acquire a connection from the pool, or should it send requests over the connections in the pool anyway in a round-robin manner, and let the queuing, if any, happen on the database?
Having persistent connections does not imply that all threads use the same connection. It just "says" that you keep the connection open (as opposed to opening a new connection each time you need one). Opening a connection is an expensive operation, so, in general, you try to avoid opening connections more often than necessary.
This is the reason why multithreaded applications often use connection pools. The pool takes care of opening and closing connections and every thread that needs a connection requests one from the pool. It is important to take care that the thread returns the connection as soon as possible to the pool, so that another thread can use it.
If your application has only a few long-running threads that need connections, you can also open a connection for each thread and keep it open.
Using just one connection (as you described) is equivalent to a connection pool with a maximum size of one. Sooner or later this will be your bottleneck, as all threads will have to wait for the connection. It could be an option for serializing the database operations (performing them in a certain order), although there are better ways to ensure serialization.
Update: The newer X Protocol supports asynchronous connections, and newer drivers like Node's can utilize this.
Regarding your question about should the application server wait for a connection, the answer is yes.
MySQL connections are blocking. When you issue a request to the MySQL server over a connection, the connection will wait, idle, until a response is received from the server.
There is no way to send two requests on the same connection and see which returns first. You can only send one request at a time.
So, generally, a single thread in a connection pool pairs one client-side connection (in your case, the application server is the client) with one server-side connection (the database).
Your application should wait for an available connection from the pool, allowing the pool to grow when needed and to shrink back to your default number of connections when it is less busy.
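To make the checkout-and-wait behaviour concrete, here is a minimal, hypothetical pool sketch in Ruby (TinyPool and the mysql2-based setup below are illustrative, not a production implementation): a thread asking for a connection when none is free simply blocks until another thread checks one back in.

require "mysql2"  # any blocking MySQL driver behaves the same way

class TinyPool
  def initialize(size:, **db_config)
    @connections = SizedQueue.new(size)
    size.times { @connections.push(Mysql2::Client.new(**db_config)) }
  end

  # Check a connection out, yield it, and always check it back in.
  def with_connection
    conn = @connections.pop   # blocks while the pool is exhausted
    yield conn
  ensure
    @connections.push(conn) if conn
  end
end

pool = TinyPool.new(size: 5, host: "127.0.0.1", username: "app",
                    password: "secret", database: "app_db")

threads = 20.times.map do
  Thread.new { pool.with_connection { |conn| conn.query("SELECT 1") } }
end
threads.each(&:join)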

Does Rails create any connection pools to MySQL? Is it a single-threaded design?

How are connections to MySQL handled in Rails 3?
Do multiple connections to the website share the same MySQL connection, or does each request take a connection from a connection pool and then release it once the request is done with MySQL?
If there are 10 front end servers all hitting a single db server, are there any issues here?
I'm using Phusion Passenger, if that affects anything.
The ConnectionPool documentation answers this itself:
A connection pool synchronizes thread access to a limited number of
database connections. The basic idea is that each thread checks out a
database connection from the pool, uses that connection, and checks
the connection back in. ConnectionPool is completely thread-safe, and
will ensure that a connection cannot be used by two threads at the
same time, as long as ConnectionPool’s contract is correctly followed.
It will also handle cases in which there are more threads than
connections: if all connections have been checked out, and a thread
tries to checkout a connection anyway, then ConnectionPool will wait
until some other thread has checked in a connection.
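In practice, that contract looks roughly like this (a small sketch, assuming it runs where ActiveRecord is already configured; the SQL is just a placeholder, and the pool size comes from the `pool:` setting in database.yml):

pool = ActiveRecord::Base.connection_pool

# An explicit checkout must be paired with an explicit checkin.
conn = pool.checkout
conn.execute("SELECT 1")
pool.checkin(conn)

# with_connection does the checkout/checkin pairing for you. If every
# connection is already checked out, the call blocks, and it raises
# ActiveRecord::ConnectionTimeoutError if none is checked back in before
# the configured wait time elapses.
pool.with_connection { |c| c.execute("SELECT 1") }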