MySQL connection pool on Node.js - mysql

If Node is single-threaded, what is the advantage of using a pool to connect to MySQL?
And if pooling is worthwhile, when should I release a connection?
Isn't sharing one persistent connection with the whole application enough?

Node.js is single-threaded, right. But it is also asynchronous, meaning that the single thread fires multiple SQL queries without waiting for the results; the results are only processed via callbacks. It therefore makes sense to use a connection pool with more than one connection. The database server is likely multi-threaded, which makes it possible to execute the queries in parallel even though they were issued sequentially. There is, however, no guarantee about the order in which the results are processed unless you take extra care.
Addendum about connection release
If you use a connection pool, then you should acquire and release a connection from the pool for each query. There is no big overhead here, since the pool manages the underlying connections:
1. Get a connection from the pool (pool.getConnection).
2. Run the query (connection.query).
3. In the callback, release the connection back to the pool (connection.release()), as sketched below.
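To keep the examples on this page in one language, here is a minimal sketch of that acquire/query/release lifecycle using Go's database/sql (the stack discussed in the Go driver question below); the DSN and query are placeholders:

```go
package main

import (
	"context"
	"database/sql"
	"log"

	_ "github.com/go-sql-driver/mysql" // registers the "mysql" driver
)

func main() {
	// *sql.DB is the pool itself, not a single connection.
	db, err := sql.Open("mysql", "user:pass@tcp(127.0.0.1:3306)/test") // placeholder DSN
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
	db.SetMaxOpenConns(10) // cap the pool size

	ctx := context.Background()

	// 1. Get a connection from the pool.
	conn, err := db.Conn(ctx)
	if err != nil {
		log.Fatal(err)
	}
	// 3. Release it back to the pool when done (Close does not disconnect).
	defer conn.Close()

	// 2. Run the query.
	var now string
	if err := conn.QueryRowContext(ctx, "SELECT NOW()").Scan(&now); err != nil {
		log.Fatal(err)
	}
	log.Println("server time:", now)
}
```

Note that most code never needs the explicit acquire step: calling db.Query directly checks a connection out of the pool and returns it automatically.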

Related

Unbuffered result set in MySQL golang driver

I have a large query and I want to process the result row-by-row using the Go MySQL driver. The query is very simple but it returns a huge number of rows.
I have seen mysql_use_result() vs mysql_store_result() at the C-API level. Is there an equivalent way to do an unbuffered query over a TCP connection, such as the one used by the Go MySQL driver?
This concept of buffered/unbuffered queries in database client libraries is a bit misleading, because buffering may actually occur on multiple levels. In general (i.e. not specific to Go or to MySQL), there are several kinds of buffers.
TCP socket buffers. The kernel associates a communication buffer with each socket. By default, the size of this buffer is dynamic and controlled by kernel parameters, though some clients change those defaults to get more control and optimize. The purpose of this buffer is to regulate traffic in the device queues and, eventually, decrease the number of packets on the network.
Communication buffers. Database-oriented protocols are generally based on a framing protocol, meaning that frames are defined to separate the logical packets in the TCP stream. Socket buffers do not guarantee that a complete logical packet (a frame) is available for reading, so extra communication buffers are required to make sure frames are complete when they are processed. They also help reduce the number of system calls. These buffers are managed by the low-level communication layer of the database client library.
Row buffers. Some database clients keep all the rows read from the server in memory and let the application code browse the corresponding data structures. For instance, the PostgreSQL C client (libpq) does this. The MySQL C client leaves the choice to the developer (mysql_use_result vs mysql_store_result).
Anyway, the Go driver you mention is not based on the MySQL C client (it is a pure Go driver). It uses only the first two kinds of buffers (socket buffers and communication buffers); row-level buffering is not provided.
There is one communication buffer per MySQL connection. Its size is a multiple of 4 KB. It will grow dynamically if the frames are large. In the MySQL protocol, each row is sent as a separate packet (in a frame), so the size of the communication buffer is directly linked to the largest rows received/sent by the client.
The consequence is that you can run a query returning a huge number of rows without saturating memory, while still getting good socket performance. With this driver, buffering is never a problem for the developer, whatever the query.
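In practice, with database/sql on top of go-sql-driver/mysql, row-by-row processing is just the normal iteration pattern; each rows.Next() call decodes the next row packet from the communication buffer. A minimal sketch (the DSN and table are hypothetical):

```go
package main

import (
	"database/sql"
	"log"

	_ "github.com/go-sql-driver/mysql"
)

func main() {
	db, err := sql.Open("mysql", "user:pass@tcp(127.0.0.1:3306)/test") // placeholder DSN
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// The driver does not buffer the result set: each rows.Next()
	// reads the next row packet off the wire.
	rows, err := db.Query("SELECT id, name FROM big_table") // hypothetical table
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()

	for rows.Next() {
		var (
			id   int64
			name string
		)
		if err := rows.Scan(&id, &name); err != nil {
			log.Fatal(err)
		}
		// Process one row at a time; memory use stays flat
		// no matter how many rows the query returns.
	}
	if err := rows.Err(); err != nil {
		log.Fatal(err)
	}
}
```

One caveat: the connection stays busy until iteration ends or rows.Close() is called, so it cannot serve other queries in the meantime.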

How to Prevent "MySQL server has gone away" when using TIdHTTPServer

I have written a web server using Delphi and the Indy TIdHttpServer component. I am managing a pool of TAdoConnection connections to a MySql database. When a request comes in I query my pool for available database connections. If one is not available then a new TAdoConnection is created and added to the pool.
Problems occur when a connection becomes "stale" (i.e. it has not been used in quite some time). I think in this case the query fails with the "MySQL server has gone away" error.
Does anyone have a method for getting around this? Or would I have to manage it myself by one of the following:
Writing a thread that will periodically "refresh" all connections.
Keeping track of the last active query and, if the connection is too old, skipping it and freeing it instead.
Three suggestions (see the sketch after this list):
Store a 'last used' timestamp with every pooled connection, and if a requested connection is too old, create a new one instead.
Add a validateObject() method which issues a no-op SQL query to detect whether the connection is still healthy.
Run a background thread which cleans up the pool at regular intervals: removing idle connections lets the pool shrink back to a minimum after peak usage.
For more on these patterns, see this article about the Apache Commons Pool framework: http://www.javaworld.com/article/2071834/build-ci-sdlc/pool-resources-using-apache-s-commons-pool-framework.html
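The question is about Delphi/ADO, but the three suggestions map one-to-one onto knobs that some pools already expose. As an illustration in Go (the language used for examples on this page), the database/sql pool implements all three; the DSN and durations are illustrative, not recommendations:

```go
package main

import (
	"database/sql"
	"log"
	"time"

	_ "github.com/go-sql-driver/mysql"
)

func main() {
	db, err := sql.Open("mysql", "user:pass@tcp(dbhost:3306)/app") // placeholder DSN
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	db.SetConnMaxLifetime(5 * time.Minute) // suggestion 1: retire connections by age
	db.SetConnMaxIdleTime(time.Minute)     // suggestion 3: shrink the pool after peak usage
	db.SetMaxIdleConns(2)

	// Suggestion 2: the validateObject() equivalent, a no-op round trip
	// that proves the connection is healthy before doing real work.
	if err := db.Ping(); err != nil {
		log.Fatal(err)
	}
}
```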

Producer Consumer setup: How to handle Database Connections?

I'm building my first single-producer/single-consumer app in which the consumer takes items off the queue and stores them in a MySQL database.
Previously, when it was a single thread app, I would open a connection to the DB, send the query, close the connection, and repeat every time new info came in.
With a producer-consumer setup, what is the better way to handle the DB connection? Should I open it once before starting the consumer loop (I can't see a problem with this, but I'm sure that one of you fine folks will point it out if there is one)? Or should I open and close the DB connection on each iteration of the loop (seems like a waste of time and resources)?
This software runs on approximately 30 small linux computers and all of them talk to the same database. I don't see 30 simultaneous connections being an issue, but I'd love to hear your thoughts.
Apologies if this has been covered, I couldn't find it anywhere. If it has, a link would be fantastic. Thanks!
EDIT FOR CLARITY
My main focus here is the speed of the consumer thread. The whole reason for switching from single- to multi-threaded was because the single-threaded version was missing incoming information because it was busy trying to connect to the database. Given that the producer thread is expected to start dumping info into the buffer at quite a high rate, and given that the buffer will be limited in size, it is very important that the consumer work through the buffer as quickly as possible while remaining stable.
Your MySQL server shouldn't have any problems handling hundreds of connections, if not thousands.
On each of your consumers you should set up a connection pool and use it from your consumer. If you consume the messages in a single thread (per application), the pool only needs one connection, but it is also fine to consume in several parallel threads that each use one connection.
The reason for using a connection pool is that it will handle reconnection and keep-alive for you. Just ask it for a connection and it promises that the connection will work (it does this by running a small query against the database). If you don't use a connection for a while and it gets terminated, the pool will just create a new one.
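Given the edit about consumer speed, the key point is that the consumer never pays connection-setup cost on the hot path. A minimal sketch in Go, with a channel as the bounded buffer (the Item type, the items table, and the DSN are made up for illustration):

```go
package main

import (
	"database/sql"
	"log"

	_ "github.com/go-sql-driver/mysql"
)

// Item stands in for whatever the producer captures.
type Item struct{ Value string }

func main() {
	db, err := sql.Open("mysql", "user:pass@tcp(dbhost:3306)/app") // placeholder DSN
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
	db.SetMaxOpenConns(1) // a single consumer needs only one connection

	queue := make(chan Item, 1024) // the bounded buffer between the two threads

	// Producer stand-in: in the real app this is the incoming data feed.
	go func() {
		queue <- Item{Value: "example"}
		close(queue)
	}()

	// Consumer loop: the pool hands out a live connection for each insert,
	// transparently replacing it if the idle one was dropped.
	for item := range queue {
		if _, err := db.Exec("INSERT INTO items (value) VALUES (?)", item.Value); err != nil {
			log.Println("insert failed:", err) // decide: retry, requeue, or drop
		}
	}
}
```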

ColdFusion Query Connections

When you run a basic ColdFusion query, when does ColdFusion actually log out of the database? When does the query actually close? My understanding is that when you have multiple users being authenticated at the same time, it maintains its connection and uses a new thread for each new user. But I am struggling to find any documentation on when it actually closes. Is it when the page has finished rendering, or directly after the query execution?
Any help on this matter would be greatly appreciated. We are running ColdFusion 9 Standard with SQL Server 2008.
My understanding is that, by default, ColdFusion won't log out of the database at any particular time. It uses a connection pool, so when you make a query, ColdFusion takes a connection from its pool (creating one if none is present), executes the query, then hands the connection back to the pool, ready for more requests. Connections are eventually closed when they have been inactive for long enough (20 minutes by default, set by the Timeout setting in the ColdFusion DataSource admin).
I think the strict answer to your question is: 20 minutes after the last use of that connection, though that is hard to observe directly.

Reconnecting to MySQL after connection timeout

As standard behaviour, a MySQL connection is closed after a stated number of hours of inactivity (wait_timeout, 8 by default). To reconnect to the MySQL server after detecting such a connection loss, I simply do connection = DriverManager.getConnection(url, user, password); again.
I am not using a connection pool, and since this trick has not been mentioned in previous connection-loss-related posts, I wonder whether my code will cause any side effects later. (I ask because, testing this code after the above scenario, I found the session listener is not called after the session.invalidate() call.)
You'll lose temporary tables and session settings if your connection drops. It sounds like a connection pool would be useful in your situation.
Depending on how you handle the connection object(s), this can create a small client-side memory leak for the connection object that was lost. But this effect will probably be so small that you will never see any problems from it.
To minimize this risk, you can do something as simple as issuing SELECT 1 on your connection every few minutes during idle time, so that the server still considers the connection active (unless your client dies off completely).
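A minimal sketch of that keep-alive idea. The question is about plain JDBC, but the pattern is language-agnostic; shown here in Go with database/sql (the language used for examples on this page), where the more common fix is simply to cap the connection lifetime below wait_timeout. The DSN and intervals are illustrative:

```go
package main

import (
	"database/sql"
	"log"
	"time"

	_ "github.com/go-sql-driver/mysql"
)

func main() {
	db, err := sql.Open("mysql", "user:pass@tcp(dbhost:3306)/app") // placeholder DSN
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Pool-style alternative: recycle connections well before
	// MySQL's wait_timeout (8 hours by default) can fire.
	db.SetConnMaxLifetime(4 * time.Hour)

	// The "SELECT 1 every few minutes" idea from the answer above.
	ticker := time.NewTicker(5 * time.Minute)
	defer ticker.Stop()
	for range ticker.C {
		var one int
		if err := db.QueryRow("SELECT 1").Scan(&one); err != nil {
			log.Println("keep-alive failed; a fresh connection will be opened:", err)
		}
	}
}
```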