MySQL too many connections - too many sleeping threads?

I am getting this error from a MySQL database:
error connecting: Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached.
I have a website and a console application talking to the database, but I certainly don't have 100 concurrent connections! I ran a SHOW PROCESSLIST query and found about 10 connections with a command of Sleep and a time of about 16,000 seconds. There weren't 100 connections.
I am using the SubSonic data provider to talk to the database, and I believe it closes database connections immediately rather than leaving them hanging, so it shouldn't be the culprit.
I restarted the MySQL server and the console application that talks to the database, and everything seems to be working OK, but naturally I can't have either the console or the website application crashing like this. Looking at the error log, this error keeps coming up.
Please could you advise on anything I can do to find out what is going on and how I can fix it?
EDIT: I have looked into this more and it appears to be a SubSonic/MySQL issue. I have tried the recommended fix in the link below by closing the connection in a Finally block, but nothing closes the connection...
Dim sp As StoredProcedure = SPs.GetLastGPSDataForAllVehicles(customerID)
Dim reader As IDataReader = Nothing
Try
    reader = sp.GetReader
    MyBase.Load(reader)
Finally
    ' Guard against GetReader having thrown before reader was assigned
    If reader IsNot Nothing Then
        reader.Dispose()
        reader = Nothing
    End If
    ' Explicitly close the underlying connection SubSonic opened
    sp.Command.ToDbCommand().Connection.Close()
End Try
I have no idea how to force the connection to close.
Thanks a lot.

Related

MySQL Too many connections: Django SQLAlchemy

I have a Django REST Framework application in which we are using the SQLAlchemy library for the MySQL connection.
engine = create_engine(
    'mysql+mysqldb://username:password@hostaddress/DBname',
    pool_recycle=1800,
    connect_args={'connect_timeout': 1800},
    pool_size=10,
    max_overflow=10,
    pool_pre_ping=True,
)
connection = engine.connect()
As API usage increases, MySQL creates new connections and the threads_connected count keeps growing. After reaching the max value it throws a Too many connections error. In SHOW PROCESSLIST, many processes are in sleep mode. If we restart the app, all the connections are reset. The following chart shows the number of connections vs. time. How can I fix this issue?
You must close connections after you've finished using them, because if you don't, the connection stays open until the web server closes it, which can take a long time.
The best practice is to use a connection pool, because opening and closing connections is expensive and hurts performance. But even with a connection pool, you must release each connection once you've used it, as sketched below.
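Here is a minimal sketch of that discipline, reusing the question's engine settings (the query is just a placeholder): scoping each checkout to a with block guarantees the connection is returned to the pool even if the query raises, so sleeping connections can't pile up.
from sqlalchemy import create_engine, text

engine = create_engine(
    'mysql+mysqldb://username:password@hostaddress/DBname',
    pool_recycle=1800, pool_size=10, max_overflow=10, pool_pre_ping=True,
)

def run_query():
    # Checked out here; returned to the pool when the block exits,
    # whether the query succeeded or raised.
    with engine.connect() as connection:
        return connection.execute(text("SELECT 1")).fetchall()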

Kill multiple connections at a time

I am using root as the username.
My program refreshes every 5 seconds.
What it does is query a MySQL table and display the data.
The problem is that every 5 seconds another MySQL connection is added, so when the limit is reached it gives a "Too many connections" error.
Is it possible to kill the previous connection, since it is no longer used?
Here is my code for opening a connection.
connectionPool = connectionPool.getConnectionPool("root", "*****", "");
This is normal behavior if you are using a connection pool. When your job is over, be sure to free the connection instance, or close all of the pool's connections when your code execution is done.
When you are done with a connection, you need to close it; this returns the connection to the pool, as in the sketch below.
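Since the asker's connection-pool class isn't shown, here is the same checkout/return discipline sketched with mysql-connector-python's built-in pooling (pool settings and the table name are illustrative):
import mysql.connector.pooling

pool = mysql.connector.pooling.MySQLConnectionPool(
    pool_name="app_pool", pool_size=5,
    user="root", password="*****", database="mydb")

def refresh():
    conn = pool.get_connection()  # checks a connection out of the pool
    try:
        cur = conn.cursor()
        cur.execute("SELECT * FROM readings")
        return cur.fetchall()
    finally:
        # Returns the connection to the pool instead of leaking
        # a new one every 5 seconds.
        conn.close()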

Increase the number of connections in my MySQL server

I have applications that connect to a remote server (MySQL 5.5 on Windows Server 2012). At first I started receiving the "too many connections" message, which I solved by increasing the max_connections value in my.ini to 500. Then I started getting a "can't create new thread" message, so I decreased the timeouts to avoid idle connections holding a socket, which didn't completely work. Now I get odd messages like "file not found"; as soon as I restart the service the messages stop and everything works correctly.
The problem occurs when the server reaches around 170 simultaneous connections.
Is there some configuration I'm missing? I really don't know what info you need to give me a hint to fix this. I mean, there are servers that accept a lot more simultaneous connections, right? What am I missing?
RAM and CPU usage don't exceed 35-40% at max connections (170).
Edit: The error occurs in two "places": when running a query, or at the attempt to connect; it's as if the MySQL service rejects the attempt. The client app is written in VB6 (ODBC connector). The app opens, executes, and closes the connection.
Note: I have full control over client app and server config.
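One way to see how close the server actually gets to its limit is to sample the relevant status variables over time. A minimal sketch in Python; the host and credentials are illustrative:
import time
import mysql.connector

conn = mysql.connector.connect(host="remote-server", user="monitor",
                               password="secret")
cur = conn.cursor()
while True:
    # SHOW VARIABLES / SHOW STATUS return (name, value) rows.
    cur.execute("SHOW VARIABLES LIKE 'max_connections'")
    limit = int(cur.fetchone()[1])
    cur.execute("SHOW STATUS LIKE 'Threads_connected'")
    used = int(cur.fetchone()[1])
    print(f"{used}/{limit} connections in use")
    time.sleep(5)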

Detect when DB server goes down during JDBC query

My application makes queries to MySQL using JDBC. Sometimes, while a query is running, connectivity to the server is lost. Rather than detecting this and throwing an exception, the code hangs until the TCP connection finally times out (which takes over 10 minutes).
Setting a query timeout doesn't work. If the DB server stays up, this times out queries, but it does nothing if the server goes down before the timeout triggers.
Setting socketTimeout in the MySQL connection string, or invoking .withNetworkTimeout on the Connection object, sort of works. This does force the connection to time out if no response is received within the specified timeout; however, it also kills queries that run longer than the timeout, even when the DB server is up. I want to fail fast if the DB server goes down, but still be able to run long queries.
If I could get at the socket's keepalive settings, I could set the interval/number of probes lower, and that would solve the problem, but I can't see any way to do that with the MySQL JDBC driver.
How can I cause queries to fail quickly when the DB server goes down, while still being able to run long queries?
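For reference, the keepalive settings in question are ordinary TCP socket options. This is what tuning them looks like at the OS level, sketched with Python's socket module (Linux option names; the values are illustrative, and the MySQL JDBC driver does not expose these directly):
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)    # enable keepalive probes
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 30)  # first probe after 30s idle
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 10) # then one every 10s
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 3)    # dead after 3 failed probes
sock.connect(("dbhost", 3306))
With those numbers, a dead peer is detected in roughly 30 + 3×10 = 60 seconds rather than the 10-plus minutes the question describes, while a long-running query on a healthy server is unaffected, since the probes succeed.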

MySQL giving "read ECONNRESET" error after idle time on node.js server

I'm running a Node server connecting to MySQL via the node-mysql module. Connecting to and querying MySQL works great initially without any errors; however, the first query after leaving the Node server idle for a couple of hours results in an error. The error is the familiar read ECONNRESET, coming from the depths of the node-mysql module.
A stack trace (note that the first three entries of the trace belong to my app's error reporting code):
Error
at exports.Error.utils.createClass.init (D:\home\site\wwwroot\errors.js:180:16)
at new newclass (D:\home\site\wwwroot\utils.js:68:14)
at Query._callback (D:\home\site\wwwroot\db.js:281:21)
at Query.Sequence.end (D:\home\site\wwwroot\node_modules\mysql\lib\protocol\sequences\Sequence.js:78:24)
at Protocol.handleNetworkError (D:\home\site\wwwroot\node_modules\mysql\lib\protocol\Protocol.js:271:14)
at PoolConnection.Connection._handleNetworkError (D:\home\site\wwwroot\node_modules\mysql\lib\Connection.js:269:18)
at Socket.EventEmitter.emit (events.js:95:17)
at net.js:441:14
at process._tickCallback (node.js:415:13)
This error happens both with my cloud Node and MySQL servers and with a local setup of both.
My questions:
Does this problem appear to be a disconnection of Node's connection to my MySQL server(s), perhaps due to a connection lifetime limitation?
When using connection pools, node-mysql is supposed to gracefully handle disconnections and prune them from the pool. Is it not aware of the disconnect until I make a query, thus making the error unavoidable?
Considering that I see the "read ECONNRESET" error a lot in other StackOverflow posts, should I be looking elsewhere from MySQL to diagnose the problem?
Update: After more browsing, I think my issue is a duplicate of this one. It appears his connection is disconnecting as well, but no one has suggested how to keep the connection alive or how to address the error, other than failing on the first query back.
I reached out to the node-mysql folks on their GitHub page and got some firm answers.
MySQL does indeed prune idle connections. There's a MySQL variable wait_timeout that sets the number of seconds before an idle connection is closed; the default is 8 hours (28800 seconds), and we can set it much higher. Use show variables like 'wait_timeout'; to view your current setting and set wait_timeout = 28800; (substituting whatever number of seconds you want) to change it.
According to this issue, node-mysql doesn't prune pool connections after these sorts of disconnections. The module developers recommended using a heartbeat to keep the connection alive, such as calling SELECT 1; on an interval (sketched below). They also recommended using the node-pool module and its idleTimeoutMillis option to automatically prune idle connections.
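The heartbeat pattern itself is language-agnostic; here is a minimal sketch in Python (connection parameters and the interval are illustrative; a node-mysql user would do the equivalent with setInterval):
import threading
import mysql.connector

conn = mysql.connector.connect(host="dbhost", user="app",
                               password="secret", database="mydb")

def heartbeat(interval=60):
    cur = conn.cursor()
    cur.execute("SELECT 1")  # a cheap round-trip resets the server's idle timer
    cur.fetchall()
    cur.close()
    timer = threading.Timer(interval, heartbeat, args=(interval,))
    timer.daemon = True      # don't keep the process alive just for pings
    timer.start()

heartbeat()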
If this happens when establishing a single reused connection, it can be avoided by establishing a connection pool instead.
For example, if you're doing something like this...
// createConnection() returns the connection object; connect() does not,
// so keep the reference from createConnection()
var db = require('mysql').createConnection({...});
db.connect(function(err){});
do this instead...
var db = require('mysql')
.createPool({...});
Does this problem appear to be a disconnection of Node's connection to my MySQL server(s), perhaps due to a connection lifetime limitation?
Yes. The server has closed its end of the connection.
When using connection pools, node-mysql is supposed to gracefully handle disconnections and prune them from the pool. Is it not aware of the disconnect until I make a query, thus making the error unavoidable?
Correct, but it should handle the error internally, not pass it back to you. This appears to be a bug in node-mysql. Report it.
Considering that I see the "read ECONNRESET" error a lot in other StackOverflow posts, should I be looking elsewhere from MySQL to diagnose the problem?
It is either a bug in the node-mysql connection pool implementation, or else you haven't configured it properly to detect failures.
I have also been facing the same issue. Apparently it was happening because a backend process had been triggered on a table that my API was referring to.
This caused the table to go into a lock-wait state, and my query failed with a connection reset. Though I'm still wondering why I didn't receive a lock-wait error instead.
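For anyone debugging a similar lock-wait situation on MySQL 5.x, the information_schema tables show who is blocking whom. A sketch; connection parameters are illustrative, and in MySQL 8.0 these views moved to performance_schema:
import mysql.connector

conn = mysql.connector.connect(host="dbhost", user="app", password="secret")
cur = conn.cursor()
cur.execute("""
    SELECT r.trx_id AS waiting_trx, r.trx_query AS waiting_query,
           b.trx_id AS blocking_trx, b.trx_query AS blocking_query
    FROM information_schema.innodb_lock_waits w
    JOIN information_schema.innodb_trx r ON r.trx_id = w.requesting_trx_id
    JOIN information_schema.innodb_trx b ON b.trx_id = w.blocking_trx_id
""")
for waiting_trx, waiting_query, blocking_trx, blocking_query in cur.fetchall():
    print(f"trx {waiting_trx} ({waiting_query}) is blocked by "
          f"trx {blocking_trx} ({blocking_query})")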