Do MySQL connections closed from JDBC stay open for some time?

I get the following error when accessing a MySQL database from JDBC:
java.sql.SQLNonTransientConnectionException: Too many connections
At the same time I am monitoring my connections. I added a counter that counts every opening and closing. The error occurs when I get to 380 opened and closed connections within 3 minutes.
Is it possible that it takes some time for MySQL to actually close the connection, so that there are still too many open even though I have sent a command to close them?

I am only assuming certain points that might be the reason.
MySQL connections are maintained by the connection manager, so once a connection is released, the manager decides whether to kill that thread or return it to the pool.
In some cases, if a ResultSet is not closed after retrieving data and the connection is closed at that moment, returning it to the pool can incur some latency.
These are two points that I think might cause this, but I am not sure whether they are correct.
There could be other reasons that I am not aware of.
Hope this gives you some idea.
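If the counter really does see every connection closed, the leak is often a ResultSet or Statement that is never closed, or a close that gets skipped when an exception is thrown. A minimal JDBC sketch using try-with-resources (the URL, credentials, and query are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class CloseExample {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:mysql://localhost:3306/test"; // placeholder URL
        // try-with-resources closes the ResultSet, Statement, and Connection
        // in reverse order, even if an exception is thrown mid-query
        try (Connection con = DriverManager.getConnection(url, "user", "password");
             PreparedStatement ps = con.prepareStatement("SELECT 1");
             ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                System.out.println(rs.getInt(1));
            }
        } // the connection is actually closed (or returned to a pool) here
    }
}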

Related

Regarding MySQL Aborted connection

I'm looking into aborted connection -
2022-11-21T20:10:43.215738Z 640870 [Note] Aborted connection 640870 to db: '' user: '' host: '10.0.0.**' (Got timeout reading communication packets)
My understanding is that I need to figure out whether it is an interactive or not connection, and increase wait_timeout (or interactive_timeout) accordingly. If it has no effect, then I'll need to adjust net_read_timeout or net_write_timeout and see.
I'd like to ask:
Is there a meta table that I can query for the connection type (interactive or not)?
There are how-tos on the internet on adjusting wait_timeout (or interactive_timeout), and all of them have rebooting the database as the last step. Is that really required? Given that immediate effect is not required, the sessions are supposed to come and go, and new sessions will pick up the new value (after the system value is set), I suppose if there is a way to track how many connections are left with the old values, then it will be OK?
Finally, can someone suggest any blog (strategy) on handling aborted connections or adjusting the timeout values?
Thank you!
RDS MySQL version 5.7
There is only one client that sets the interactive flag by default: the mysql command-line client. All other client tools and connectors do not set this flag by default. You can choose to set the interactive flag, because it's a flag in the MySQL client API mysql_real_connect(). So you would know if you did it. In some connectors, you aren't calling the MySQL client API directly, and it isn't even an option to set this flag.
So for practical purposes, you can ignore the difference between wait_timeout and interactive_timeout, unless you're trying to tune the timeout of the mysql client in a shell window.
You should never need to restart the MySQL Server. The timeout means the server closed the session after there had been no activity from the client for wait_timeout seconds. The default value is 28800, which is 8 hours.
The proper way of handling this in application code is to catch exceptions, reconnect if necessary, and then retry whatever query was interrupted.
Some connectors have an auto-reconnect option. Auto-reconnect does not automatically retry the query.
In many applications, you are borrowing a connection from a connection pool, and the connection pool manager is supposed to test the connection before returning it to the caller. For example running SELECT 1; is a common test. The action of testing the connection causes a reconnect if the connection was not used for 8 hours.
If you don't use a connection pool (for example if your client program is PHP, which doesn't support connection pools as far as I know), then your client opens a new connection on request, so naturally it can't be idle for 8 hours if it's a new connection. Then the connection is closed as the request finishes, and presumably this request lasts less than 8 hours.
So this comes up only if your client opens a long-lived MySQL connection that is inactive for periods of 8 hours or more. In such cases, it's your responsibility to test the connection and reopen it if necessary before running a query.
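A minimal sketch of that "test the connection and reopen it if necessary" idea in JDBC terms (the class name, URL, and credentials are placeholders; Connection.isValid() performs a lightweight check against the server):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class ConnectionKeeper {
    private static final String URL = "jdbc:mysql://localhost:3306/test"; // placeholder

    private Connection connection;

    // Return a usable connection, reopening it if the server has silently
    // closed its end (e.g. after wait_timeout expired during a long idle period).
    public Connection getConnection() throws SQLException {
        if (connection == null || !connection.isValid(2 /* seconds */)) {
            connection = DriverManager.getConnection(URL, "user", "password");
        }
        return connection;
    }
}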

Kill multiple connections at a time

I am using root as username.
My program will refresh every 5 seconds.
What it does is query a MySQL table and display the data.
The problem is that after every 5 seconds another MySQL connection is added, so it eventually gives a "Too many connections" error when it reaches the limit.
Is it possible to kill the previous connection, since it is unused already?
Here is my code on opening a connection.
connectionPool = connectionPool.getConnectionPool("root", "*****", "");
This is normal behavior if you are using a connection pool. When your job is over, be sure that you free the connection instance, or close all pool connections when your code execution is done.
When you are done with a connection, you need to close it. This will return the connection to the pool.
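As an illustration of the "close it when you are done" advice, here is a minimal sketch assuming a javax.sql.DataSource-backed pool (the table name is a placeholder standing in for the asker's query, and refresh() stands in for the 5-second job):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

public class RefreshTask {
    private final DataSource pool; // any pooled DataSource (c3p0, DBCP, HikariCP, ...)

    public RefreshTask(DataSource pool) {
        this.pool = pool;
    }

    // Called every 5 seconds; the connection is borrowed and given back each time.
    public void refresh() throws SQLException {
        try (Connection con = pool.getConnection();
             PreparedStatement ps = con.prepareStatement("SELECT * FROM my_table"); // placeholder table
             ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                // display the data ...
            }
        } // close() here returns the connection to the pool instead of leaking it
    }
}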

HostGator MySQL connection times out after 10 seconds or so -- how do I extend this?

For some reason, when I open a connection to the Percona MySQL database on my HostGator website, after fetching the query it will disconnect/close the connection about 10 seconds later.
I typically wouldn't care, but HeidiSQL freezes up, preventing exporting or sorting the returned rows with its UI unless I connect again.
Any thoughts on making the connection last longer? is it something I can do myself, or will it require a dedicated server or some upgrade? (I'm currently on a shared one). Thanks!
Sounds like it may be the "wait" timeout on the MySQL connection.
SHOW VARIABLES LIKE 'wait_timeout'
That's the amount of time (in seconds) that MySQL will leave the session (the database connection) open while it's idle, waiting for another statement to be issued. After this amount of time expires, MySQL can close the connection.
You should be able to change this for a session; for example, to change the timeout to 5 minutes:
SET wait_timeout = 300
Verify the setting with the SHOW VARIABLES statement again.
NOTE: This is per connection. It only affects the current session. Every new connection will inherit its own wait_timeout value from the global setting.
(This is only a guess. There's insufficient information in the question to make a precise diagnosis. It could be something other than the MySQL server that's closing the database connection, e.g. it could be your connection pool settings, if you are using a connection pool.)
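If the connection is opened programmatically rather than from HeidiSQL, the same statements can be issued right after connecting. A minimal JDBC sketch, with a placeholder URL and credentials:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class WaitTimeoutDemo {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:mysql://localhost:3306/test"; // placeholder
        try (Connection con = DriverManager.getConnection(url, "user", "password");
             Statement st = con.createStatement()) {
            // Raise the idle timeout for this session only (other sessions are unaffected)
            st.execute("SET SESSION wait_timeout = 300");
            // Verify the setting took effect
            try (ResultSet rs = st.executeQuery("SHOW VARIABLES LIKE 'wait_timeout'")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1) + " = " + rs.getString(2));
                }
            }
        }
    }
}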

MySQL giving "read ECONNRESET" error after idle time on node.js server

I'm running a Node server connecting to MySQL via the node-mysql module. Connecting to and querying MySQL works great initially without any errors, however, the first query after leaving the Node server idle for a couple hours results in an error. The error is the familiar read ECONNRESET, coming from the depths of the node-mysql module.
A stack trace (note that the three entries of the trace belong to my app's error reporting code):
Error
at exports.Error.utils.createClass.init (D:\home\site\wwwroot\errors.js:180:16)
at new newclass (D:\home\site\wwwroot\utils.js:68:14)
at Query._callback (D:\home\site\wwwroot\db.js:281:21)
at Query.Sequence.end (D:\home\site\wwwroot\node_modules\mysql\lib\protocol\sequences\Sequence.js:78:24)
at Protocol.handleNetworkError (D:\home\site\wwwroot\node_modules\mysql\lib\protocol\Protocol.js:271:14)
at PoolConnection.Connection._handleNetworkError (D:\home\site\wwwroot\node_modules\mysql\lib\Connection.js:269:18)
at Socket.EventEmitter.emit (events.js:95:17)
at net.js:441:14
at process._tickCallback (node.js:415:13)
This error happens both on my cloud Node server and MySQL server as well as a local setup of both.
My questions:
Does this problem appear to be a disconnection of Node's connection to my MySQL server(s), perhaps due to a connection lifetime limitation?
When using connection pools, node-mysql is supposed to gracefully handle disconnections and prune them from the pool. Is it not aware of the disconnect until I make a query, thus making the error unavoidable?
Considering that I see the "read ECONNRESET" error a lot in other StackOverflow posts, should I be looking elsewhere from MySQL to diagnose the problem?
Update: After more browsing, I think my issue is a duplicate of this one. It appears his connection is disconnecting as well, but no one has suggested how to keep the connection alive or how to address the error outside of failing on the first query back.
I reached out to the node-mysql folks on their Github page and got some firm answers.
MySQL does indeed prune idle connections. There's a MySQL variable "wait_timeout" that sets the number of seconds before timeout, and the default is 8 hours. We can set it to be much larger than that. Use show variables like 'wait_timeout'; to view your timeout setting and set wait_timeout=28800; to change it.
According to this issue, node-mysql doesn't prune pool connections after these sorts of disconnections. The module developers recommended using a heartbeat to keep the connection alive such as calling SELECT 1; on an interval. They also recommended using the node-pool module and its idleTimeoutMillis option to automatically prune idle connections.
If this happens when establishing a single reused connection, it can be avoided by establishing a connection pool instead.
For example, if you're doing something like this...
var db = require('mysql')
.createConnection({...})
.connect(function(err){});
do this instead...
var db = require('mysql')
.createPool({...});
Does this problem appear to be a disconnection of Node's connection to my MySQL server(s), perhaps due to a connection lifetime limitation?
Yes. The server has closed its end of the connection.
When using connection pools, node-mysql is supposed to gracefully handle disconnections and prune them from the pool. Is it not aware of the disconnect until I make a query, thus making the error unavoidable?
Correct, but it should handle the error internally, not pass it back to you. This appears to be a bug in node-mysql. Report it.
Considering that I see the "read ECONNRESET" error a lot in other StackOverflow posts, should I be looking elsewhere from MySQL to diagnose the problem?
It is either a bug in the node-mysql connection pool implementation, or else you haven't configured it properly to detect failures.
I have also been facing the same issue. Apparently it was happening because a backend process had been triggered on a table that was being referenced by my API.
This caused the table to go into a lock-wait state, and my query request failed with a connection reset. Though I'm wondering why I didn't receive a lock-wait error.

Why does Hibernate/JDBC/MySQL drop connections after a day or so?

I have several server processes that once in a while respond to messages from the clients and perform read-only transactions.
After about a few days that the servers are running, they stop working correctly and when I check it turns out that there's a whole bunch of messages about the connection being closed.
When I checked it out, it turned out that Hibernate by default works in some sort of development mode where connections are dropped after a few hours, and I started using c3p0 for connection pooling.
However, even with c3p0, I get that problem about 24 hours or so after the servers are started.
Has anyone encountered that problem and knows how to address it? I'm not familiar enough with the intricacies of configuring hibernate.
By default, the MySQL server times out a connection after 8 hours of inactivity (the wait_timeout setting) and drops it.
You can set autoReconnect=true in your JDBC URL, and this causes the driver to reconnect if you try to query after it has disconnected. But this has side effects; for instance session state and transactions cannot be maintained over a new connection.
If you use autoReconnect, the JDBC connection is reestablished, but it doesn't automatically re-execute your query that got the exception. So you do need to catch SQLException in your application and retry queries.
Read http://dev.mysql.com/doc/refman/5.0/en/connector-j-reference-configuration-properties.html for more details.
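A minimal sketch of the catch-and-retry approach (the DataSource, DAO name, and table are invented for illustration; a real implementation would usually inspect the SQLException's error code before deciding to retry):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

public class RetryingDao {
    private final DataSource dataSource; // pooled or plain DataSource

    public RetryingDao(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    // Retry once if the first attempt fails because the server dropped the connection.
    public int countRows() throws SQLException {
        try {
            return doCount();
        } catch (SQLException firstFailure) {
            // The stale connection is discarded; the next getConnection() yields a fresh one.
            return doCount();
        }
    }

    private int doCount() throws SQLException {
        try (Connection con = dataSource.getConnection();
             PreparedStatement ps = con.prepareStatement("SELECT COUNT(*) FROM my_table"); // placeholder table
             ResultSet rs = ps.executeQuery()) {
            rs.next();
            return rs.getInt(1);
        }
    }
}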
MySQL basically times out by default after 8 hours.
I got the same exception and resolved the issue after 3 hectic days. Check if you are using Hibernate 3. In this version it is required to explicitly mention the connection class name. Also check that the jar is on the classpath. Check the steps and comments in the link below:
http://hibernatedb.blogspot.com/2009/05/automatic-reconnect-from-hibernate-to.html
Remove autoReconnect=true
I changed my Hibernate configuration file by adding those lines, and it works for now:
<property name="connection.autoReconnect">true</property>
<property name="connection.autoReconnectForPools">true</property>
<property name="connection.is-connection-validation-required">true</property>
I think that using a c3p0 pool is better and recommended, but this solution is working for now and doesn't present any problem.
I left Tomcat on for 24 hours and the connection wasn't lost.
Please try it.
I would suggest that, in almost any client/server set-up, it's a bad idea to leave connections open when they're not needed.
I'm thinking specifically about DB2/z connections but it applies equally to all servers (database and otherwise). These connections consume resources at the server that could be best utilized elsewhere.
If you were to hold connections open in a corporate environment where tens of thousands of clients connect to the database, you would probably even bring a mainframe to its knees.
I'm all for the idea of connection pooling but not so much for the idea of trying to hold individual sessions open for ever.
My advice would be as follows:
1/ Have three sorts of connections in your connection pool:
closed (so not actually in your pool).
ready, meaning open but not in use by a client.
active, meaning in use by a client.
2/ Have your connection pooling maintain a small number of ready connections, minimum of N and maximum of M. N can be adjusted depending on the peak speed at which your clients request connections. If the number of ready connections ever drops to zero, you need a bigger N.
3/ When a client wants a connection, give them one of the ready ones (making it active), then immediately open a new one if there's now less than N ready (but don't make the client wait for this to complete, or you'll lose the advantage of pooling). This ensures there will always be at least N ready connections. If none are ready when the client wants one, they will have to wait around while you create a new one.
4/ When the client finishes with an active connection, return it to the ready state if there's less than M ready connections. Otherwise close it. This prevents you from having more than M ready connections.
5/ Periodically recycle the ready connections to prevent stale connections. If there are more than N ready connections, just close the oldest one. Otherwise close the oldest and re-open another.
This has the advantage of having enough ready AND youthful connections available in your connection pool without overloading the server.
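A much-simplified sketch of that N/M policy (the class and method names are invented for illustration, and the URL/credentials are placeholders; production pools such as c3p0 or HikariCP already do this kind of housekeeping, including performing the top-up asynchronously):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.ArrayDeque;
import java.util.Deque;

public class SimplePool {
    private static final String URL = "jdbc:mysql://localhost:3306/test"; // placeholder

    private final int minReady;  // N: keep at least this many ready connections
    private final int maxReady;  // M: never keep more than this many ready connections
    private final Deque<Connection> ready = new ArrayDeque<>();

    public SimplePool(int minReady, int maxReady) throws SQLException {
        this.minReady = minReady;
        this.maxReady = maxReady;
        while (ready.size() < minReady) {
            ready.addLast(open());
        }
    }

    // Hand out a ready connection (opening one if none are ready), then top up to N.
    public synchronized Connection acquire() throws SQLException {
        Connection con = ready.isEmpty() ? open() : ready.pollFirst();
        while (ready.size() < minReady) {
            ready.addLast(open()); // a real pool would do this asynchronously, off the caller's path
        }
        return con;
    }

    // Return a connection: keep it ready if we are under M, otherwise really close it.
    public synchronized void release(Connection con) throws SQLException {
        if (ready.size() < maxReady) {
            ready.addLast(con);
        } else {
            con.close();
        }
    }

    // Periodic housekeeping: retire the oldest ready connection so none grow stale.
    public synchronized void recycleOldest() throws SQLException {
        Connection oldest = ready.pollFirst();
        if (oldest != null) {
            oldest.close();
            if (ready.size() < minReady) {
                ready.addLast(open());
            }
        }
    }

    private Connection open() throws SQLException {
        return DriverManager.getConnection(URL, "user", "password"); // placeholder credentials
    }
}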