I'm looking into an aborted connection:
2022-11-21T20:10:43.215738Z 640870 [Note] Aborted connection 640870 to db: '' user: '' host: '10.0.0.**' (Got timeout reading communication packets)
My understanding is that I need to figure out whether it is an interactive connection or not, and increase wait_timeout (or interactive_timeout) accordingly. If that has no effect, I'll need to adjust net_read_timeout or net_write_timeout and see.
I'd like to ask:
Is there a meta table that I can query for the connection type (interactive or not)?
There are how-tos on the internet on adjusting wait_timeout (or interactive_timeout), and all of them have rebooting the database as the last step. Is that really required? Given that an immediate effect is not required, sessions are supposed to come and go, and new sessions will pick up the new value (once the global value is set), I suppose it would be fine if there were a way to track how many connections are left with the old values?
Finally, can someone suggest a blog (or strategy) on handling aborted connections or adjusting the timeout values?
Thank you!
RDS MySQL version 5.7
There is only one client that sets the interactive flag by default: the mysql command-line client. All other client tools and connectors do not set this flag by default. You can choose to set the interactive flag, because it's a flag in the MySQL client API mysql_real_connect(). So you would know if you did it. In some connectors, you aren't calling the MySQL client API directly, and it isn't even an option to set this flag.
So for practical purposes, you can ignore the difference between wait_timeout and interactive_timeout, unless you're trying to tune the timeout of the mysql client in a shell window.
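If you want to check which timeout a given session actually picked up, you can ask the server from the client itself. A minimal JDBC sketch; the host, schema, and credentials are placeholders:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class CheckTimeout {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details; substitute your own.
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:mysql://example-host:3306/mydb", "user", "password");
             Statement stmt = conn.createStatement();
             // For a non-interactive client, the session wait_timeout is copied
             // from the global wait_timeout at connect time.
             ResultSet rs = stmt.executeQuery(
                 "SHOW SESSION VARIABLES LIKE 'wait_timeout'")) {
            while (rs.next()) {
                System.out.println(rs.getString(1) + " = " + rs.getString(2));
            }
        }
    }
}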
You should never need to restart the MySQL Server for this. The timeout means the server closed the session after it had been idle for wait_timeout seconds. The default value is 28800 seconds, which is 8 hours.
The proper way of handling this in application code is to catch exceptions, reconnect if necessary, and then retry whatever query was interrupted.
Some connectors have an auto-reconnect option. Auto-reconnect does not automatically retry the query.
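A minimal sketch of that pattern in plain JDBC; the URL, credentials, and query are placeholders, and a real application would inspect the exception before deciding to retry:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class RetryOnDroppedConnection {
    private static final String URL = "jdbc:mysql://example-host:3306/mydb"; // placeholder
    private Connection conn;

    private Connection connection() throws SQLException {
        if (conn == null || conn.isClosed()) {
            conn = DriverManager.getConnection(URL, "user", "password");
        }
        return conn;
    }

    // Run the query; if the connection was dropped, reconnect once and retry.
    public int countRows() throws SQLException {
        SQLException last = null;
        for (int attempt = 0; attempt < 2; attempt++) {
            try (Statement stmt = connection().createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM my_table")) {
                rs.next();
                return rs.getInt(1);
            } catch (SQLException e) {
                try { if (conn != null) conn.close(); } catch (SQLException ignore) {}
                conn = null; // force a fresh connection on the next attempt
                last = e;
            }
        }
        throw last;
    }
}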
In many applications, you are borrowing a connection from a connection pool, and the connection pool manager is supposed to test the connection before returning it to the caller. For example running SELECT 1; is a common test. The action of testing the connection causes a reconnect if the connection was not used for 8 hours.
If you don't use a connection pool (for example if your client program is PHP, which doesn't support connection pools as far as I know), then your client opens a new connection on request, so naturally it can't be idle for 8 hours if it's a new connection. Then the connection is closed as the request finishes, and presumably this request lasts less than 8 hours.
So this comes up only if your client opens a long-lived MySQL connection that is inactive for periods of 8 hours or more. In such cases, it's your responsibility to test the connection and reopen it if necessary before running a query.
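One way to do that test from plain JDBC is Connection.isValid(), which pings the server (JDBC 4+). A sketch with placeholder connection details:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class LongLivedConnection {
    private static final String URL = "jdbc:mysql://example-host:3306/mydb"; // placeholder
    private Connection conn;

    // Returns a usable connection, reopening it if the server dropped it.
    public Connection get() throws SQLException {
        // isValid() sends a ping; the argument is a timeout in seconds.
        if (conn == null || conn.isClosed() || !conn.isValid(2)) {
            conn = DriverManager.getConnection(URL, "user", "password");
        }
        return conn;
    }
}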
Related
I am running my application on two EC2 instances on AWS, one copy per instance. Each application can open up to 100 connections to MySQL by default. For the database I use RDS with a t2.medium instance, which can handle 312 connections at a time.
In general, my connection count does not get larger than 20. When I start sending notifications to users to bring them to the application, the connection count increases a lot (which is expected). In some cases, the MySQL connections increase unexpectedly and my application starts to throw PoolExhaustedException:
PoolExhaustedException: [pool-7-thread-92] Timeout: Pool empty. Unable to fetch a connection in 30 seconds, none available[size:100; busy:100; idle:0; lastwait:30000].
When I check the database connections from Navicat, I see that there are about 200 connections and all of them are sleeping. I do not understand why the open connections are not reused. I use standard Spring Data JPA to save and read my entities, which means I do not open or close the connections manually.
Unless I shut down one of the instances so that its MySQL connections are released, neither instance responds at all.
You can see the graph of how the MySQL connection count changes here, and a piece of the log here.
I get the following error accessing a MySQL database from JDBC:
java.sql.SQLNonTransientConnectionException: Too many connections
At the same time I am monitoring my connections. I added a counter that counts every open and close. The error occurs when I get to 380 opened and closed connections within 3 minutes.
Is it possible that it takes some time for MySQL to actually close a connection, so that too many are still open even though I have sent a command to close them?
Here are a few points that might be the reason; I am just guessing.
MySQL connections are maintained by the MySQL connection manager, so once a connection is released the manager decides whether to kill that thread or return it to the pool.
In some cases, if a MySQL ResultSet is not closed after the data is retrieved and the connection is closed at that moment, returning it to the pool can be delayed (see the sketch below).
These are the two points I think might cause this, but I am not sure whether they are correct.
There could be other reasons that I am not aware of.
Hope this gives you some idea.
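On the ResultSet point: the simplest safeguard is to make sure the ResultSet, Statement, and Connection are always closed, even when an exception is thrown. A sketch using try-with-resources (Java 7+); the connection details and table are made up:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class CloseEverything {
    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:mysql://example-host:3306/mydb", "user", "password");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT id FROM my_table")) {
            while (rs.next()) {
                System.out.println(rs.getLong("id"));
            }
        } // rs, stmt, and conn are closed here in reverse order, even on error
    }
}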
For some reason, when I open a connection to the Percona MySQL database on my HostGator website, it disconnects/closes the connection about 10 seconds after fetching the query results.
I typically wouldn't care, but HeidiSQL freezes up, preventing me from exporting or sorting the returned rows with its UI unless I connect again.
Any thoughts on making the connection last longer? Is it something I can do myself, or will it require a dedicated server or some upgrade? (I'm currently on a shared one.) Thanks!
Sounds like it may be the "wait" timeout on the MySQL connection.
SHOW VARIABLES LIKE 'wait_timeout'
That's the amount of time (in seconds) that MySQL will leave the session (the database connection) open while it's idle, waiting for another statement to be issued. After this amount of time expires, MySQL can close the connection.
You should be able to change this for a session. For example, to change the timeout to 5 minutes:
SET wait_timeout = 300
Verify the setting with the SHOW VARIABLES statement again.
NOTE: This is per connection; it only affects the current session. Every new connection will inherit its own wait_timeout value from the global setting.
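If your client can issue the statement itself, you can raise the timeout for your own session right after connecting. A sketch in plain JDBC; connection details are placeholders:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class RaiseSessionTimeout {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:mysql://example-host:3306/mydb", "user", "password");
             Statement stmt = conn.createStatement()) {
            // Affects only this session; other connections keep their own value.
            stmt.execute("SET SESSION wait_timeout = 300");
            // ... run queries; the server now allows 5 minutes of idle time
        }
    }
}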
(This is only a guess. There's insufficient information in the question to make a precise diagnosis. It could be something other than the MySQL server that's closing the database connection; for example, it could be your connection pool settings, if you are using a connection pool.)
I'm using Play Framework 2.2.1, MySQL 5.5 and sorm 0.3.10
Since MySQL drops inactive connections after the specified idle timeout, I'm getting this exception in my app:
[CommunicationsException: Communications link failure The last packet successfully received from the server was 162 701 milliseconds ago. The last packet sent successfully to the server was 0 milliseconds ago.]
As far as I understand, SORM uses the c3p0 connection pool. Is it possible to configure c3p0 or SORM to ping MySQL at a specified interval, or to reconnect automatically after the connection is dropped?
0.3.13-SNAPSHOT of SORM introduces a timeout parameter for Instance, with a default setting of 30. This setting determines the number of seconds the underlying connections are allowed to be idle. When the timeout is reached, a sort of "keepalive" request is sent to the db and the timer is reset. The timer is also reset whenever any normal query is made. The implementation simply relies on the idleConnectionTestPeriod of c3p0.
For further discussion, suggestions, and reports, please visit the associated ticket on the issue tracker or open another one. If there are no complaints in the associated ticket, this change will make it into the 0.3.13 release.
It's very easy to resolve this issue with c3p0, but I'd double-check whether you are using it: BoneCP is the default Play 2 connection pool. It would be easy to solve this problem with BoneCP too!
In c3p0, the config params maxIdleTime, maxConnectionAge, or (much better yet) a connection-testing regime would help. See http://www.mchange.com/projects/c3p0/#configuring_connection_testing
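For example, a c3p0 DataSource wired up with those parameters plus a test query (the pool numbers and the JDBC URL are illustrative only, not recommendations):

import com.mchange.v2.c3p0.ComboPooledDataSource;

public class PoolSetup {
    public static ComboPooledDataSource newPool() throws Exception {
        ComboPooledDataSource cpds = new ComboPooledDataSource();
        cpds.setDriverClass("com.mysql.jdbc.Driver");           // Connector/J 5.x driver class
        cpds.setJdbcUrl("jdbc:mysql://example-host:3306/mydb"); // placeholder URL
        cpds.setUser("user");
        cpds.setPassword("password");
        cpds.setMaxIdleTime(600);              // close connections idle for 10 minutes
        cpds.setMaxConnectionAge(3600);        // retire connections after 1 hour regardless
        cpds.setIdleConnectionTestPeriod(300); // test idle connections every 5 minutes
        cpds.setPreferredTestQuery("SELECT 1");
        return cpds;
    }
}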
If you want to use c3p0 in Play 2, see https://github.com/swaldman/c3p0-play
I have several server processes that once in a while respond to messages from the clients and perform read-only transactions.
After the servers have been running for a few days, they stop working correctly, and when I check it turns out there are a whole bunch of messages about the connection being closed.
When I checked it out, it turned out that Hibernate by default works in some sort of development mode where connections are dropped after a few hours, so I started using c3p0 for connection pooling.
However, even with c3p0, I get that problem about 24 hours or so after the servers are started.
Has anyone encountered that problem and knows how to address it? I'm not familiar enough with the intricacies of configuring hibernate.
By default, the MySQL server times out a connection after 8 hours of inactivity and drops it.
You can set autoReconnect=true in your JDBC URL, and this causes the driver to reconnect if you try to query after it has disconnected. But this has side effects; for instance session state and transactions cannot be maintained over a new connection.
If you use autoReconnect, the JDBC connection is reestablished, but it doesn't automatically re-execute your query that got the exception. So you do need to catch SQLException in your application and retry queries.
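For example (host and schema are placeholders), the flag lives in the URL, and the catch-and-retry still belongs in application code:

import java.sql.Connection;
import java.sql.DriverManager;

public class AutoReconnectUrl {
    public static void main(String[] args) throws Exception {
        // autoReconnect makes the driver reopen a dropped connection, but the
        // statement that hit the dropped connection still fails once.
        String url = "jdbc:mysql://example-host:3306/mydb?autoReconnect=true";
        try (Connection conn = DriverManager.getConnection(url, "user", "password")) {
            // ... run queries, catching SQLException and retrying as described above
        }
    }
}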
Read http://dev.mysql.com/doc/refman/5.0/en/connector-j-reference-configuration-properties.html for more details.
MySQL basically times out connections by default after 8 hours.
I got the same exception and resolved the issue after 3 hectic days. Check whether you are using Hibernate 3; in this version you must explicitly mention the connection class name. Also check that the jar is on the classpath. Check the steps and comments in the link below:
http://hibernatedb.blogspot.com/2009/05/automatic-reconnect-from-hibernate-to.html
Remove autoReconnect=true
I changed my Hibernate configuration file by adding these lines, and it works for now:
<property name="connection.autoReconnect">true</property>
<property name="connection.autoReconnectForPools">true</property>
<property name="connection.is-connection-validation-required">true</property>
I think that using a c3p0 pool is better and recommended, but this solution is working for now and doesn't present any problem.
I left Tomcat on for 24 hours and the connection wasn't lost.
Please try it.
I would suggest that, in almost any client/server set-up, it's a bad idea to leave connections open when they're not needed.
I'm thinking specifically about DB2/z connections but it applies equally to all servers (database and otherwise). These connections consume resources at the server that could be best utilized elsewhere.
If you were to hold connections open in a corporate environment where tens of thousands of clients connect to the database, you would probably bring even a mainframe to its knees.
I'm all for the idea of connection pooling, but not so much for the idea of trying to hold individual sessions open forever.
My advice would be as follows:
1/ Have three sorts of connections in your connection pool:
closed (so not actually in your pool).
ready, meaning open but not in use by a client.
active, meaning in use by a client.
2/ Have your connection pooling maintain a small number of ready connections, minimum of N and maximum of M. N can be adjusted depending on the peak speed at which your clients request connections. If the number of ready connections ever drops to zero, you need a bigger N.
3/ When a client wants a connection, give them one of the ready ones (making it active), then immediately open a new one if there are now fewer than N ready (but don't make the client wait for this to complete, or you'll lose the advantage of pooling). This ensures there will always be at least N ready connections. If none are ready when the client wants one, they will have to wait while you create a new one.
4/ When the client finishes with an active connection, return it to the ready state if there are fewer than M ready connections; otherwise close it. This prevents you from having more than M ready connections.
5/ Periodically recycle the ready connections to prevent staleness: close the oldest ready connection, and open a fresh one to replace it unless there were already more than N ready.
This has the advantage of having enough ready AND youthful connections available in your connection pool without overloading the server.
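A minimal, deliberately single-threaded sketch of that policy over plain JDBC (a production pool would add locking, background refills, and connection testing; the URL and credentials are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.ArrayDeque;
import java.util.Deque;

public class TinyPool {
    private static final String URL = "jdbc:mysql://example-host:3306/mydb";
    private final int minReady; // N
    private final int maxReady; // M
    private final Deque<Connection> ready = new ArrayDeque<>(); // oldest at the head

    public TinyPool(int minReady, int maxReady) throws SQLException {
        this.minReady = minReady;
        this.maxReady = maxReady;
        for (int i = 0; i < minReady; i++) ready.addLast(open());
    }

    private Connection open() throws SQLException {
        return DriverManager.getConnection(URL, "user", "password");
    }

    // 3/ Hand out a ready connection (it becomes active), then top back up to N.
    // A real pool would refill in the background so the caller never waits here.
    public Connection borrow() throws SQLException {
        Connection conn = ready.pollFirst();
        if (conn == null) conn = open(); // none ready: the client waits for a new one
        while (ready.size() < minReady) ready.addLast(open());
        return conn;
    }

    // 4/ Return an active connection to the ready state, unless we already hold M.
    public void release(Connection conn) throws SQLException {
        if (ready.size() < maxReady) ready.addLast(conn);
        else conn.close();
    }

    // 5/ Recycle: close the oldest ready connection; replace it with a fresh one
    //    unless there were already more than N ready.
    public void recycleOldest() throws SQLException {
        Connection oldest = ready.pollFirst();
        if (oldest == null) return;
        boolean hadSurplus = ready.size() >= minReady; // more than N before removal
        oldest.close();
        if (!hadSurplus) ready.addLast(open());
    }
}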