I am using a Dropwizard server to serve HTTP requests. This Dropwizard application is backed by a MySQL server for data storage, but when left idle (overnight) it gives a 'broken pipe' exception.
I did a few things that I thought might help. I set 'autoConnect=true' on the JDBC URL in the YAML file. I also added a 'checkOnBorrow' property. I have increased the JVM to use 4 GB.
None of these fixes worked.
Also, wait_timeout and interactive_timeout for the MySQL server are set to 8 hours.
Does this need to be more/less?
Also, is there a configuration property that can be set in the Dropwizard YAML file? In other words, how is connection pooling managed in Dropwizard?
The problem:
The MySQL server has a timeout configured, after which it terminates idle connections. In my case this was the default (8 hrs). However, the database connection pool is unaware of the terminated connections in the pool, so when a new request comes in, a dead connection is handed out from the connection pool, which results in a 'broken pipe' exception.
Solution:
So to fix this, we need to get rid of the dead connections and make the pool check whether the connection it is about to hand out is dead. This can be achieved by setting the following in the .yml configuration:
checkOnReturn: true
checkWhileIdle: true
checkOnBorrow: true
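For context, a minimal sketch of what the database section of the .yml might look like with validation switched on. The exact key names depend on your Dropwizard version (recent DataSourceFactory releases spell them checkConnectionOnBorrow, checkConnectionWhileIdle and checkConnectionOnReturn), and the URL and credentials below are placeholders:
database:
  driverClass: com.mysql.jdbc.Driver
  url: jdbc:mysql://localhost:3306/mydb   # placeholder host/schema
  user: app
  password: secret
  validationQuery: "SELECT 1"             # cheap query used to test connections
  checkConnectionOnBorrow: true
  checkConnectionWhileIdle: true
  checkConnectionOnReturn: true
  evictionInterval: 10s                   # how often idle connections are checked
  minIdleTime: 1 minute                   # idle time before a connection is eligible for eviction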
Related
I'm looking into an aborted connection:
2022-11-21T20:10:43.215738Z 640870 [Note] Aborted connection 640870 to db: '' user: '' host: '10.0.0.**' (Got timeout reading communication packets)
My understanding is that I need to figure out whether it is an interactive connection or not, and increase wait_timeout (or interactive_timeout) accordingly. If that has no effect, then I'll need to adjust net_read_timeout or net_write_timeout and see.
I'd like to ask:
1. Is there a meta table that I can query for the connection type (interactive or not)?
2. The how-tos on the internet on adjusting wait_timeout (or interactive_timeout) all have rebooting the database as the last step. Is that really required? Given that an immediate effect is not required, that sessions are supposed to come and go, and that new sessions will pick up the new value (after the system value is set), I suppose it would be OK if there were a way to track how many connections are left with the old values?
3. Finally, can someone suggest a blog (or strategy) on handling aborted connections or adjusting the timeout values?
Thank you!
RDS MySQL version 5.7
There is only one client that sets the interactive flag by default: the mysql command-line client. All other client tools and connectors do not set this flag by default. You can choose to set the interactive flag, because it's a flag in the MySQL client API mysql_real_connect(). So you would know if you did it. In some connectors, you aren't calling the MySQL client API directly, and it isn't even an option to set this flag.
So for practical purposes, you can ignore the difference between wait_timeout and interactive_timeout, unless you're trying to tune the timeout of the mysql client in a shell window.
You should never need to restart the MySQL server for this. The timeout means the server closed the session after there has been no activity for wait_timeout seconds. The default value is 28800 seconds, which is 8 hours.
The proper way of handling this in application code is to catch exceptions, reconnect if necessary, and then retry whatever query was interrupted.
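A rough Java sketch of that catch/reconnect/retry pattern (the class name, the URL and the single-retry policy are illustrative assumptions, not part of any library mentioned here):
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class RetryingExecutor {
    // Placeholder URL: substitute your own host, schema and credentials.
    private static final String URL =
        "jdbc:mysql://localhost:3306/mydb?user=app&password=secret";

    private Connection conn;

    private Connection connection() throws SQLException {
        if (conn == null || conn.isClosed()) {
            conn = DriverManager.getConnection(URL);   // (re)connect
        }
        return conn;
    }

    // Run the statement; if it fails (for example because the server dropped
    // the session), reconnect and retry it exactly once.
    public int execute(String sql) throws SQLException {
        try (Statement st = connection().createStatement()) {
            return st.executeUpdate(sql);
        } catch (SQLException e) {
            conn = null;                                // discard the dead connection
            try (Statement st = connection().createStatement()) {
                return st.executeUpdate(sql);           // retry the interrupted query
            }
        }
    }
}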
Some connectors have an auto-reconnect option. Auto-reconnect does not automatically retry the query.
In many applications, you are borrowing a connection from a connection pool, and the connection pool manager is supposed to test the connection before returning it to the caller. For example, running SELECT 1; is a common test. The act of testing the connection causes a reconnect if the connection was not used for 8 hours.
If you don't use a connection pool (for example if your client program is PHP, which doesn't support connection pools as far as I know), then your client opens a new connection on request, so naturally it can't be idle for 8 hours if it's a new connection. Then the connection is closed as the request finishes, and presumably this request lasts less than 8 hours.
So this comes up only if your client opens a long-lived MySQL connection that is inactive for periods of 8 hours or more. In such cases, it's your responsibility to test the connection and reopen it if necessary before running a query.
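For a long-lived connection that you test before use rather than retry after a failure, a small sketch using JDBC's Connection.isValid() (again, the class name and URL are placeholders):
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class LongLivedConnection {
    // Placeholder URL: substitute your own host, schema and credentials.
    private static final String URL =
        "jdbc:mysql://localhost:3306/mydb?user=app&password=secret";

    private Connection conn;

    // Returns a live connection, reopening it if the server closed it
    // after wait_timeout seconds of inactivity.
    public synchronized Connection get() throws SQLException {
        if (conn == null || !conn.isValid(5)) {   // 5-second validation timeout
            conn = DriverManager.getConnection(URL);
        }
        return conn;
    }
}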
I have a Java EE application running in GlassFish on EC2, with a MySQL database on Amazon RDS.
I am trying to configure the JDBC connection pool in order to minimize downtime in case of a database failover.
My current configuration isn't working correctly during a Multi-AZ failover, as the standby database instance appears to be available in a couple of minutes (according to the AWS console) while my GlassFish instance remains stuck for a long time (about 15 minutes) before resuming work.
The connection pool is configured like this:
asadmin create-jdbc-connection-pool --restype javax.sql.ConnectionPoolDataSource \
--datasourceclassname com.mysql.jdbc.jdbc2.optional.MysqlConnectionPoolDataSource \
--isconnectvalidatereq=true --validateatmostonceperiod=60 --validationmethod=auto-commit \
--property user=$DBUSER:password=$DBPASS:databaseName=$DBNAME:serverName=$DBHOST:port=$DBPORT \
MyPool
If I use a Single-AZ db.m1.small instance and reboot the database from the console, GlassFish will invalidate the broken connections, throw some exceptions and then reconnect as soon the database is available. In this setup I get less than 1 minute of downtime.
If I use a Multi-AZ db.m1.small instance and reboot with failover from the AWS console, I see no exception at all. The server halts completely, with all incoming requests timing out. After 15 minutes I finally get this:
Communication failure detected when attempting to perform read query outside of a transaction. Attempting to retry query. Error was: Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.3.2.v20111125-r10461): org.eclipse.persistence.exceptions.DatabaseException
Internal Exception: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
The last packet successfully received from the server was 940,715 milliseconds ago. The last packet sent successfully to the server was 935,598 milliseconds ago.
It appears as if each HTTP thread gets blocked on an invalid connection without getting an exception and so there's no chance to perform connection validation.
Downtime in the Multi-AZ case is always between 15-16 minutes, so it looks like a timeout of some sort but I was unable to change it.
Things I have tried without success:
connection leak timeout/reclaim
statement leak timeout/reclaim
statement timeout
using a different validation method
using MysqlDataSource instead of MysqlConnectionPoolDataSource
How can I set a timeout on stuck queries so that connections in the pool are reused, validated and replaced?
Or how can I let GlassFish detect a database failover?
As I commented before, this happens because the sockets that are open and connected to the database don't realize the connection has been lost, so they stay connected until the OS socket timeout is triggered, which I've read is usually about 30 minutes.
To solve the issue, you need to override the socket timeout in your JDBC connection string or in the JNDI connection configuration/properties, setting the socketTimeout parameter to a smaller value.
Keep in mind that any connection taking longer than the defined value will be killed, even if it is in use (I haven't been able to confirm this; it's just what I read).
The other two parameters I mention in my comment are connectTimeout and autoReconnect.
Here's my JDBC Connection String:
jdbc:(...)&connectTimeout=15000&socketTimeout=60000&autoReconnect=true
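Written out against a hypothetical RDS endpoint (the host, schema and credentials are placeholders; both timeouts are in milliseconds), the full string looks something like:
jdbc:mysql://mydb.xxxxxxxxxxxx.us-east-1.rds.amazonaws.com:3306/mydb?user=app&password=secret&connectTimeout=15000&socketTimeout=60000&autoReconnect=true
Since the pool above is defined through data-source properties rather than a URL, the same options can presumably also be added to the asadmin --property list, as Connector/J exposes its connection options as bean properties on its data-source classes.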
I also disabled Java's DNS cache by doing
java.security.Security.setProperty("networkaddress.cache.ttl" , "0");
java.security.Security.setProperty("networkaddress.cache.negative.ttl" , "0");
I do this because Java doesn't honor the DNS TTLs, and when the failover takes place, the DNS name stays the same but the IP changes.
Since you are using an application server, the parameters to disable the DNS cache must be passed to the JVM when starting GlassFish with -Dnet, not set in the application itself.
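If you go the JVM-option route, a hedged sketch (this assumes the sun.net.inetaddr.* system properties as the -D form of these settings; check your JDK's documentation):
asadmin create-jvm-options "-Dsun.net.inetaddr.ttl=0"
asadmin create-jvm-options "-Dsun.net.inetaddr.negative.ttl=0"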
I'm using Play Framework 2.2.1, MySQL 5.5 and sorm 0.3.10
Since MySQL drops inactive connections after specified idle timeout, I'm getting this exception in my app:
[CommunicationsException: Communications link failure The last packet successfully received from the server was 162 701 milliseconds ago. The last packet sent successfully to the server was 0 milliseconds ago.]
As far as I understand, SORM is using the c3p0 connection pool. Is it possible to somehow configure c3p0 or SORM to ping MySQL at a specified interval, or to reconnect automatically after the connection has been dropped?
0.3.13-SNAPSHOT of SORM introduces a timeout parameter for Instance, with a default setting of 30. This setting determines the number of seconds the underlying connections are allowed to be idle. When the timeout is reached, a sort of "keepalive" request is sent to the db and the timer is reset. The timer also gets reset when any normal query is made. The implementation simply relies on the idleConnectionTestPeriod of C3P0.
For further discussion, suggestions and reports, please visit the associated ticket on the issue tracker or open another one. If there are no complaints in the associated ticket, this change will make it into the 0.3.13 release.
It's very easy to resolve this issue with c3p0, but I'd double-check whether you are actually using it. BoneCP is the default Play 2 connection pool. It would be easy to solve this problem with BoneCP too!
In c3p0, the config params maxIdleTime, maxConnectionAge, or (much better yet) a connection-testing regime would help. See http://www.mchange.com/projects/c3p0/#configuring_connection_testing
If you want to use c3p0 in Play 2, see https://github.com/swaldman/c3p0-play
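Outside of Play, a plain-Java sketch of that c3p0 setup (the URL and credentials are placeholders; the values are illustrative, not recommendations):
import com.mchange.v2.c3p0.ComboPooledDataSource;

public class PoolSetup {
    public static ComboPooledDataSource newPool() throws Exception {
        ComboPooledDataSource ds = new ComboPooledDataSource();
        ds.setDriverClass("com.mysql.jdbc.Driver");          // Connector/J 5.x driver class
        ds.setJdbcUrl("jdbc:mysql://localhost:3306/mydb");    // placeholder host/schema
        ds.setUser("app");
        ds.setPassword("secret");
        ds.setIdleConnectionTestPeriod(300);   // test idle connections every 5 minutes
        ds.setPreferredTestQuery("SELECT 1");  // cheap validation query
        ds.setTestConnectionOnCheckout(true);  // validate on borrow (costs a round trip)
        ds.setMaxIdleTime(600);                // retire connections idle longer than 10 minutes
        return ds;
    }
}
idleConnectionTestPeriod plus a cheap preferredTestQuery is usually enough; testConnectionOnCheckout is the most reliable option but adds a round trip to every borrow.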
With the node-mysql module, there are two connection options: a single connection and a connection pool. What is the best way to set up a connection to a MySQL database: using a single global connection for all requests, or creating a pool of connections and taking one from the pool for each request? Or is there a better way to do this? Will I run into problems using just a single shared connection for all requests?
Maintaining a single connection for the whole app might be a little bit tricky.
Normally, you want to open a connection to your MySQL instance and wait for it to be established.
From that point you can start using the database (maybe start an HTTP(S) server, process requests and query the database as needed).
The problem is when the connection gets destroyed (ex. due to a network error).
Since you're using one connection for the whole application, you must reconnect to MySQL and somehow queue all queries while the connection is being established. It's relatively hard to implement such functionality properly.
node-mysql has a built-in pooler. A pooler creates a few connections and keeps them in a pool. Whenever you want to close a connection obtained from the pool, the pooler returns it to the pool instead of actually closing it. Connections in the pool can then be reused by subsequent open calls.
IMO, using a connection pool is obviously simpler and shouldn't affect performance much.
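For what it's worth, a minimal sketch of the pooled setup with node-mysql (the host, credentials and connectionLimit are placeholders):
var mysql = require('mysql');

// One pool for the whole app; connections are created lazily up to connectionLimit.
var pool = mysql.createPool({
  host: 'localhost',        // placeholder
  user: 'app',
  password: 'secret',
  database: 'mydb',
  connectionLimit: 10
});

// pool.query() borrows a connection, runs the query and releases it automatically.
pool.query('SELECT 1 AS ok', function (err, rows) {
  if (err) throw err;
  console.log(rows[0].ok);
});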
I recently switched from Apache DBCP connection pooling to C3P0 and have gone through my logs to see that there are connection timeout issues. I haven't had this in the past with DBCP and Tomcat, so I'm wondering if it is a configuration issue or driver issue.
Whenever I load a page after the server has been idle for a while, I'll see that some content is not sent (as the server cannot get a connection or something). When I refresh the page, all of the content is there.
Does anyone recommend using the MySQL connection pool since I'm using MySQL anyway? What are your experiences with the MySQL Connection Pool?
Walter
If the database you're working with is configured to time out connections after a certain period of inactivity, they are already closed and thus unusable when they are borrowed from the pool.
If you cannot or do not want to reconfigure your database server, you can configure C3P0 (and most other connection pools) to test the connections with a test query when they are borrowed from the pool. You can find more detailed information in the relevant section of the C3P0 documentation.
Edit: Of course you're right, it's also possible that there was a maximum idle time configured in the DBCP pool, causing them to be removed from the pool before they would time out. Anyway, using either a test query or making sure the connections are removed from the pool before they time out should fix the problem.
Just adding a link to another connection pool: BoneCP (http://jolbox.com), a connection pool that is faster than both C3P0 and DBCP.
As with C3P0 and DBCP, make sure you configure idle connection testing to avoid the scenario you described (probably MySQL's wait_timeout setting is kicking in; it's normally set to 8 hours).
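A sketch of idle-connection testing with BoneCP (a plain-Java setup; the URL and credentials are placeholders, and the BoneCPConfig setter names should be checked against the version you're running):
import com.jolbox.bonecp.BoneCP;
import com.jolbox.bonecp.BoneCPConfig;

public class BoneCpSetup {
    public static BoneCP newPool() throws Exception {
        Class.forName("com.mysql.jdbc.Driver");                // load the Connector/J driver
        BoneCPConfig config = new BoneCPConfig();
        config.setJdbcUrl("jdbc:mysql://localhost:3306/mydb"); // placeholder host/schema
        config.setUsername("app");
        config.setPassword("secret");
        // Probe idle connections well before MySQL's 8-hour wait_timeout closes them.
        config.setIdleConnectionTestPeriodInMinutes(5);
        config.setConnectionTestStatement("SELECT 1");
        // Retire connections that sit idle too long instead of letting the server kill them.
        config.setIdleMaxAgeInMinutes(30);
        return new BoneCP(config);
    }
}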