MaxScale keeps connections that are killed by the application - MySQL

We are trying to use MaxScale for DB load balancing (MySQL 5.7).
We are using Hikari for the application connection pool.
We would like Hikari to manage the connection pooling and MaxScale to follow the connections from Hikari's side.
For the test we only have the master connected to MaxScale.
Both Hikari and the application are set to a maximum of 20 connections.
On MaxScale we use the following configuration:
Host configuration:
persistpoolmax=0
persistmaxtime=60
Service configuration:
max_connections=20
We also commented out connection_timeout in the service section.
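For reference, here is a minimal sketch of how those settings sit in a MaxScale configuration file. The section names, router, listener port, and credentials are placeholders; only the parameters quoted above come from our actual setup.

[master1]
type=server
address=db-master.example.com
port=3306
protocol=MySQLBackend
persistpoolmax=0
persistmaxtime=60

[RW-Service]
type=service
router=readconnroute
servers=master1
user=maxscale_user
password=maxscale_pw
max_connections=20
# connection_timeout is left commented out, as noted above

[RW-Listener]
type=listener
service=RW-Service
protocol=MySQLClient
port=4006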
What we see is that when we stop the application, the connections remain open on MaxScale, and when we restart the application, it fails to connect because max_connections is exceeded.
What are we doing wrong?

Related

Unexpected MySql PoolExhaustedException

I am running two EC2 instances on AWS to serve my application, one for each. Each application can open up to 100 connections to MySQL by default. For the database I use RDS with a t2.medium instance, which can handle 312 connections at a time.
In general, my connection count does not get larger than 20. When I start sending notifications to the users to come to the application, the connection count increases a lot (which is expected). In some cases, the MySQL connections increase unexpectedly and my application starts to throw PoolExhaustedException:
PoolExhaustedException: [pool-7-thread-92] Timeout: Pool empty. Unable to fetch a connection in 30 seconds, none available[size:100; busy:100; idle:0; lastwait:30000].
When I check the database connections from Navicat, I see that there are about 200 connections and all of them are sleeping. I do not understand why the open connections are not used. I use standard Spring Data JPA to save and read my entities, which means I do not open or close the connections manually.
Unless I shut down one of the instances so that the MySQL connections are released, neither instance responds at all.
You can see the MySQL connection count graph here, and a piece of the log here.
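For what it's worth, the PoolExhaustedException above appears to come from the Tomcat JDBC pool. As a hedged illustration, here is a sketch of how that pool's limits and leak diagnostics can be configured through its PoolProperties API; the endpoint, credentials, and timeout values are placeholders, not taken from my actual setup.

import org.apache.tomcat.jdbc.pool.DataSource;
import org.apache.tomcat.jdbc.pool.PoolProperties;

public class PoolDiagnostics {
    public static DataSource buildDataSource() {
        PoolProperties p = new PoolProperties();
        p.setUrl("jdbc:mysql://my-rds-endpoint:3306/mydb"); // placeholder endpoint
        p.setDriverClassName("com.mysql.jdbc.Driver");
        p.setUsername("app");
        p.setPassword("secret");
        p.setMaxActive(100);              // matches the per-instance limit mentioned above
        p.setTestOnBorrow(true);          // validate a connection before handing it out
        p.setValidationQuery("SELECT 1");
        p.setRemoveAbandoned(true);       // reclaim connections that are never returned
        p.setRemoveAbandonedTimeout(60);  // seconds a borrowed connection may be held
        p.setLogAbandoned(true);          // log where a leaked connection was borrowed
        return new DataSource(p);
    }
}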

Broken Pipe exception on idle server

I am using a Dropwizard server to serve HTTP requests. This Dropwizard application is backed by a MySQL server for data storage. But when left idle (overnight) it gives a 'broken pipe' exception.
I did a few things that I thought might help. I set the JDBC URL in the YAML file to 'autoConnect=true'. I also added a 'checkOnBorrow' property. I have increased the JVM heap to 4 GB.
None of these fixes worked.
Also, wait_timeout and interactive_timeout for the MySQL server are set to 8 hours.
Do these need to be more/less?
Also, is there a configuration property that can be set in the Dropwizard YAML file? In other words, how is connection pooling managed in Dropwizard?
The problem:
The MySQL server has a timeout configured after which it terminates connections that have been idle in the connection pool. In my case this was the default (8 hours). However, the database connection pool is unaware of the terminated connections, so when a new request comes in, a dead connection is taken from the connection pool, which results in a 'Broken Pipe' exception.
Solution:
So to fix this, we need to get rid of the dead connections and make the pool aware when the connection it is trying to borrow is dead. This can be achieved by setting the following in the .yml configuration:
checkOnReturn: true
checkWhileIdle: true
checkOnBorrow: true
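For context, here is a sketch of the database block those flags typically live in; the exact key names vary between Dropwizard versions, and the URL, credentials, and validationQuery below are assumptions rather than values from my original setup.

database:
  driverClass: com.mysql.jdbc.Driver
  url: jdbc:mysql://localhost:3306/mydb
  user: app
  password: secret
  validationQuery: SELECT 1
  checkOnBorrow: true
  checkOnReturn: true
  checkWhileIdle: true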

Configure GlassFish JDBC connection pool to handle Amazon RDS Multi-AZ failover

I have a Java EE application running in GlassFish on EC2, with a MySQL database on Amazon RDS.
I am trying to configure the JDBC connection pool in order to minimize downtime in case of database failover.
My current configuration isn't working correctly during a Multi-AZ failover, as the standby database instance appears to be available in a couple of minutes (according to the AWS console) while my GlassFish instance remains stuck for a long time (about 15 minutes) before resuming work.
The connection pool is configured like this:
asadmin create-jdbc-connection-pool --restype javax.sql.ConnectionPoolDataSource \
--datasourceclassname com.mysql.jdbc.jdbc2.optional.MysqlConnectionPoolDataSource \
--isconnectvalidatereq=true --validateatmostonceperiod=60 --validationmethod=auto-commit \
--property user=$DBUSER:password=$DBPASS:databaseName=$DBNAME:serverName=$DBHOST:port=$DBPORT \
MyPool
If I use a Single-AZ db.m1.small instance and reboot the database from the console, GlassFish will invalidate the broken connections, throw some exceptions and then reconnect as soon as the database is available. In this setup I get less than 1 minute of downtime.
If I use a Multi-AZ db.m1.small instance and reboot with failover from the AWS console, I see no exception at all. The server halts completely, with all incoming requests timing out. After 15 minutes I finally get this:
Communication failure detected when attempting to perform read query outside of a transaction. Attempting to retry query. Error was: Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.3.2.v20111125-r10461): org.eclipse.persistence.exceptions.DatabaseException
Internal Exception: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
The last packet successfully received from the server was 940,715 milliseconds ago. The last packet sent successfully to the server was 935,598 milliseconds ago.
It appears as if each HTTP thread gets blocked on an invalid connection without getting an exception and so there's no chance to perform connection validation.
Downtime in the Multi-AZ case is always between 15-16 minutes, so it looks like a timeout of some sort but I was unable to change it.
Things I have tried without success:
connection leak timeout/reclaim
statement leak timeout/reclaim
statement timeout
using a different validation method
using MysqlDataSource instead of MysqlConnectionPoolDataSource
How can I set a timeout on stuck queries so that connections in the pool are reused, validated and replaced?
Or how can I let GlassFish detect a database failover?
As I commented before, it is because the sockets that are open and connected to the database don't realize the connection has been lost, so they stay connected until the OS socket timeout is triggered, which I have read is usually about 30 minutes.
To solve the issue you need to override the socket timeout in your JDBC connection string or in the JNDI connection configuration/properties, setting the socketTimeout parameter to a smaller value.
Keep in mind that any connection that takes longer than the defined value will be killed, even if it is being used (I haven't been able to confirm this; it is just what I read).
The other two parameters I mention in my comment are connectTimeout and autoReconnect.
Here's my JDBC Connection String:
jdbc:(...)&connectTimeout=15000&socketTimeout=60000&autoReconnect=true
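If the pool is defined in GlassFish (as in the question) rather than through a raw JDBC URL, the same driver properties can be attached to the existing pool. Here is a hedged sketch using the pool name from the question and asadmin's dotted-name syntax; the values simply mirror the connection string above:

asadmin set resources.jdbc-connection-pool.MyPool.property.connectTimeout=15000
asadmin set resources.jdbc-connection-pool.MyPool.property.socketTimeout=60000
asadmin set resources.jdbc-connection-pool.MyPool.property.autoReconnect=true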
I also disabled Java's DNS cache by doing
java.security.Security.setProperty("networkaddress.cache.ttl" , "0");
java.security.Security.setProperty("networkaddress.cache.negative.ttl" , "0");
I do this because Java doesn't honor the DNS TTLs, and when the failover takes place the DNS name stays the same but the IP changes.
Since you are using an application server, the properties that disable the DNS cache must be passed to the JVM (as -D options) when starting GlassFish, not set in the application itself.

MySQL Connection Pool for mysql client

Is there a way to have an established connection pool on the client side (running as a daemon), so it can be used by the mysql client on Linux?
mysql ==(named pipe/unix domain socket?)==> mysql connection pool (daemon) ==> mysql server
After reading your response, I can propose the following solution:
have a daemon application (resident in memory) which will accept connections from clients (via sockets or HTTP)
the clients will send a security token (so that they can be authorized) and the query that needs to be executed
the daemon application can have a pool of MySQL connections (a fixed number) and it will choose a connection (depending on the load) to execute the queries and return the result (if necessary)
This way you will have full control over the number of MySQL connections, and you will also have a single point where communication with the DB layer is made.
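Here is a minimal sketch of such a daemon in Java, assuming a simple line-based protocol (first line is the token, second line is the query) and a hard-coded token; the JDBC URL, listening port, and pool size are placeholders, and the MySQL JDBC driver is assumed to be on the classpath.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.Statement;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class MiniPoolDaemon {
    private static final int POOL_SIZE = 5;
    private static final BlockingQueue<Connection> POOL = new ArrayBlockingQueue<>(POOL_SIZE);

    public static void main(String[] args) throws Exception {
        // Open a fixed number of MySQL connections up front (URL and credentials are placeholders).
        for (int i = 0; i < POOL_SIZE; i++) {
            POOL.add(DriverManager.getConnection("jdbc:mysql://localhost:3306/test", "app", "secret"));
        }
        try (ServerSocket server = new ServerSocket(9000)) {   // the daemon listens on a local port
            while (true) {
                Socket client = server.accept();
                new Thread(() -> handle(client)).start();
            }
        }
    }

    private static void handle(Socket client) {
        try (BufferedReader in = new BufferedReader(new InputStreamReader(client.getInputStream()));
             PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
            String token = in.readLine();            // first line: security token
            String query = in.readLine();            // second line: SQL to execute
            if (!"expected-token".equals(token)) {   // placeholder authorization check
                out.println("ERROR: unauthorized");
                return;
            }
            Connection conn = POOL.take();           // borrow a connection; blocks if all are busy
            try (Statement st = conn.createStatement();
                 ResultSet rs = st.executeQuery(query)) {
                ResultSetMetaData md = rs.getMetaData();
                while (rs.next()) {                  // stream rows back as tab-separated text
                    StringBuilder row = new StringBuilder();
                    for (int c = 1; c <= md.getColumnCount(); c++) {
                        if (c > 1) row.append('\t');
                        row.append(rs.getString(c));
                    }
                    out.println(row);
                }
            } finally {
                POOL.put(conn);                      // always return the connection to the pool
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}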

Hibernate, C3P0, Mysql Connection Pooling

I recently switched from Apache DBCP connection pooling to C3P0 and have gone through my logs and seen that there are connection timeout issues. I haven't had this in the past with DBCP and Tomcat, so I'm wondering if it is a configuration issue or a driver issue.
Whenever I load a page after the server has been idle for a while, I'll see that some content is not sent (as the server cannot get a connection or something). When I refresh the page, all of the content is there.
Does anyone recommend using the MySQL connection pool since I'm using MySQL anyway? What are your experiences with the MySQL Connection Pool?
Walter
If the database you're working with is configured to time out connections after a certain period of inactivity, they are already closed and thus unusable when they are borrowed from the pool.
If you cannot or do not want to reconfigure your database server, you can configure C3P0 (and most other connection pools) to test the connections with a test query when they are borrowed from the pool. You can find more detailed information in the relevant section of the C3P0 documentation.
Edit: Of course you're right, it's also possible that there was a maximum idle time configured in the DBCP pool, causing connections to be removed from the pool before they would time out. Anyway, using either a test query or making sure the connections are removed from the pool before they time out should fix the problem.
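For reference, here is a sketch of what that looks like in a c3p0.properties file; the property names are standard C3P0 keys, and the values are illustrative and should stay below MySQL's wait_timeout.

# test idle connections periodically and verify each connection on checkout
c3p0.idleConnectionTestPeriod=300
c3p0.preferredTestQuery=SELECT 1
c3p0.testConnectionOnCheckout=true
# retire connections that have been idle longer than this many seconds
c3p0.maxIdleTime=240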
Just adding a link to another connection pool: BoneCP (http://jolbox.com), a connection pool that is faster than both C3P0 and DBCP.
As with C3P0 and DBCP, make sure you configure idle connection testing to avoid the scenario you described (probably MySQL's wait_timeout setting is kicking in; it is normally set to 8 hours).