WildFly datasource pool configuration: allocation-retry

I have some related questions about the datasource pool configuration for WildFly.
Is it correct that the default value for <allocation-retry> is 0? Does that mean any code requesting a new connection will fail immediately if the pool has no free connections left, and that the request is not queued?
I'd like to enable queueing for roughly 30 seconds. Which of the following configurations is better, and why?
Option A: <allocation-retry>1</allocation-retry> <allocation-retry-wait-millis>30000</allocation-retry-wait-millis>
Option B: <allocation-retry>30</allocation-retry> <allocation-retry-wait-millis>1000</allocation-retry-wait-millis>
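To make the trade-off concrete, here is how I picture the two options behaving. The sketch below is a rough simulation based on my own assumption that allocation-retry is the number of extra attempts after the first one fails and allocation-retry-wait-millis is the pause between attempts; it is not taken from the WildFly/IronJacamar sources, so please correct me if the semantics differ.

public class AllocationRetrySketch {

    /** A pretend pool that has no free connections until some point in time. */
    interface Pool {
        boolean tryAllocate();
    }

    // Assumed semantics: one initial attempt, then allocationRetry further attempts,
    // each preceded by a retryWaitMillis pause. Check the WildFly docs before relying on this.
    static boolean allocateWithRetry(Pool pool, int allocationRetry, long retryWaitMillis)
            throws InterruptedException {
        if (pool.tryAllocate()) {
            return true;                   // a connection was free on the first attempt
        }
        for (int attempt = 1; attempt <= allocationRetry; attempt++) {
            Thread.sleep(retryWaitMillis); // wait before re-checking the pool
            if (pool.tryAllocate()) {
                return true;               // a connection was returned in the meantime
            }
        }
        return false;                      // caller sees an allocation failure
    }

    public static void main(String[] args) throws InterruptedException {
        long freeAt = System.currentTimeMillis() + 5_000;  // pool frees up after 5 seconds
        Pool pool = () -> System.currentTimeMillis() >= freeAt;

        // Option A (1 retry x 30000 ms): re-checks once, after 30 s  -> ~30 s latency here.
        // Option B (30 retries x 1000 ms): re-checks every second    -> ~5 s latency here.
        System.out.println("Option B got a connection: " + allocateWithRetry(pool, 30, 1_000));
    }
}

Under that assumption, both options give up after roughly 30 seconds, but Option B re-checks the pool every second and picks up a freed connection much sooner, while Option A only re-checks once, after the full 30-second wait.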

Related

Broken Pipe exception on idle server

I am using a Dropwizard server to serve HTTP requests. This Dropwizard application is backed by a MySQL server for data storage, but when left idle overnight it gives a 'broken pipe' exception.
I did a few things that I thought might help. I appended 'autoReconnect=true' to the JDBC URL in the YAML file, I added a 'checkOnBorrow' property, and I increased the JVM heap to 4 GB.
None of these fixes worked.
Also, wait_timeout and interactive_timeout for the MySQL server are set to 8 hours.
Do these need to be higher or lower?
Also, is there a configuration property that can be set in the Dropwizard YAML file? In other words, how is connection pooling managed in Dropwizard?
The problem:
The MySQL server has a timeout after which it terminates idle connections; in my case this was the default (8 hours). However, the connection pool is unaware that these connections have been terminated, so when a new request comes in, a dead connection is borrowed from the pool, which results in a 'Broken Pipe' exception.
Solution:
To fix this, we need to get rid of the dead connections and make the pool check whether the connection it is about to hand out is still alive. This can be achieved by setting the following in the .yml configuration (a rough programmatic equivalent is sketched after the list):
checkOnReturn: true
checkWhileIdle: true
checkOnBorrow: true
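If you ever need to build the pool by hand rather than through Dropwizard's .yml, the sketch below sets roughly the same three flags programmatically on Tomcat's JDBC pool, which, as far as I know, is the pool Dropwizard's DataSourceFactory wraps. The JDBC URL, driver class and credentials are placeholders.

import org.apache.tomcat.jdbc.pool.DataSource;
import org.apache.tomcat.jdbc.pool.PoolProperties;

// Sketch only: programmatic equivalent of checkOnBorrow / checkOnReturn / checkWhileIdle,
// assuming Tomcat's JDBC pool. URL, driver and credentials are placeholders.
public class ValidatedPoolSketch {
    public static DataSource buildDataSource() {
        PoolProperties p = new PoolProperties();
        p.setUrl("jdbc:mysql://localhost:3306/mydb");   // placeholder URL
        p.setDriverClassName("com.mysql.jdbc.Driver");  // placeholder driver class
        p.setUsername("app");                           // placeholder credentials
        p.setPassword("secret");

        p.setValidationQuery("SELECT 1");  // query used to test a connection
        p.setTestOnBorrow(true);           // ~ checkOnBorrow: validate before handing out
        p.setTestOnReturn(true);           // ~ checkOnReturn: validate when given back
        p.setTestWhileIdle(true);          // ~ checkWhileIdle: validate idle connections

        return new DataSource(p);
    }
}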

SORM vs MySQL idle connection

I'm using Play Framework 2.2.1, MySQL 5.5 and sorm 0.3.10
Since MySQL drops inactive connections after the configured idle timeout, I'm getting this exception in my app:
[CommunicationsException: Communications link failure The last packet successfully received from the server was 162 701 milliseconds ago. The last packet sent successfully to the server was 0 milliseconds ago.]
As far as I understand, SORM uses the c3p0 connection pool. Is it possible to configure c3p0 or SORM to ping MySQL at a given interval, or to reconnect automatically after a connection has been dropped?
0.3.13-SNAPSHOT of SORM introduces a timeout parameter for Instance with a default setting of 30. This setting determines the number of seconds the underlying connections are allowed to be idle. When the timeout is reached, a sort of "keepalive" request is sent to the db and the timer is reset. The timer is also reset whenever any normal query is made. The implementation simply relies on the idleConnectionTestPeriod setting of C3P0.
For further discussion, suggestions and reports, please visit the associated ticket on the issue tracker or open another one. If there are no complaints in the associated ticket, this change will make it into the 0.3.13 release.
It's very easy to resolve this issue with c3p0, but I'd double-check whether you are actually using it: BoneCP is the default Play 2 connection pool. It would be easy to solve this problem with BoneCP too!
In c3p0, the config params maxIdleTime, maxConnectionAge, or (much better yet) a Connection-testing regime would help; see http://www.mchange.com/projects/c3p0/#configuring_connection_testing
If you want to use c3p0 in Play 2, see https://github.com/swaldman/c3p0-play
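For illustration, setting those parameters directly on a c3p0 ComboPooledDataSource might look roughly like the sketch below. This is not taken from the SORM codebase; the JDBC URL, credentials and numeric values are placeholders.

import com.mchange.v2.c3p0.ComboPooledDataSource;

// Rough sketch of the c3p0 settings mentioned above; URL, credentials and values are placeholders.
public class C3p0IdleTestingSketch {
    public static ComboPooledDataSource buildPool() throws Exception {
        ComboPooledDataSource ds = new ComboPooledDataSource();
        ds.setDriverClass("com.mysql.jdbc.Driver");
        ds.setJdbcUrl("jdbc:mysql://localhost:3306/mydb");  // placeholder
        ds.setUser("app");                                  // placeholder
        ds.setPassword("secret");                           // placeholder

        ds.setIdleConnectionTestPeriod(30);   // test idle connections every 30 s ("keepalive")
        ds.setMaxIdleTime(240);               // or: retire connections idle for more than 4 minutes
        ds.setMaxConnectionAge(3600);         // or: retire any connection older than 1 hour
        ds.setPreferredTestQuery("SELECT 1"); // cheap test query for MySQL
        return ds;
    }
}

With idleConnectionTestPeriod in place, the pool itself keeps idle connections fresh, which is what the SORM timeout parameter described above relies on.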

Is there any downside if I never call the shutdown() method on the connection pool (BoneCP)?

I'm developing a web app that can be accessed 24/7, so there is never really a moment when I can say: "Finally, I'm not using the connection pool anymore; I'm going to shut it down."
I've read (here at SO: BoneCP correct usage) that I should use the shutdown method if I'm sure that I'm not using connections anymore, but that's not my case.
So, is there any problem if I don't shut down the pool?
The answer is to let the connection pool manage the database connections. Any decent connection pool will provide configuration options that let you customize connection retention policies, min/max pool sizes, connection testing/verification, and so on.
I looked at your link (BoneCP correct usage), and I would suggest that you configure the connection pool at the web container level as a JNDI DataSource, not within your application. Your application would then access the connection pool via JNDI (a lookup sketch follows the list). There are a number of benefits to this approach. Here are a few:
1) Your app doesn't know or care whether it's using a connection pool or a regular JDBC connection. The latter is helpful during development and testing, as startup time is faster and memory usage is smaller.
2) Your app doesn't need to know the database connection details (e.g. JDBC URL, username, and password), allowing you to use a common WAR file for all deployments.
3) Configuration and tuning of the pool can be done without rebuilding and redeploying your application.
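For illustration, looking up a container-managed DataSource from application code typically looks like the sketch below; the JNDI name jdbc/MyAppDS is a placeholder, and the real name comes from your container configuration (e.g. a <Resource> entry in Tomcat's context.xml).

import javax.naming.InitialContext;
import javax.sql.DataSource;
import java.sql.Connection;

// Sketch: look up a container-managed, pooled DataSource via JNDI.
// "java:comp/env/jdbc/MyAppDS" is a placeholder name defined in the container config.
public class JndiLookupSketch {
    public static Connection borrowConnection() throws Exception {
        InitialContext ctx = new InitialContext();
        DataSource ds = (DataSource) ctx.lookup("java:comp/env/jdbc/MyAppDS");
        return ds.getConnection();  // borrowed from the container's pool; close() returns it
    }
}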

How does Hibernate pooling work if the pool size is smaller than the number of concurrent connections?

I am using Hibernate with c3p0 as the pooling provider, and I have set its max size to only 50. I performed load testing of my application with 1000 concurrent threads accessing the database continuously, and with MySQL max_connections set to 2000. I am getting proper responses from the application, but sometimes I get a socket exception.
So, first: if my pool size is only 50, how are 1000 concurrent requests managed by Hibernate? Does it mean that 50 connections are taken from the pool and the rest are created on demand? Also, why might I be getting a socket exception such as a connection reset?
If you've set things up properly and c3p0's maxPoolSize is 50, then when 1000 clients hit the pool, 50 will get Connections initially and the rest will wait() briefly until Connections are returned by the first cohort. The pool's job, in collaboration with your application, which should hold Connections as briefly as possible, is to ensure that a limited number of Connections are efficiently shared.
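The application-side half of that contract is simply to borrow late and release early, and to bound how long a waiting client will block. A minimal sketch (the users table, the 50-connection pool size and the 5-second checkoutTimeout are illustrative placeholders):

import com.mchange.v2.c3p0.ComboPooledDataSource;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

// Sketch: hold a pooled Connection only for one unit of work, and bound how long
// a caller waits for a free Connection under load. The "users" table is a placeholder.
public class BriefCheckoutSketch {

    public static void configure(ComboPooledDataSource pool) {
        pool.setMaxPoolSize(50);
        pool.setCheckoutTimeout(5_000); // illustrative: fail after 5 s instead of waiting forever
    }

    public static int countUsers(ComboPooledDataSource pool) throws Exception {
        // try-with-resources returns the Connection to the pool as soon as the block ends
        try (Connection con = pool.getConnection();
             PreparedStatement ps = con.prepareStatement("SELECT COUNT(*) FROM users");
             ResultSet rs = ps.executeQuery()) {
            return rs.next() ? rs.getInt(1) : 0;
        }
    }
}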
If you are seeing occasional connection resets / socket exceptions, you probably ought to configure some Connection testing:
http://www.mchange.com/projects/c3p0/index.html#configuring_connection_testing
The latest prerelease version has some more direct advice about connection testing; you can download that or read the HTML source starting here:
https://github.com/swaldman/c3p0/blob/master/src/doc/index.html#L1071

Hibernate, C3P0, MySQL Connection Pooling

I recently switched from Apache DBCP connection pooling to C3P0, and going through my logs I see that there are connection timeout issues. I haven't had this in the past with DBCP and Tomcat, so I'm wondering whether it is a configuration issue or a driver issue.
Whenever I load a page after the server has been idle for a while, I'll see that some content is not sent (as the server cannot get a connection or something). When I refresh the page, all of the content is there.
Does anyone recommend using the MySQL connection pool since I'm using MySQL anyway? What are your experiences with the MySQL Connection Pool?
Walter
If the database you're working with is configured to time out connections after a certain period of inactivity, they are already closed and thus unusable when they are borrowed from the pool.
If you cannot or do not want to reconfigure your database server, you can configure C3P0 (and most other connection pools) to test the connections with a test query when they are borrowed from the pool. You can find more detailed information in the relevant section of the C3P0 documentation.
Edit: Of course you're right, it's also possible that a maximum idle time was configured in the DBCP pool, causing connections to be removed from the pool before they could time out. Anyway, using either a test query or making sure connections are removed from the pool before they time out should fix the problem.
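For example, checkout testing in C3P0 can be switched on roughly like this; the SELECT 1 test query and the idle-test period are illustrative choices, and the commented-out lines show the cheaper asynchronous alternative.

import com.mchange.v2.c3p0.ComboPooledDataSource;

// Sketch: have C3P0 validate connections so stale ones are never handed to the application.
// Values are illustrative; see the C3P0 connection-testing documentation for the trade-offs.
public class C3p0CheckoutTestingSketch {
    public static void enableTesting(ComboPooledDataSource ds) {
        ds.setPreferredTestQuery("SELECT 1");  // cheap test query for MySQL
        ds.setTestConnectionOnCheckout(true);  // most reliable, adds a small per-borrow cost
        // Cheaper alternative: test connections asynchronously while they sit idle.
        // ds.setTestConnectionOnCheckin(true);
        // ds.setIdleConnectionTestPeriod(300); // seconds between idle tests
    }
}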
Just adding a link to another connection pool: BoneCP (http://jolbox.com), a connection pool that is faster than both C3P0 and DBCP.
As with C3P0 and DBCP, make sure you configure idle connection testing to avoid the scenario you described (MySQL's wait_timeout setting is probably kicking in; it is normally set to 8 hours).
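A minimal BoneCP sketch of that idle testing follows; the JDBC URL and credentials are placeholders, and setter names can vary slightly between BoneCP versions, so check the one you are on.

import com.jolbox.bonecp.BoneCP;
import com.jolbox.bonecp.BoneCPConfig;

// Sketch: BoneCP with idle-connection testing so connections killed by MySQL's
// wait_timeout are detected and replaced. URL and credentials are placeholders.
public class BoneCpIdleTestingSketch {
    public static BoneCP buildPool() throws Exception {
        Class.forName("com.mysql.jdbc.Driver");  // make sure the driver is loaded

        BoneCPConfig config = new BoneCPConfig();
        config.setJdbcUrl("jdbc:mysql://localhost:3306/mydb");  // placeholder
        config.setUsername("app");                              // placeholder
        config.setPassword("secret");                           // placeholder

        config.setIdleConnectionTestPeriodInMinutes(5); // probe idle connections every 5 minutes
        config.setConnectionTestStatement("SELECT 1");  // cheap test statement for MySQL
        config.setIdleMaxAgeInMinutes(240);             // retire connections idle for over 4 hours

        return new BoneCP(config);
    }
}

Keeping the idle max age below MySQL's wait_timeout means the pool retires connections before the server can kill them.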