Hibernate, C3P0, MySQL Connection Pooling

I recently switched from Apache DBCP connection pooling to C3P0, and going through my logs I see connection timeout issues. I haven't had this in the past with DBCP and Tomcat, so I'm wondering whether it is a configuration issue or a driver issue.
Whenever I load a page after the server has been idle for a while, some of the content is not sent (as if the server cannot get a connection or something). When I refresh the page, all of the content is there.
Does anyone recommend using the MySQL connection pool since I'm using MySQL anyway? What are your experiences with the MySQL Connection Pool?
Walter

If the database you're working with is configured to time out connections after a certain period of inactivity, they are already closed, and thus unusable, by the time they are borrowed from the pool.
If you cannot or do not want to reconfigure your database server, you can configure C3P0 (and most other connection pools) to test the connections with a test query when they are borrowed from the pool. You can find more detailed information in the relevant section of the C3P0 documentation.
Edit: Of course you're right; it's also possible that a maximum idle time was configured in the DBCP pool, causing connections to be removed from the pool before they could time out. Either way, using a test query or making sure that connections are removed from the pool before they time out should fix the problem.
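For illustration, here is a minimal sketch of such a connection-testing setup, assuming you configure c3p0 programmatically (the driver class, URL and credentials are placeholders; the same parameters can also be set via c3p0.properties or Hibernate's hibernate.c3p0.* settings):

    import com.mchange.v2.c3p0.ComboPooledDataSource;

    public class PoolSetup {
        public static ComboPooledDataSource createPool() throws Exception {
            ComboPooledDataSource ds = new ComboPooledDataSource();
            ds.setDriverClass("com.mysql.jdbc.Driver");        // placeholder driver
            ds.setJdbcUrl("jdbc:mysql://localhost:3306/mydb"); // placeholder URL
            ds.setUser("appuser");
            ds.setPassword("secret");

            // test idle connections every 5 minutes, well below MySQL's wait_timeout
            ds.setIdleConnectionTestPeriod(300);
            // cheap test query (faster than the default DatabaseMetaData-based test)
            ds.setPreferredTestQuery("SELECT 1");
            // optionally also test on checkout for maximum reliability (adds a little latency)
            ds.setTestConnectionOnCheckout(true);
            return ds;
        }
    }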

Just adding a link to another connection pool: BoneCP (http://jolbox.com), a connection pool that is faster than both C3P0 and DBCP.
As with C3P0 and DBCP, make sure you configure idle connection testing to avoid the scenario you described (it is probably MySQL's wait_timeout setting kicking in; it is normally set to 8 hours).
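For what it's worth, a rough sketch of idle connection testing with BoneCP might look like the following (the URL and credentials are placeholders, and setter names can differ slightly between BoneCP versions):

    import com.jolbox.bonecp.BoneCPDataSource;

    public class BoneCpSetup {
        public static BoneCPDataSource createPool() {
            BoneCPDataSource ds = new BoneCPDataSource();
            ds.setJdbcUrl("jdbc:mysql://localhost:3306/mydb"); // placeholder URL
            ds.setUsername("appuser");
            ds.setPassword("secret");
            // probe idle connections well before MySQL's wait_timeout (default 8 hours)
            ds.setIdleConnectionTestPeriodInMinutes(5);
            ds.setConnectionTestStatement("SELECT 1");
            // retire connections that have sat idle for too long
            ds.setIdleMaxAgeInMinutes(240);
            return ds;
        }
    }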

Related

Broken Pipe exception on idle server

I am using a Dropwizard server to serve HTTP requests. The Dropwizard application is backed by a MySQL server for data storage, but when it is left idle (overnight) it gives a 'Broken pipe' exception.
I did a few things that I thought might help: I set the JDBC URL in the YAML file to 'autoConnect=true', I added a 'checkOnBorrow' property, and I increased the JVM heap to 4 GB.
None of these fixes worked.
Also, wait_timeout and interactive_timeout for the MySQL server are set to 8 hours.
Do these need to be higher or lower?
Also, is there a configuration property that can be set in the Dropwizard YAML file? In other words, how is connection pooling managed in Dropwizard?
The problem:
The MySQL server has a timeout configured after which it terminates connections that have been sitting idle in the connection pool. In my case this was the default (8 hours). However, the connection pool is unaware that those connections have been terminated, so when a new request comes in, a dead connection is handed out from the pool, which results in a 'Broken pipe' exception.
Solution:
To fix this, we need to get rid of dead connections and make the pool check whether the connection it is about to hand out is still alive. This can be achieved by setting the following in the .yml configuration (see the sketch after these settings):
checkOnReturn: true
checkWhileIdle: true
checkOnBorrow: true
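For context, here is a rough sketch of where such flags live in a Dropwizard configuration file. The connection details are placeholders, and the top-level key depends on your Configuration class; in the Dropwizard versions I am familiar with, the DataSourceFactory keys are spelled checkConnectionOnBorrow, checkConnectionWhileIdle and checkConnectionOnReturn, and a validationQuery is usually set alongside them:

    database:
      driverClass: com.mysql.jdbc.Driver
      url: jdbc:mysql://localhost:3306/mydb   # placeholder URL
      user: appuser
      password: secret
      # query used to validate connections so dead ones are evicted before use
      validationQuery: SELECT 1
      checkConnectionOnBorrow: true
      checkConnectionWhileIdle: true
      checkConnectionOnReturn: true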

SORM vs MySQL idle connection

I'm using Play Framework 2.2.1, MySQL 5.5 and sorm 0.3.10
Since MySQL drops inactive connections after the configured idle timeout, I'm getting this exception in my app:
[CommunicationsException: Communications link failure The last packet successfully received from the server was 162 701 milliseconds ago. The last packet sent successfully to the server was 0 milliseconds ago.]
As far as I understand, SORM uses the c3p0 connection pool. Is it possible to configure c3p0 or SORM to ping MySQL at a specified interval, or to reconnect automatically after a connection has been dropped?
Version 0.3.13-SNAPSHOT of SORM introduces a timeout parameter for Instance, with a default of 30. This setting determines the number of seconds the underlying connections are allowed to stay idle; when the timeout is reached, a sort of "keepalive" request is sent to the db and the timer is reset. The timer also resets whenever a normal query is made. The implementation simply relies on C3P0's idleConnectionTestPeriod.
For further discussion, suggestions and reports, please visit the associated ticket on the issue tracker or open another one. If there are no complaints in the associated ticket, this change will make it into the 0.3.13 release.
It's very easy to resolve this issue with c3p0, but I'd double-check whether you are actually using it: BoneCP is the default Play 2 connection pool. It would be easy to solve this problem with BoneCP too!
In c3p0, the config params maxIdleTime, maxConnectionAge, or (much better yet) a Connection-testing regime would help. See http://www.mchange.com/projects/c3p0/#configuring_connection_testing
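As a rough sketch, assuming c3p0 picks up a c3p0.properties file from the classpath (the values are placeholders chosen to stay well below MySQL's wait_timeout), the parameters mentioned above would look like:

    # discard connections that have been idle for more than 4 minutes
    c3p0.maxIdleTime=240
    # retire every connection after an hour, regardless of activity
    c3p0.maxConnectionAge=3600
    # or, better, test idle connections every 5 minutes with a cheap query
    c3p0.idleConnectionTestPeriod=300
    c3p0.preferredTestQuery=SELECT 1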
If you want to use c3p0 in Play 2, see https://github.com/swaldman/c3p0-play

Node.JS + MySQL connection pooling required?

With the node-mysql module, there are two connection options: a single connection and a connection pool. What is the best way to connect to a MySQL database: using a single global connection for all requests, or creating a pool of connections and taking one from the pool for each request? Or is there a better way to do this? Will I run into problems using just a single shared connection for all requests?
Maintaining a single connection for the whole app might be a little bit tricky.
Normally, you want to open a connection to your MySQL instance and wait for it to be established.
From this point you can start using the database (maybe start a HTTP(S) server, process the requests and query the database as needed.)
The problem is when the connection gets destroyed (ex. due to a network error).
Since you're using one connection for the whole application, you must reconnect to MySQL and somehow queue all queries while the connection is being re-established. It's relatively hard to implement such functionality properly.
node-mysql has a built-in pooler. A pooler creates a few connections and keeps them in a pool. Whenever you want to close a connection obtained from the pool, the pooler returns it to the pool instead of actually closing it, so connections in the pool can be reused by subsequent open calls.
IMO, using a connection pool is clearly simpler and shouldn't affect performance much.

How does Hibernate pooling work if the pool size is less than the number of concurrent connections?

I am using Hibernate with c3p0 as the pooling provider, and I have set its max size to only 50. I performed load testing of my application with 1000 concurrent threads accessing the database continuously, and with MySQL's max_connections set to 2000. I am getting proper responses from the application, but sometimes I get a socket exception.
So, first: if my pool size is only 50, how are 1000 connections managed by Hibernate? Does it mean that 50 connections are taken from the pool and the rest are created on demand? Also, why might I be getting a socket exception such as a connection reset?
if you've set things up properly and c3p0's maxPoolSize is 50, then if 1000 clients hit the pool, 50 will get Connections initially and the rest will wait() briefly until Connections are returned by the first cohort. the pool's job, in collaboration with your application which should hold Connections as briefly as possible, is to ensure that a limited number of Connections are efficiently shared.
if you are seeing occasional connection reset / socket exception, you probably ought to configure some Connection testing:
http://www.mchange.com/projects/c3p0/index.html#configuring_connection_testing
the latest prerelease version has some more direct advice about connection testing; you can download that or read the html source starting here:
https://github.com/swaldman/c3p0/blob/master/src/doc/index.html#L1071
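To make the numbers above concrete, here is a minimal sketch of the relevant settings, assuming c3p0 is configured through Hibernate's hibernate.c3p0.* properties (the values are placeholders):

    # at most 50 pooled connections; the remaining clients wait for a free one
    hibernate.c3p0.max_size=50
    hibernate.c3p0.min_size=5
    # seconds a connection may sit idle in the pool before being discarded
    hibernate.c3p0.timeout=300
    # test idle connections every 120 seconds so broken ones are evicted
    hibernate.c3p0.idle_test_period=120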

Does the time_out setting in MySQL affect a JDBC connection pool?

It's usually set to 60 seconds.
But I think the connection pool will hold on to the connection, so is there any conflict? Thank you.
Here is the answer from the GlassFish wiki:
idle-timeout-in-seconds
Maximum time, in seconds, that a connection can remain idle in the pool. After this time, the pool implementation can close this connection. Note that this does not control connection timeouts enforced on the database server side. Administrators are advised to keep this timeout shorter than the EIS connection timeout (if such timeouts are configured on the specific EIS), to prevent the accumulation of unusable connections in the Application Server.