I'm using the Tomcat connection pool via JNDI resources.
To prevent the connection from being lost after a long period of inactivity (more than 8 hours, the default value of the MySQL wait_timeout variable), I have put validationQuery and testOnBorrow in context.xml.
My context.xml is:
<Resource name="jdbc/mydb" auth="Container" type="javax.sql.DataSource"
removeAbandoned="true" removeAbandonedTimeout="60"
maxActive="30" maxIdle="30" maxWait="10000"
username="myuser" password="mypwd" driverClassName="com.mysql.jdbc.Driver"
url="jdbc:mysql://localhost:3306/mydb?useEncoding=true&characterEncoding=UTF-8"
factory="org.apache.tomcat.jdbc.pool.DataSourceFactory" closeMethod="close"
validationQuery="select 1" testOnBorrow="true" />
It works, but if I use an SSL connection, it no longer works.
I obtain:
com.mysql.jdbc.exceptions.jdbc4.MySQLNonTransientConnectionException: No
operations allowed after connection closed.
...
Caused by: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: The last packet
successfully received from the server was 328,606,914 milliseconds ago. The last
packet sent successfully to the server was 328,606,914 milliseconds ago. is longer
than the server configured value of 'wait_timeout'. You should consider either
expiring and/or testing connection validity before use in your application,
increasing the server configured values for client timeouts, or using the
Connector/J connection property 'autoReconnect=true' to avoid this problem.
I would rather not use autoReconnect=true, since it is not recommended by the MySQL team itself.
What could be the issue? Why is there this difference between SSL and non-SSL?
EDIT
It seems to work after putting ssl=true in the query string of the database connection URL:
url="jdbc:mysql://localhost:3306/mydb?useEncoding=true&amp;characterEncoding=UTF-8&amp;ssl=true"
Related
My application (running on a Tomcat server) uses the Atomikos connection pool to connect to a MySQL database. Everything works fine, except that the connection gets shut down if the application server is left unused for some hours. Below is the error message I get when I operate the application server again after this happens:
Aug 15, 2013 9:58:28 AM RusticiSoftware.ScormContentPlayer.Util.Logger LogInfo
INFO: Parsing metadata
Aug 15, 2013 9:58:28 AM RusticiSoftware.ScormContentPlayer.DataHelp.JdbcDataHelper ExecuteReturnDbRows
INFO: ExecuteReturnDbRows: failed - The last packet successfully received from the server was 59,735,409 milliseconds ago. The last packet sent successfully to the server was 59,735,409 milliseconds ago. is longer than the server configured value of 'wait_timeout'. You should consider either expiring and/or testing connection validity before use in your application, increasing the server configured values for client timeouts, or using the Connector/J connection property 'autoReconnect=true' to avoid this problem.
com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: The last packet successfully received from the server was 59,735,409 milliseconds ago. The last packet sent successfully to the server was 59,735,409 milliseconds ago. is longer than the server configured value of 'wait_timeout'. You should consider either expiring and/or testing connection validity before use in your application, increasing the server configured values for client timeouts, or using the Connector/J connection property 'autoReconnect=true' to avoid this problem.
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:411)
at com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:1121)
at com.mysql.jdbc.MysqlIO.send(MysqlIO.java:3871)
at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2484)
at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2664)
at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2815)
at com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:2155)
at com.mysql.jdbc.PreparedStatement.execute(PreparedStatement.java:1379)
at RusticiSoftware.ScormContentPlayer.DataHelp.JdbcDataHelper.ExecuteReturnDbRows(JdbcDataHelper.java:453)
..................................
.................................
.................................
Caused by: java.net.SocketException: Broken pipe
at java.net.SocketOutputStream.socketWrite0(Native Method)
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:92)
at java.net.SocketOutputStream.write(SocketOutputStream.java:136)
at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:65)
at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:123)
at com.mysql.jdbc.MysqlIO.send(MysqlIO.java:3852)
... 57 more
I do set autoReconnect to true in my JNDI parameters, but it doesn't seem to work.
<Resource name="jdbc/ScormEngineDB" auth="Container"
type="javax.sql.DataSource" driverClassName="com.mysql.jdbc.Driver"
url="jdbc:mysql://localhost:3306/wgea_scorm?charset=utf8&useUnicode=true&characterEncoding=utf-8&autoReconnect=true"
factory="org.apache.tomcat.jdbc.pool.DataSourceFactory"
username="username" password="password" maxActive="20" maxIdle="10" pinGlobalTxToPhysicalConnection="true" testQuery="select 1"
maxWait="-1" />
I also enabled logging on the MySQL side and found out that the test query (select 1) was never actually sent to MySQL, because the connection was already closed. For now I have to restart the application server every morning when the problem happens.
Any ideas about this?
Thanks.
Finally I found out that the Tomcat connection pool is used rather than the Atomikos connection pool, so Tomcat connection pool parameters should be used in the JNDI configuration. It should look like this (the attributes from testOnBorrow onward are the Tomcat connection pool parameters; an XML comment cannot appear inside the tag itself):
<Resource name="jdbc/ScormEngineDB" auth="Container"
type="javax.sql.DataSource" driverClassName="com.mysql.jdbc.Driver"
url="jdbc:mysql://localhost:3306/wgea_scorm?charset=utf8&useUnicode=true&characterEncoding=utf-8&autoReconnect=true"
factory="org.apache.tomcat.jdbc.pool.DataSourceFactory"
username="username" password="password" maxActive="20" maxIdle="10" autoReconnectForConnectionPools="true"
autoReconnectForPools="true" pinGlobalTxToPhysicalConnection="true"
<!-- below are Tomcat connection pool parameters-->
testOnBorrow="true" logValidationErrors="true" validationQuery="select 1" testWhileIdle="true"
testOnConnect="true" validationInterval="3000000" maxWait="-1" />
The validationInterval parameter can be set to a value shorter than the database connection timeout so that connections are kept alive.
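For what it's worth, the same validation settings can also be expressed programmatically through the Tomcat jdbc-pool API; here is a sketch mirroring the values above (illustrative only, not a drop-in replacement for the JNDI resource):

import org.apache.tomcat.jdbc.pool.DataSource;
import org.apache.tomcat.jdbc.pool.PoolProperties;

public class PoolConfig {
    static DataSource buildPool() {
        PoolProperties p = new PoolProperties();
        p.setUrl("jdbc:mysql://localhost:3306/wgea_scorm?useUnicode=true&characterEncoding=utf-8");
        p.setDriverClassName("com.mysql.jdbc.Driver");
        p.setUsername("username");
        p.setPassword("password");
        // Validate connections when borrowed and while idle.
        p.setTestOnBorrow(true);
        p.setTestWhileIdle(true);
        p.setValidationQuery("select 1");
        // Re-validate at most every 50 minutes, well under MySQL's
        // default 8-hour wait_timeout, so idle connections stay alive.
        p.setValidationInterval(3000000);
        DataSource ds = new DataSource();
        ds.setPoolProperties(p);
        return ds;
    }
}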
Regarding the autoReconnect parameter, many say it is not recommended, so it can be removed from the above JNDI configuration. Refer to http://tomcat.10.x6.nabble.com/connection-autoReconnect-td4340944.html for more information.
I've come across similar questions many times here on Stack Overflow; however, I haven't been able to get it right.
I'm trying to use database connection pooling in Tomcat 6 + MySQL (on AWS RDS).
These are the parameters I have configured.
(I'm also closing the connection from my Java code.)
<Resource name="jdbc/awsDB" auth="Container" type="javax.sql.DataSource"
factory="org.apache.tomcat.jdbc.pool.DataSourceFactory"
initialSize="10"
testWhileIdle="true"
maxActive="30"
maxAge="3600"
maxIdle="5"
maxWait="3000"
removeAbandoned="true"
logAbandoned="false"
validationQuery="SELECT 1"
removeAbandonedTimeout="60"
timeBetweenEvictionRunsMillis="15000"
username="sbose78" password="XXXXX"
driverClassName="com.mysql.jdbc.Driver"
url="jdbc:mysql://xxxxrds.amazonaws.com:3306/health?autoReconnect=true" />
The application works great for the first few database queries, and after that the database connection hangs indefinitely.
From the connection logs I've noticed that the connection pooling works fine; however, even when there are 10+ connections in the SLEEP state, a new database query takes an indefinite amount of time to execute.
I end up re-starting the server in most cases.
What configuration mistake am I making?
Thanks a lot in advance!
Check whether your query is using the indexed columns in the WHERE clause. If you are sure about the connection pool, then most probably the problem is with the query itself.
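If you want to verify the index usage from code, here is a quick sketch using MySQL's EXPLAIN (the table, column, and query are hypothetical stand-ins for your own, and dataSource is assumed to be the pooled DataSource looked up via JNDI):

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import javax.sql.DataSource;

public class ExplainCheck {
    static void explain(DataSource dataSource) throws Exception {
        try (Connection c = dataSource.getConnection();
             Statement st = c.createStatement();
             // Hypothetical query; substitute the one that hangs.
             ResultSet rs = st.executeQuery(
                     "EXPLAIN SELECT * FROM patient WHERE patient_id = 42")) {
            while (rs.next()) {
                // The 'key' column names the index MySQL will use (NULL means a full scan).
                System.out.println(rs.getString("key"));
            }
        }
    }
}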
I have a Grails application running under Tomcat 7 and am having problems managing MySQL connections.
The problem is that each new request to the application (i.e., a page load) is creating a new MySQL connection, and these connections are not closing. Instead, they stay in a SLEEP state until the MySQL server finally refuses to accept more connections. Therefore, simply by reloading individual pages on the site at a fast rate, one can create numerous database connections.
So it seems that my connection pool isn't closing the connections with MySQL fast enough. There are a number of configuration settings for the connection pool, but I'm not sure what needs to be tuned to avoid this problem.
Here is the configuration from my context.xml file:
<Resource name="jdbc/Production" auth="Container" type="javax.sql.DataSource"
maxActive="100"
maxIdle="30"
maxWait="10000"
minEvictableIdleTimeMillis="1800000"
timeBetweenEvictionRunsMillis="1800000"
numTestsPerEvictionRun="3"
removeAbandoned="true"
removeAbandonedTimeout="60"
logAbandoned="true"
testOnBorrow="true"
testWhileIdle="true"
testOnReturn="true"
validationQuery="SELECT 1"
username=""
password=""
driverClassName="com.mysql.jdbc.Driver"
url="jdbc:mysql://localhost/Production"
/>
Thanks very much for any suggestions.
You did not define a connection pool.
Add the following to the Resource in context.xml (it seems to be a JNDI data source):
pooled="true"
factory="org.apache.tomcat.jdbc.pool.DataSourceFactory"
If you haven't put JDBC Pool into your dependency configuration, add the following to the plugins closure in BuildConfig.groovy:
compile ":jdbc-pool:1.0.9.3"
You may use other connection pools, but JDBC Pool would be my recommendation.
We just migrated from DBCP to the Tomcat JDBC connection pool.
We tried the system under load and received the following exception:
java.sql.SQLException: [IA1856] Timeout: Pool empty. Unable to fetch a connection in 1 seconds, none available[size:125; busy:90; idle:0; lastwait:1000].
at org.apache.tomcat.jdbc.pool.ConnectionPool.borrowConnection(ConnectionPool.java:632)
at org.apache.tomcat.jdbc.pool.ConnectionPool.getConnection(ConnectionPool.java:174)
at org.apache.tomcat.jdbc.pool.DataSourceProxy.getConnection(DataSourceProxy.java:124)
at com.inneractive.model.mappings.BasicPersistenceEntityMapping.getConnection(BasicPersistenceEntityMapping.java:233)
at com.inneractive.model.mappings.BasicPersistenceEntityMapping.callWithConnection(BasicPersistenceEntityMapping.java:243)
at com.inneractive.model.mappings.PersistenceEntityMapping.get(PersistenceEntityMapping.java:194)
at com.inneractive.model.data.client.ClientUtils.GetClientByExamples(ClientUtils.java:353)
at com.inneractive.client.ExternalAdRingsClientStart.getClientInfoByRequestParametersOrInsert(ExternalAdRingsClientStart.java:1329)
at com.inneractive.client.ExternalAdRingsClientStart.newClientSession(ExternalAdRingsClientStart.java:245)
at com.inneractive.simpleM2M.web.SimpleM2MProtocolBean.generateCampaign(SimpleM2MProtocolBean.java:235)
at com.inneractive.simpleM2M.web.SimpleM2MProtocolBean.generateCampaign(SimpleM2MProtocolBean.java:219)
at com.inneractive.simpleM2M.web.AdsServlet.doGet(AdsServlet.java:175)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:617)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:555)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:859)
at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588)
at org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:396)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
Notice this:
[size:125; busy:90; idle:0; lastwait:1000]
Where are the connections that are not busy?
The busy number kept going down after this, but we still didn't manage to get any connections.
Any ideas?
Configuration:
<Resource auth="Container" driverClassName="com.mysql.jdbc.Driver"
factory="org.apache.tomcat.jdbc.pool.DataSourceFactory" loginTimeout="10000"
maxActive="35" maxIdle="35" maxWait="1000" name="jdbc/mysql"
password="-----" testOnBorrow="true" testOnReturn="false" type="javax.sql.DataSource"
url="jdbc:mysql://localhost:3306/my_db?elideSetAutoCommits=true&useDynamicCharsetInfo=false&rewriteBatchedStatements=true&useLocalSessionState=true&useLocalTransactionState=true&alwaysSendSetIsolation=false&cacheServerConfiguration=true&noAccessToProcedureBodies=true&useUnicode=true&characterEncoding=UTF-8"
username="root" validationQuery="SELECT 1"/>
Environment: Ubuntu and Tomcat 6; DB: MySQL.
Taking a look at the source of ConnectionPool.java, you seem to hit this code snippet in the borrowConnection() method:
//we didn't get a connection, lets see if we timed out
if (con == null) {
    if ((System.currentTimeMillis() - now) >= maxWait) {
        throw new SQLException("[" + Thread.currentThread().getName()+"] " +
                "Timeout: Pool empty. Unable to fetch a connection in " + (maxWait / 1000) +
                " seconds, none available["+busy.size()+" in use].");
    } else {
        //no timeout, lets try again
        continue;
    }
}
So according to this, your connection is null.
The value of con is retrieved on this line:
PooledConnection con = idle.poll();
If you trace the code, you will see that idle is (depending on your configuration, but by default) a FairBlockingQueue. You may check out its implementation for hints.
In general, you always have to close ResultSets, Statements, and Connections, and used connections should be correctly released back to the pool.
Failing to do so may result in connections never being closed, and therefore never becoming available again for reuse (connection pool "leaks").
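For example, a minimal pattern that guarantees the release (assuming Java 7+ try-with-resources; ds stands in for your pooled DataSource):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import javax.sql.DataSource;

public class SafeQuery {
    static void query(DataSource ds) throws Exception {
        // close() on a pooled connection returns it to the pool;
        // try-with-resources calls it even if the query throws.
        try (Connection con = ds.getConnection();
             PreparedStatement ps = con.prepareStatement("SELECT 1");
             ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                // process the row
            }
        }
    }
}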
I suggest you add some detailed logging of the pool's state and monitor it to isolate the problem.
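If you have direct access to the org.apache.tomcat.jdbc.pool.DataSource, it already exposes the counters that appear in the exception message; a sketch of such logging:

import org.apache.tomcat.jdbc.pool.DataSource;

public class PoolMonitor {
    static void logPoolState(DataSource ds) {
        // Same counters as in the exception: [size:...; busy:...; idle:...].
        System.out.printf("size=%d busy=%d idle=%d waiting=%d%n",
                ds.getSize(), ds.getActive(), ds.getIdle(), ds.getWaitCount());
    }
}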
Some guidelines from Apache for preventing database connection pool leaks:
removeAbandoned="true"
abandoned database connections are removed and recycled
removeAbandonedTimeout="60"
set the number of seconds a database connection has been idle before it is considered abandoned
logAbandoned="true"
log a stack trace of the code which abandoned the database connection resources. Keep in mind that "logging of abandoned Connections adds overhead for every Connection borrow because a stack trace has to be generated."
I still think slightly increasing the maxWait value (1200, 1500, 1700; just experiment, there will be no difference in response times from the user's perspective) will clear those rare cases in which you still have problems.
"where are the connections that are not busy ?"
It sounds like they've been dropped and for some reason your connection pool isn't trying to reconnect them.
Add this to the URL that you're connecting to:
autoReconnect=true
And adding this as a property to the resource should cause dead connections to be reconnected automatically:
validationQuery="SELECT 1"
Also, this should allow you to see connections being dropped:
logAbandoned="true"
There are multiple similar questions on Stack Overflow:
Tomcat connection pooling,idle connections,and connection creation
JDBC Connection pool not reopening Connections in tomcat
However, it may also be that you're not releasing the connections completely, which is the cause of them dying:
JDBC MySql connection pooling practices to avoid exhausted connection pool
There seems to be a bug in the pool: the size variable is incremented, then the pool tries to create a connection, but if creation fails we are left with a large size value and no actual connections in the pool. Terrible:
//if we get here, see if we need to create one
//this is not 100% accurate since it doesn't use a shared
//atomic variable - a connection can become idle while we are creating
//a new connection
if (size.get() < getPoolProperties().getMaxActive()) {
    //atomic duplicate check
    if (size.addAndGet(1) > getPoolProperties().getMaxActive()) {
        //if we got here, two threads passed through the first if
        size.decrementAndGet();
    } else {
        //create a connection, we're below the limit
        return createConnection(now, con, username, password);
    }
} //end if
I'm using Tomcat 7 and the Tomcat JDBC connection pool to dish out MySQL connections.
During the night we don't have any activity, so all connections stay idle for longer than 8 hours and are dropped by MySQL (MySQL's default wait_timeout).
We use the following pool configuration:
<Resource name="jdbc/dbName"
auth="Container"
factory="org.apache.tomcat.jdbc.pool.DataSourceFactory"
type="javax.sql.DataSource"
maxActive="50"
maxIdle="30"
maxWait="5000"
driverClassName="com.mysql.jdbc.Driver"
validationQuery="SELECT 1"
testOnBorrow="true"
testWhileIdle="true"
timeBetweenEvictionRunsMillis="10000"
removeAbandoned="true"
removeAbandonedTimeout="60"
logAbandoned="true"
username="xxx"
password="xxx"
url="jdbc:mysql://host:3306/xxx"/>
I was expecting the eviction policy to remove idle connections well before they ever get closed by MySQL. Somehow, after one day, we get the following exception:
com.mysql.jdbc.exceptions.jdbc4.MySQLNonTransientConnectionException: No operations allowed after connection closed. Connection was implicitly closed by the driver.
I guess this is something the JDBC connection pool should be able to fix, but there are many configuration properties and I haven't used this pool before. Does anybody have a good set of properties to configure the pool so it doesn't dish out closed connections?
Kind regards,
Albert
Solved it. It turned out it wasn't a pooling problem after all. We were using Squeryl and Lift together, which isn't a happy combination (just yet). Connections got closed before being returned to the pool.
Ditching Lift's DB connection management in favor of Squeryl's solved it.