My application (running on a Tomcat server) uses the Atomikos connection pool to connect to a MySQL database. Everything works fine, except that the connection gets shut down if the application server is left unused for some hours. Below is the error message I get when I use the application server again after this happens:
:58:28 AM RusticiSoftware.ScormContentPlayer.Util.Logger LogInfo
INFO: Parsing metadata
Aug 15, 2013 9:58:28 AM RusticiSoftware.ScormContentPlayer.DataHelp.JdbcDataHelper ExecuteReturnDbRows
INFO: ExecuteReturnDbRows: failed - The last packet successfully received from the server was 59,735,409 milliseconds ago. The last packet sent successfully to the server was 59,735,409 milliseconds ago. is longer than the server configured value of 'wait_timeout'. You should consider either expiring and/or testing connection validity before use in your application, increasing the server configured values for client timeouts, or using the Connector/J connection property 'autoReconnect=true' to avoid this problem.
com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: The last packet successfully received from the server was 59,735,409 milliseconds ago. The last packet sent successfully to the server was 59,735,409 milliseconds ago. is longer than the server configured value of 'wait_timeout'. You should consider either expiring and/or testing connection validity before use in your application, increasing the server configured values for client timeouts, or using the Connector/J connection property 'autoReconnect=true' to avoid this problem.
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:411)
at com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:1121)
at com.mysql.jdbc.MysqlIO.send(MysqlIO.java:3871)
at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2484)
at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2664)
at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2815)
at com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:2155)
at com.mysql.jdbc.PreparedStatement.execute(PreparedStatement.java:1379)
at RusticiSoftware.ScormContentPlayer.DataHelp.JdbcDataHelper.ExecuteReturnDbRows(JdbcDataHelper.java:453)
..................................
.................................
.................................
Caused by: java.net.SocketException: Broken pipe
at java.net.SocketOutputStream.socketWrite0(Native Method)
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:92)
at java.net.SocketOutputStream.write(SocketOutputStream.java:136)
at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:65)
at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:123)
at com.mysql.jdbc.MysqlIO.send(MysqlIO.java:3852)
... 57 more
I did set autoReconnect to true in my JNDI parameters, but it doesn't seem to work.
<Resource name="jdbc/ScormEngineDB" auth="Container"
type="javax.sql.DataSource" driverClassName="com.mysql.jdbc.Driver"
url="jdbc:mysql://localhost:3306/wgea_scorm?charset=utf8&useUnicode=true&characterEncoding=utf-8&autoReconnect=true"
factory="org.apache.tomcat.jdbc.pool.DataSourceFactory"
username="username" password="password" maxActive="20" maxIdle="10" pinGlobalTxToPhysicalConnection="true" testQuery="select 1"
maxWait="-1" />
I also enabled logging on the MySQL side and found out that the test query (select 1) was actually not sent to MySQL because the connection is already closed. For now I have to restart the application server every morning when the problem happens.
Any ideas about this?
Thanks
Finally I found out that the Tomcat connection pool is being used rather than the Atomikos connection pool, so Tomcat connection pool parameters should be used in the JNDI configuration. It should look like this:
<Resource name="jdbc/ScormEngineDB" auth="Container"
type="javax.sql.DataSource" driverClassName="com.mysql.jdbc.Driver"
url="jdbc:mysql://localhost:3306/wgea_scorm?charset=utf8&useUnicode=true&characterEncoding=utf-8&autoReconnect=true"
factory="org.apache.tomcat.jdbc.pool.DataSourceFactory"
username="username" password="password" maxActive="20" maxIdle="10" autoReconnectForConnectionPools="true"
autoReconnectForPools="true" pinGlobalTxToPhysicalConnection="true"
<!-- below are Tomcat connection pool parameters-->
testOnBorrow="true" logValidationErrors="true" validationQuery="select 1" testWhileIdle="true"
testOnConnect="true" validationInterval="3000000" maxWait="-1" />
The validationInterval parameter can be set to a value shorter than the database connection timeout (wait_timeout) so that idle connections are validated in time and kept alive.
Regarding the autoReconnect parameter, many say that it is not recommended, so it can be removed from the above JNDI configuration. Refer to http://tomcat.10.x6.nabble.com/connection-autoReconnect-td4340944.html for more information.
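For reference, roughly the same validation behaviour can be configured programmatically with the Tomcat JDBC pool's PoolProperties API. The sketch below is only an illustration of the settings discussed above; the URL, credentials and class name are placeholders, and it assumes MySQL's default 8-hour wait_timeout:

import org.apache.tomcat.jdbc.pool.DataSource;
import org.apache.tomcat.jdbc.pool.PoolProperties;

public class ScormPoolConfig {
    public static DataSource createDataSource() {
        PoolProperties p = new PoolProperties();
        // Placeholder connection details; replace with your own.
        p.setUrl("jdbc:mysql://localhost:3306/wgea_scorm?useUnicode=true&characterEncoding=utf-8");
        p.setDriverClassName("com.mysql.jdbc.Driver");
        p.setUsername("username");
        p.setPassword("password");

        p.setMaxActive(20);
        p.setMaxIdle(10);
        p.setMaxWait(-1);

        // Validate connections before handing them out and while they sit idle,
        // so stale connections are detected before MySQL's wait_timeout kills them.
        p.setTestOnBorrow(true);
        p.setTestWhileIdle(true);
        p.setValidationQuery("SELECT 1");
        // Re-validate a connection at most every 50 minutes; keep this well below
        // wait_timeout (8 hours by default) so idle connections are exercised in time.
        p.setValidationInterval(3000000);

        DataSource ds = new DataSource();
        ds.setPoolProperties(p);
        return ds;
    }
}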
We have users created under a secondary user store (JDBC user store), and an application called MyApplication created in the API Store. When users try to log in to MyApplication by invoking the /token API provided by WSO2, even with the correct username (in the format TESTDOMAIN/testuser) and password, the login sometimes fails with a 400 Bad Request response:
{
"error_description": "Error when handling event : PRE_AUTHENTICATION",
"error": "invalid_grant"
}
In the IDM Audit.log, the error looks like this:
WARN {AUDIT_LOG}- Initiator=wso2.system.user Action=Authentication Target=TESTDOMAIN/testuser Data=null Outcome=Failure Error={"Error Message":"Un-expected error while pre-authenticating, Error when handling event : PRE_AUTHENTICATION","Error Code":"31002"}
After about 5 login attempts, the user is able to log in successfully without any problem.
I have no clue why this login failure happens randomly.
Please provide your solutions/ideas regarding this issue.
UPDATED:
After enabling user core debug logs and some other logs that seemed relevant to this issue, I could see the following in wso2carbon.log during an authentication failure:
DEBUG {org.wso2.carbon.user.core.jdbc.JDBCUserStoreManager} - Error occurred while checking existence of values.
com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: The last packet successfully received from the server was 733,140 milliseconds ago. The last packet sent successfully to the server was 733,140 milliseconds ago. is longer than the server configured value of 'wait_timeout'. You should consider either expiring and/or testing connection validity before use in your application, increasing the server configured values for client timeouts, or using the Connector/J connection property 'autoReconnect=true' to avoid this problem.
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
Caused by: java.net.SocketException: Connection reset
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:115)
... 113 more
DEBUG {org.wso2.carbon.identity.oauth2.token.AccessTokenIssuer} - Error occurred while validating grant
org.wso2.carbon.identity.oauth2.IdentityOAuth2Exception: Error when handling event : PRE_AUTHENTICATION
Caused by: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: The last packet successfully received from the server was 733,140 milliseconds ago. The last packet sent successfully to the server was 733,140 milliseconds ago. is longer than the server configured value of 'wait_timeout'. You should consider either expiring and/or testing connection validity before use in your application, increasing the server configured values for client timeouts, or using the Connector/J connection property 'autoReconnect=true' to avoid this problem.
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
As stated by @senthalan in the comments, try adding autoReconnect=true to the end of the connection URL.
Additionally, please verify that you have the following recommended values in the connection configuration for your MySQL datasources in master-datasources.xml (as described in [1]):
<definition type="RDBMS">
<configuration>
<url>jdbc:mysql://localhost:3306/umdb?autoReconnect=true</url>
<username>regadmin</username>
<password>regadmin</password>
<driverClassName>com.mysql.jdbc.Driver</driverClassName>
<maxActive>80</maxActive>
<maxWait>60000</maxWait>
<minIdle>5</minIdle>
<testOnBorrow>true</testOnBorrow>
<validationQuery>SELECT 1</validationQuery>
<validationInterval>30000</validationInterval>
<defaultAutoCommit>false</defaultAutoCommit>
</configuration>
</definition>
Also, you can increase max_connections on the DB side as described in [2]:
mysql> SET GLOBAL max_connections = 500;
Query OK, 0 rows affected (0.00 sec)
[1] https://docs.wso2.com/display/ADMIN44x/Changing+to+MySQL
[2] https://stackoverflow.com/a/19991390/2910841
The website is running on the client's server, and we got the following exception:
com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: The last packet successfully received from the server was 45915 seconds ago.The last packet sent successfully to the server was 45915 seconds ago, which is longer than the server configured value of 'wait_timeout'. You should consider either expiring and/or testing connection validity before use in your application, increasing the server configured values for client timeouts, or using the Connector/J connection property 'autoReconnect=true' to avoid this problem.
at sun.reflect.GeneratedConstructorAccessor268.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:406)
at com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:1074)
at com.mysql.jdbc.MysqlIO.send(MysqlIO.java:3246)
at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:1917)
at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2060)
at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2542)
at com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:1734)
at com.mysql.jdbc.PreparedStatement.executeQuery(PreparedStatement.java:1885)
at org.hibernate.jdbc.AbstractBatcher.getResultSet(AbstractBatcher.java:186)
at org.hibernate.loader.Loader.getResultSet(Loader.java:1787)
at org.hibernate.loader.Loader.doQuery(Loader.java:674)
at org.hibernate.loader.Loader.doQueryAndInitializeNonLazyCollections(Loader.java:236)
at org.hibernate.loader.Loader.doList(Loader.java:2220)
at org.hibernate.loader.Loader.listIgnoreQueryCache(Loader.java:2104)
at org.hibernate.loader.Loader.list(Loader.java:2099)
at org.hibernate.loader.criteria.CriteriaLoader.list(CriteriaLoader.java:94)
at org.hibernate.impl.SessionImpl.list(SessionImpl.java:1569)
at org.hibernate.impl.CriteriaImpl.list(CriteriaImpl.java:283)
at org.hibernate.impl.CriteriaImpl.uniqueResult(CriteriaImpl.java:305)
.........
My question is: how does the JDBC driver determine the value of 45915 seconds? Is there a variable stored in the MySQL server for connections to a particular database? How can I get details about the last established connection to MySQL and other connection-related information?
I'm using the Tomcat connection pool via JNDI resources.
To avoid the connection being lost after long inactivity (more than 8 hours, which is the default value of the MySQL wait_timeout variable), I have put validationQuery and testOnBorrow in context.xml.
My context.xml is:
<Resource name="jdbc/mydb" auth="Container" type="javax.sql.DataSource"
removeAbandoned="true" removeAbandonedTimeout="60"
maxActive="30" maxIdle="30" maxWait="10000"
username="myuser" password="mypwd" driverClassName="com.mysql.jdbc.Driver"
url="jdbc:mysql://localhost:3306/mydb?useEncoding=true&characterEncoding=UTF-8"
factory="org.apache.tomcat.jdbc.pool.DataSourceFactory" closeMethod="close"
validationQuery="select 1" testOnBorrow="true" />
It works, but as soon as I use an SSL connection, it doesn't work anymore.
I obtain:
com.mysql.jdbc.exceptions.jdbc4.MySQLNonTransientConnectionException: No
operations allowed after connection closed.
...
Caused by: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: The last packet
successfully received from the server was 328,606,914 milliseconds ago. The last
packet sent successfully to the server was 328,606,914 milliseconds ago. is longer
than the server configured value of 'wait_timeout'. You should consider either
expiring and/or testing connection validity before use in your application,
increasing the server configured values for client timeouts, or using the
Connector/J connection property 'autoReconnect=true' to avoid this problem.
I don't want to use autoReconnect=true, because it is not recommended by the MySQL team itself.
What could be the issue? Why this difference between SSL and non-SSL?
EDIT
It seems to work after putting ssl=true in the query string of the connection URL:
url="jdbc:mysql://localhost:3306/mydb?useEncoding=true&characterEncoding=UTF-8&ssl=true"
We just migrated from DBCP to Tomcat JDBC connection pooling.
We tested the system under load and received the following exception:
java.sql.SQLException: [IA1856] Timeout: Pool empty. Unable to fetch a connection in 1 seconds, none available[size:125; busy:90; idle:0; lastwait:1000].
at org.apache.tomcat.jdbc.pool.ConnectionPool.borrowConnection(ConnectionPool.java:632)
at org.apache.tomcat.jdbc.pool.ConnectionPool.getConnection(ConnectionPool.java:174)
at org.apache.tomcat.jdbc.pool.DataSourceProxy.getConnection(DataSourceProxy.java:124)
at com.inneractive.model.mappings.BasicPersistenceEntityMapping.getConnection(BasicPersistenceEntityMapping.java:233)
at com.inneractive.model.mappings.BasicPersistenceEntityMapping.callWithConnection(BasicPersistenceEntityMapping.java:243)
at com.inneractive.model.mappings.PersistenceEntityMapping.get(PersistenceEntityMapping.java:194)
at com.inneractive.model.data.client.ClientUtils.GetClientByExamples(ClientUtils.java:353)
at com.inneractive.client.ExternalAdRingsClientStart.getClientInfoByRequestParametersOrInsert(ExternalAdRingsClientStart.java:1329)
at com.inneractive.client.ExternalAdRingsClientStart.newClientSession(ExternalAdRingsClientStart.java:245)
at com.inneractive.simpleM2M.web.SimpleM2MProtocolBean.generateCampaign(SimpleM2MProtocolBean.java:235)
at com.inneractive.simpleM2M.web.SimpleM2MProtocolBean.generateCampaign(SimpleM2MProtocolBean.java:219)
at com.inneractive.simpleM2M.web.AdsServlet.doGet(AdsServlet.java:175)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:617)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:555)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:859)
at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588)
at org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:396)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
Notice this:
[size:125; busy:90; idle:0; lastwait:1000]
Where are the connections that are not busy?
The busy number kept going down after this, but we still didn't manage to get any connections.
Any ideas?
Configuration:
<Resource auth="Container" driverClassName="com.mysql.jdbc.Driver"
factory="org.apache.tomcat.jdbc.pool.DataSourceFactory" loginTimeout="10000"
maxActive="35" maxIdle="35" maxWait="1000" name="jdbc/mysql"
password="-----" testOnBorrow="true" testOnReturn="false" type="javax.sql.DataSource"
url="jdbc:mysql://localhost:3306/my_db?elideSetAutoCommits=true&useDynamicCharsetInfo=false&rewriteBatchedStatements=true&useLocalSessionState=true&useLocalTransactionState=true&alwaysSendSetIsolation=false&cacheServerConfiguration=true&noAccessToProcedureBodies=true&useUnicode=true&characterEncoding=UTF-8"
username="root" validationQuery="SELECT 1"/>
Environment: Ubuntu and Tomcat 6. DB: MySQL.
Taking a look at the source of ConnectionPool.java, you seem to be hitting this code snippet in the borrowConnection() method:
//we didn't get a connection, lets see if we timed out
if (con == null) {
if ((System.currentTimeMillis() - now) >= maxWait) {
throw new SQLException("[" + Thread.currentThread().getName()+"] " +
"Timeout: Pool empty. Unable to fetch a connection in " + (maxWait / 1000) +
" seconds, none available["+busy.size()+" in use].");
} else {
//no timeout, lets try again
continue;
}
}
So according to this, your connection is null.
The value of con is retrieved on the line:
PooledConnection con = idle.poll();
If you trace the code, you will see that idle is (depending on your configuration, but by default) a FairBlockingQueue. You may check out its implementation for hints.
In general, you always have to close ResultSets, Statements, and Connections, and used connections should be correctly released back to the pool.
Not doing so may result in connections never being closed, and therefore never becoming available again for reuse (connection pool "leaks").
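As an illustration, on Java 7+ a leak-free access pattern with try-with-resources might look like the sketch below (the JNDI name, table and query are placeholders, not taken from your code):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

public class ClientDao {
    public int countClients() throws NamingException, SQLException {
        // JNDI name must match the <Resource name="..."> definition.
        DataSource ds = (DataSource) new InitialContext()
                .lookup("java:comp/env/jdbc/mysql");

        // try-with-resources closes the ResultSet, Statement and Connection
        // in reverse order, returning the connection to the pool even on error.
        try (Connection con = ds.getConnection();
             PreparedStatement ps = con.prepareStatement("SELECT COUNT(*) FROM clients");
             ResultSet rs = ps.executeQuery()) {
            return rs.next() ? rs.getInt(1) : 0;
        }
    }
}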
I suggest you add detailed logging of the pool's state and monitor it to isolate the problem.
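One lightweight way to do that, assuming you can get hold of the pool's org.apache.tomcat.jdbc.pool.DataSource (for example via JNDI), is to periodically log the counters the pool already exposes. The class name and 30-second interval below are just illustrative:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import org.apache.tomcat.jdbc.pool.DataSource;

public class PoolMonitor {
    public static void start(final DataSource ds) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(new Runnable() {
            public void run() {
                // Counters exposed by org.apache.tomcat.jdbc.pool.DataSource.
                System.out.println("pool size=" + ds.getSize()
                        + " active=" + ds.getActive()
                        + " idle=" + ds.getIdle()
                        + " waiting=" + ds.getWaitCount());
            }
        }, 0, 30, TimeUnit.SECONDS);
    }
}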
Some guidelines from Apache for preventing database connection pool leaks (a programmatic sketch of these settings follows the list):
removeAbandoned="true"
abandoned database connections are removed and recycled
removeAbandonedTimeout="60"
set the number of seconds a database connection has been idle before it is considered abandoned
logAbandoned="true"
log a stack trace of the code which abandoned the database connection resources. Keep in mind that "logging of abandoned Connections adds overhead for every Connection borrow because a stack trace has to be generated."
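Applied programmatically via the Tomcat JDBC pool's PoolProperties, those three settings would look roughly like the sketch below (the class name is made up; the values simply mirror the Apache examples above):

import org.apache.tomcat.jdbc.pool.PoolProperties;

public class LeakGuardConfig {
    // Applies the abandoned-connection guidelines from the list above.
    public static void applyLeakGuards(PoolProperties p) {
        p.setRemoveAbandoned(true);        // reclaim connections that were never returned to the pool
        p.setRemoveAbandonedTimeout(60);   // seconds a borrowed connection may be held before it counts as abandoned
        p.setLogAbandoned(true);           // log the stack trace of the borrowing code (adds overhead per borrow)
    }
}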
I still think slightly increasing the maxWait value (1200, 1500, 1700; just experiment, there will be no noticeable difference in response times from the user's perspective) will clear up those rare cases in which you still have problems.
"where are the connections that are not busy ?"
It sounds like they've been dropped and for some reason your connection pool isn't trying to reconnect them.
Add this to the URL that you're connecting to:
autoReconnect=true
Also add this as a property on the resource; together these should cause dead connections to be reconnected automatically:
validationQuery="SELECT 1"
Also this should allow you to see connections being dropped:
logAbandoned="true"
There are multiple similar questions on Stack Overflow:
Tomcat connection pooling,idle connections,and connection creation
JDBC Connection pool not reopening Connections in tomcat
However, it may also be that you're not releasing the connections properly, which is what causes them to die.
JDBC MySql connection pooling practices to avoid exhausted connection pool
This seems to be a bug in the pool: the size variable is incremented, then the pool tries to create a connection,
but if creation fails we are left with a large size value and no actual connections in the pool, which is terrible:
//if we get here, see if we need to create one
//this is not 100% accurate since it doesn't use a shared
//atomic variable - a connection can become idle while we are creating
//a new connection
if (size.get() < getPoolProperties().getMaxActive()) {
//atomic duplicate check
if (size.addAndGet(1) > getPoolProperties().getMaxActive()) {
//if we got here, two threads passed through the first if
size.decrementAndGet();
} else {
//create a connection, we're below the limit
return createConnection(now, con, username, password);
}
} //end if
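To make the described failure mode concrete, here is a deliberately simplified toy model, not the actual pool code: if the counter is bumped before the physical connect and the failure path does not (or cannot) decrement it, the pool believes it is full while holding no usable connections.

import java.sql.Connection;
import java.sql.SQLException;
import java.util.concurrent.atomic.AtomicInteger;

// Toy model of the increment-then-create pattern described above.
class ToyPool {
    private final AtomicInteger size = new AtomicInteger(0);
    private final int maxActive = 35;

    Connection borrow() throws SQLException {
        if (size.get() < maxActive) {
            if (size.addAndGet(1) > maxActive) {
                size.decrementAndGet();          // lost the race, back off
            } else {
                try {
                    return physicallyConnect();  // may throw
                } catch (SQLException e) {
                    // If this decrement were missing, 'size' would stay inflated even
                    // though no connection was added: the symptom described above.
                    size.decrementAndGet();
                    throw e;
                }
            }
        }
        throw new SQLException("Pool empty, size=" + size.get());
    }

    private Connection physicallyConnect() throws SQLException {
        throw new SQLException("simulated connect failure");
    }
}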
I'm using Tomcat 7 and the Tomcat JDBC connection pool to dish out MySQL connections.
During the night there is no activity, so all connections stay idle for longer than 8 hours (MySQL's default wait_timeout) and are dropped by MySQL.
We use the following pool configuration:
<Resource name="jdbc/dbName"
auth="Container"
factory="org.apache.tomcat.jdbc.pool.DataSourceFactory"
type="javax.sql.DataSource"
maxActive="50"
maxIdle="30"
maxWait="5000"
driverClassName="com.mysql.jdbc.Driver"
validationQuery="SELECT 1"
testOnBorrow="true"
testWhileIdle="true"
timeBetweenEvictionRunsMillis="10000"
removeAbandoned="true"
removeAbandonedTimeout="60"
logAbandoned="true"
username="xxx"
password="xxx"
url="jdbc:mysql://host:3306/xxx"/>
I was expecting the eviction policy to remove idle connections well before they ever get closed by MySQL. Somehow, after one day, we get the following exception:
com.mysql.jdbc.exceptions.jdbc4.MySQLNonTransientConnectionException: No operations allowed after connection closed.Connection was implicitly closed by the driver.
I guess this is something the JDBC connection pool should be able to handle, but there are many configuration properties and I haven't used this pool before. Does anybody have a good set of properties to configure the pool so it does not dish out closed connections?
Kind regards,
Albert
Solved it. It turned out it wasn't a pooling problem after all. We were using Squeryl and Lift together, which isn't a happy combination (just yet); connections got closed before being returned to the pool.
Ditching Lift's DB connection management in favor of Squeryl's solved it.