I have a problem where I get a communications exception when I try to write to the database. It seems to happen after a period of inactivity, but I'm not sure.
Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.6.1.v20150605-31e8258): org.eclipse.persistence.exceptions.DatabaseException
Internal Exception: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: The last packet successfully received from the server was 40,404,396 milliseconds ago. The last packet sent successfully to the server was 40,404,396 milliseconds ago. is longer than the server configured value of 'wait_timeout'. You should consider either expiring and/or testing connection validity before use in your application, increasing the server configured values for client timeouts, or using the Connector/J connection property 'autoReconnect=true' to avoid this problem.
I tried setting the timeout higher and added autoReconnect=true to the connection string. When the exception is thrown, it retries four times and then stops.
The odd thing is that the database is located on the same server as the application server. Why does this happen, and how do I fix it?
I really hope you guys can help me!
Best regards,
Ben
EDIT
As pointed out, this question has been asked in multiple places. Unfortunately the proposed solutions didn't help me. Either the askers had an actual communications failure (they weren't running MySQL locally on the server) or they fixed it with a different connection string. I've tried everything that was suggested and am still getting the error.
Latest error from the weekend is shown here: http://pastebin.com/wMb7Ygwd
Related
Here is the log of this exception:
com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
The last packet successfully received from the server was 1,409,240 milliseconds ago. The last packet sent successfully to the server was 1,409,267 milliseconds ago.
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(Unknown Source)
at java.lang.reflect.Constructor.newInstance(Unknown Source)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:425)
at com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:989)
at com.mysql.jdbc.MysqlIO.nextRowFast(MysqlIO.java:2229)
at com.mysql.jdbc.MysqlIO.nextRow(MysqlIO.java:1989)
at com.mysql.jdbc.MysqlIO.readSingleRowSet(MysqlIO.java:3410)
at com.mysql.jdbc.MysqlIO.getResultSet(MysqlIO.java:470)
at com.mysql.jdbc.MysqlIO.readResultsForQueryOrUpdate(MysqlIO.java:3112)
at com.mysql.jdbc.MysqlIO.readAllResults(MysqlIO.java:2341)
...
Caused by: java.io.EOFException: Can not read response from server. Expected to read 7 bytes, read 5 bytes before connection was unexpectedly lost.
at com.mysql.jdbc.MysqlIO.readFully(MysqlIO.java:3011)
at com.mysql.jdbc.MysqlIO.nextRowFast(MysqlIO.java:2212)
I know this exception is quite common, and I've googled it and found a lot of solutions. However, none of them fix my problem. It's just a simple Java application, not a Java web application, and I didn't use any connection pool, just plain JDBC. My MySQL version is 5.7.12. MySQL is running on a Windows server, while the Java application is running on Linux.

I have checked 'wait_timeout' for MySQL and it's 28800 seconds, which is much larger than 1,409,240 ms, so the problem is probably not caused by that. I've also checked the TCP connection wait time on my Linux machine; it's 7200 s, still much bigger than 1,409,240 ms. I also tried adding '?autoReconnect=true' to the JDBC URL, but it made no difference. And I'm sure there is nothing wrong with the accessibility of my server, because the connection does work for several minutes before the problem occurs.

I've tried almost everything I can think of, but the problem still persists. What should I do? Is there any possibility that the problem is caused by the Windows firewall?
edit:
This problem occurs when a SELECT query tries to read all rows from a table that is 10.9 GB in size. Maybe the table is just too big, so the statement ResultSet rs = stmt.executeQuery(sql); cannot complete. But I've checked the 'max_execution_time' variable in MySQL; it's set to 0, indicating there is no limit on execution time.
Welcome to TCP/IP. Lots of things can cause loss of a TCP connection, especially one that has been idle for a while. One such thing, as you mention, is a firewall. The connection can be knocked down by some network entity even if both the client and the server agree it should be kept alive. Connection loss likelihood goes up when client and server are not on the same local network.
Figuring out why means doing lots of packet monitoring at various places in the network. That can use up a lot of time and effort and not teach you much. Plus, learning to read Wireshark output is a real task.
Most client-server programmers who need long-lasting connections use some kind of keepalive operation to avoid having the connection sit idle for too long. Keepalive operations send something and get something back. In your case you could do this the easy way by issuing a SELECT NOW() query (or some other round-trip no-op) once every minute or two while your client sits otherwise idle.
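For illustration, here is a minimal sketch of such a keepalive, assuming you hold one long-lived java.sql.Connection; the two-minute interval and the helper class name are made up for the example:

import java.sql.Connection;
import java.sql.Statement;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Hypothetical helper: pings an otherwise idle connection every two minutes
// so it never sits idle long enough for something to drop it.
public class ConnectionKeepalive {
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    public void start(Connection conn) {
        scheduler.scheduleAtFixedRate(() -> {
            try (Statement st = conn.createStatement()) {
                st.executeQuery("SELECT NOW()"); // cheap round-trip no-op
            } catch (Exception e) {
                // The link is probably already gone; log it and reconnect elsewhere.
                e.printStackTrace();
            }
        }, 2, 2, TimeUnit.MINUTES);
    }

    public void stop() {
        scheduler.shutdownNow();
    }
}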
The best way to handle this kind of thing is to open the database connection when you need it, then close it when you're done. If you use the connection pooling feature you can open and close your pooled connection upon every query, and still avoid churning the physical connections. The JDBC connector and MySQL server code are optimized for this approach; it is probably the best way to go.
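As a rough sketch of that open-use-close pattern, assuming a pooled javax.sql.DataSource; the DAO name, table and column are placeholders:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;
import javax.sql.DataSource;

public class UserDao {
    private final DataSource ds; // backed by a connection pool

    public UserDao(DataSource ds) {
        this.ds = ds;
    }

    public List<String> loadNames() throws SQLException {
        List<String> names = new ArrayList<>();
        // Borrow a connection only for the duration of this query.
        try (Connection conn = ds.getConnection();
             PreparedStatement ps = conn.prepareStatement("SELECT name FROM users");
             ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                names.add(rs.getString(1));
            }
        } // try-with-resources returns the connection to the pool here
        return names;
    }
}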
I had com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure. Reading more of the error message, I realized it had to do with the SSL setting in my connection string.
When you have SSL enabled in your MySQL connection string, please make sure your date and time are correct and in sync with the current date. If not, the connection will fail with javax.net.ssl.SSLHandshakeException, resulting in "The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server."
That message is prompting you to do something about your date. This is how I solved mine: I had to set my date and time correctly. Sometimes the error messages say a lot about the problem.
I'm using Play Framework 2.2.1, MySQL 5.5 and sorm 0.3.10
Since MySQL drops inactive connections after a specified idle timeout, I'm getting this exception in my app:
[CommunicationsException: Communications link failure The last packet successfully received from the server was 162 701 milliseconds ago. The last packet sent successfully to the server was 0 milliseconds ago.]
As far as I understand, SORM uses the c3p0 connection pool. Is it possible to configure c3p0 or SORM to ping MySQL at a specified interval, or to reconnect automatically after the connection is dropped?
0.3.13-SNAPSHOT of SORM introduces a timeout parameter for Instance, with a default of 30. This setting determines the number of seconds the underlying connections are allowed to be idle. When the timeout is reached, a sort of "keepalive" request is sent to the db and the timer is reset. The timer also gets reset whenever any normal query is made. The implementation simply relies on the idleConnectionTestPeriod of C3P0.
For further discussion, suggestions and reports, please visit the associated ticket on the issue tracker or open another one. If there are no complaints in the associated ticket, this change will make it into the 0.3.13 release.
It's very easy to resolve this issue with c3p0, but I'd double-check whether you are actually using it: BoneCP is the default Play 2 connection pool. It would be easy to solve this problem with BoneCP too!
In c3p0, the config params maxIdleTime, maxConnectionAge, or (much better yet) a connection-testing regime would help; see http://www.mchange.com/projects/c3p0/#configuring_connection_testing
If you want to use c3p0 in Play 2, see https://github.com/swaldman/c3p0-play
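For illustration, a hedged sketch of such a c3p0 setup in plain Java; the URL, credentials and the specific numbers are placeholders, not recommendations:

import com.mchange.v2.c3p0.ComboPooledDataSource;

public class PoolSetup {
    public static ComboPooledDataSource newPool() {
        ComboPooledDataSource ds = new ComboPooledDataSource();
        ds.setJdbcUrl("jdbc:mysql://localhost/mydb");
        ds.setUser("app");
        ds.setPassword("secret");

        // Retire connections before MySQL's wait_timeout can kill them.
        ds.setMaxIdleTime(300);       // seconds a connection may sit idle
        ds.setMaxConnectionAge(3600); // seconds before a connection is retired outright

        // Connection-testing regime (the "much better yet" option above).
        ds.setIdleConnectionTestPeriod(120);  // background test of idle connections
        ds.setTestConnectionOnCheckout(true); // verify before handing to the application
        ds.setPreferredTestQuery("SELECT 1");
        return ds;
    }
}

Testing on checkout is the most reliable option but costs a round trip per checkout; often the background idleConnectionTestPeriod alone is enough.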
I have an interesting issue which I have not been able to resolve. I am using Play! 2.0.4 and using the integrated BoneCP connection pool to get the DB connections. However, for some reason, BoneCP keeps returning closed connections.
Database Server: Amazon RDS MySQL 5, default timeout settings (which should be 8 hours...)
My Play Datasource configuration looks as follows:
db.default.driver=com.mysql.jdbc.Driver
db.default.url="jdbc:mysql://{server}/{schema}?autoReconnect=true&useUnicode=yes&characterEncoding=UTF-8"
db.default.partitionCount=4
db.default.idleConnectionTestPeriod=2 minutes
I had assumed setting the idleConnectionTestPeriod to 2 minutes surely would have prevented BoneCP from returning closed connections, but it hasn't.
Every so often, I get the following stack trace in my logs:
Exception in thread "pool-6-thread-25" java.sql.SQLException: Connection is closed!
at com.jolbox.bonecp.ConnectionHandle.checkClosed(ConnectionHandle.java:350)
at com.jolbox.bonecp.ConnectionHandle.setReadOnly(ConnectionHandle.java:1089)
at play.api.db.BoneCPApi$$anon$1.onCheckOut(DB.scala:328)
at com.jolbox.bonecp.BoneCP.getConnection(BoneCP.java:514)
at com.jolbox.bonecp.BoneCPDataSource.getConnection(BoneCPDataSource.java:114)
at play.api.db.DBApi$class.getConnection(DB.scala:64)
at play.api.db.BoneCPApi.getConnection(DB.scala:273)
at play.api.db.DB$$anonfun$getConnection$1.apply(DB.scala:129)
at play.api.db.DB$$anonfun$getConnection$1.apply(DB.scala:129)
at scala.Option.map(Option.scala:133)
at play.api.db.DB$.getConnection(DB.scala:129)
at play.api.db.DB.getConnection(DB.scala)
at play.db.DB.getConnection(DB.java:50)
at play.db.DB.getConnection(DB.java:43)
at play.db.DB.getConnection(DB.java:29)
at com.edatasource.inboxtracker.tasks.TrackSiteEventActionTask.run(TrackSiteEventActionTask.java:23)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
Does anybody know how I can fix this issue? Currently, I've had to wrap the DB.getConnection() in a try/catch and just catch the exception thrown by BoneCP and retry until I retrieve a valid connection. Seems like that should be unnecessary.
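For reference, the retry workaround described above looks roughly like this; the helper name and attempt limit are arbitrary, and a pool-level fix would clearly be preferable:

import java.sql.Connection;
import play.db.DB;

public class DbUtil {
    // Rough sketch of the current workaround: keep asking the pool until it
    // hands back a connection that is not already closed.
    public static Connection getConnectionWithRetry(int maxAttempts) {
        Exception last = null;
        for (int i = 0; i < maxAttempts; i++) {
            try {
                return DB.getConnection();
            } catch (Exception e) { // e.g. BoneCP's "Connection is closed!"
                last = e;
            }
        }
        throw new RuntimeException("No valid connection after " + maxAttempts + " attempts", last);
    }
}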
Thanks for any help.
Please try with 0.8.0-beta1. There was a bug related to this.
This question already has answers here:
Why does autoReconnect=true not seem to work?
(2 answers)
Closed 5 years ago.
Occasionally, my Java/Tomcat 6/Debian Squeeze application can't talk to the MySQL server.
The Tomcat application is on a front-end server and MySQL is on a separate, MySQL-only box. A typical error is:
com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: The last packet successfully received from the server was 56588 milliseconds ago.
The last packet sent successfully to the server was 56588 milliseconds ago, which
is longer than the server configured value of 'wait_timeout'. You should consider either expiring and/or testing connection validity before use in your application, increasing the
server configured values for client timeouts, or using the Connector/J connection property
'autoReconnect=true' to avoid this problem.
The timeout given is only about 60 seconds, which seems very short. If it were an hour or more, I would simply set up a background task to ping the DB server every few minutes. I've added the autoReconnect parameter to the connection URL, with no obvious impact.
Any idea as to what the problem is here?
Thanks
Pat
You should code for network glitches and deal with auto-reconnect yourself.
auto-reconnect is deliberately OFF because of several 'application' bugs that can silently happen when the connection goes away and comes back in a different state.
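A hedged sketch of what handling it yourself could look like; the URL, credentials and the retry-once policy are placeholders, not a definitive recipe:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class ReconnectingClient {
    private final String url = "jdbc:mysql://dbhost/mydb";
    private final String user = "app";
    private final String password = "secret";
    private Connection conn;

    private Connection connection() throws SQLException {
        // Lazily (re)open the connection if we do not have a usable one.
        if (conn == null || conn.isClosed()) {
            conn = DriverManager.getConnection(url, user, password);
        }
        return conn;
    }

    public int update(String sql) throws SQLException {
        try (Statement st = connection().createStatement()) {
            return st.executeUpdate(sql);
        } catch (SQLException e) {
            // Likely a dropped link: discard the stale connection and retry once,
            // so the statement runs on a fresh connection in a known state.
            try { if (conn != null) conn.close(); } catch (SQLException ignore) { }
            conn = null;
            try (Statement st = connection().createStatement()) {
                return st.executeUpdate(sql);
            }
        }
    }
}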
Anyway, a comment shows that this is somewhat of a duplicate question.
Configure c3p0 properties to overcome this issue. Use properties like:
hibernate.connection.provider_class=org.hibernate.connection.C3P0ConnectionProvider
hibernate.c3p0.min_size=0
hibernate.c3p0.max_size=20
hibernate.c3p0.timeout=500
hibernate.c3p0.max_statements=50
hibernate.c3p0.idle_test_period=3000
hibernate.c3p0.testConnectionOnCheckout=true
hibernate.c3p0.acquire_increment=1
with the JDBC connection URL url=jdbc:mysql://host/databasename?autoReconnect=true
I have a Grails application that is in production now. This morning I was alerted to the fact that the server was not responding; Tomcat was spinning and spinning. I researched and it looks like it has to do with MySQL timing out the connection after 8 hours of inactivity. I have found examples on Stack Overflow of people having similar problems. However, all of those people mention that if they hit the server again, the connection is refreshed. For me, the site was down entirely and Tomcat wouldn't respond. Does it sound like something else could be at play here?
Last Exception in Tomcat log
2011-Aug-30 23:58:43,283 [TP-Processor19] org.hibernate.util.JDBCExceptionReporter
ERROR The last packet successfully received from the server was 37,118,147 milliseconds ago. The last packet sent successfully to the server was 37,122,138 milliseconds ago. \
is longer than the server configured value of 'wait_timeout'. You should consider either expiring and/or testing connection validity before use in your application, increasing \
the server configured values for client timeouts, or using the Connector/J connection property 'autoReconnect=true' to avoid this problem.
2011-Aug-30 23:58:43,290 [TP-Processor19] org.codehaus.groovy.grails.web.errors.GrailsExceptionResolver
ERROR Exception occurred when processing request: [GET] /picks/ncaafb
Stacktrace follows:
java.net.SocketException: Connection timed out
at java.net.SocketOutputStream.socketWrite0(Native Method)
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:92)
at java.net.SocketOutputStream.write(SocketOutputStream.java:136)
at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:65)
at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:123)
at com.mysql.jdbc.MysqlIO.send(MysqlIO.java:3302)
at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:1940)
at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2113)
at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2568)
at com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:2113)
at com.mysql.jdbc.PreparedStatement.executeQuery(PreparedStatement.java:2275)
at com.mysql.jdbc.PreparedStatement.executeQuery(PreparedStatement.java:2275)
at org.apache.commons.dbcp.DelegatingPreparedStatement.executeQuery(DelegatingPreparedStatement.java:96)
at org.apache.commons.dbcp.DelegatingPreparedStatement.executeQuery(DelegatingPreparedStatement.java:96)
at sportsdb.Season.getCurrentNCAAFootballSeason(Season.groovy:93)
at PicksController$_closure2.doCall(PicksController.groovy:60)
at PicksController$_closure2.doCall(PicksController.groovy)
at org.apache.jk.server.JkCoyoteHandler.invoke(JkCoyoteHandler.java:190)
at org.apache.jk.common.HandlerRequest.invoke(HandlerRequest.java:291)
at org.apache.jk.common.ChannelSocket.invoke(ChannelSocket.java:774)
at org.apache.jk.common.ChannelSocket.processConnection(ChannelSocket.java:703)
at org.apache.jk.common.ChannelSocket$SocketConnection.runIt(ChannelSocket.java:896)
at java.lang.Thread.run(Thread.java:662)
2011-Aug-30 23:58:43,315 [TP-Processor19] org.hibernate.util.JDBCExceptionReporter
ERROR Already closed.
2011-Aug-30 23:58:43,315 [TP-Processor19] org.hibernate.util.JDBCExceptionReporter
ERROR Already closed.
2011-Aug-30 23:58:43,316 [TP-Processor19] org.codehaus.groovy.grails.web.servlet.GrailsDispatcherServlet
ERROR HandlerInterceptor.afterCompletion threw exception
org.hibernate.exception.GenericJDBCException: Cannot release connection
at org.apache.jk.server.JkCoyoteHandler.invoke(JkCoyoteHandler.java:190)
at org.apache.jk.common.HandlerRequest.invoke(HandlerRequest.java:291)
at org.apache.jk.common.ChannelSocket.invoke(ChannelSocket.java:774)
at org.apache.jk.common.ChannelSocket.processConnection(ChannelSocket.java:703)
at org.apache.jk.common.ChannelSocket$SocketConnection.runIt(ChannelSocket.java:896)
at java.lang.Thread.run(Thread.java:662)
Caused by: java.sql.SQLException: Already closed.
at org.apache.commons.dbcp.PoolableConnection.close(PoolableConnection.java:114)
at org.apache.commons.dbcp.PoolingDataSource$PoolGuardConnectionWrapper.close(PoolingDataSource.java:191)
at $Proxy7.close(Unknown Source)
... 6 more
My plan is to implement the solution mentioned in the link above, but I wanted to make sure nothing else visibly fishy was going on since we have a somewhat different result (their connections are refreshing and mine are not).
If you're using a Tomcat JNDI DataSource, look at some of the parameters you can set on it, such as testOnBorrow. If validation fails, the connection is dropped from the pool. There is some performance overhead from testing connections, but it should fix problems like this. If you have minIdle/maxIdle set high, that would explain why you continue to experience the problem while reconnecting fixes it for other people.
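For illustration, a context.xml Resource along these lines enables validation on borrow and while idle; the names, credentials and pool sizes are placeholders:

<!-- Sketch of a Tomcat JNDI DataSource with connection testing enabled. -->
<Resource name="jdbc/mydb"
          auth="Container"
          type="javax.sql.DataSource"
          driverClassName="com.mysql.jdbc.Driver"
          url="jdbc:mysql://dbhost:3306/mydb"
          username="app" password="secret"
          maxActive="20" maxIdle="5" minIdle="2"
          validationQuery="SELECT 1"
          testOnBorrow="true"
          testWhileIdle="true"
          timeBetweenEvictionRunsMillis="60000"/>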