I seem to have a problem where I get a communications exception when I try to write to the database. It seems to happen after a period of inactivity, but I'm not sure.
Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.6.1.v20150605-31e8258): org.eclipse.persistence.exceptions.DatabaseException
Internal Exception: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: The last packet successfully received from the server was 40,404,396 milliseconds ago. The last packet sent successfully to the server was 40,404,396 milliseconds ago. is longer than the server configured value of 'wait_timeout'. You should consider either expiring and/or testing connection validity before use in your application, increasing the server configured values for client timeouts, or using the Connector/J connection property 'autoReconnect=true' to avoid this problem.
I tried to set the timeout higher and added autoReconnect=true to the connection string. When the exception is thrown, it retries four times and then stops.
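(For reference, a Connector/J connection URL with that property looks roughly like this; the host, port, and database name here are placeholders, not the actual values:)
jdbc:mysql://localhost:3306/mydb?autoReconnect=true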
The funny thing is that the database is located on the same server as the application server. Why does this happen, and how do I fix it?
I really hope you guys can help me!
Best regards,
Ben
EDIT
As pointed out, this question has been asked in multiple places. Unfortunately, the proposed solutions didn't help me: either the posters had an actual communications failure (they were not running MySQL locally on the server), or they fixed it with a different connection string. I've tried everything that was suggested and am still getting the error.
The latest error, from this past weekend, is shown here: http://pastebin.com/wMb7Ygwd
I'm running a Node server connecting to MySQL via the node-mysql module. Connecting to and querying MySQL works great initially, without any errors; however, the first query after leaving the Node server idle for a couple of hours results in an error. The error is the familiar read ECONNRESET, coming from the depths of the node-mysql module.
A stack trace (note that the first three entries of the trace belong to my app's error-reporting code):
Error
at exports.Error.utils.createClass.init (D:\home\site\wwwroot\errors.js:180:16)
at new newclass (D:\home\site\wwwroot\utils.js:68:14)
at Query._callback (D:\home\site\wwwroot\db.js:281:21)
at Query.Sequence.end (D:\home\site\wwwroot\node_modules\mysql\lib\protocol\sequences\Sequence.js:78:24)
at Protocol.handleNetworkError (D:\home\site\wwwroot\node_modules\mysql\lib\protocol\Protocol.js:271:14)
at PoolConnection.Connection._handleNetworkError (D:\home\site\wwwroot\node_modules\mysql\lib\Connection.js:269:18)
at Socket.EventEmitter.emit (events.js:95:17)
at net.js:441:14
at process._tickCallback (node.js:415:13)
This error happens both with my cloud Node server and MySQL server and with a local setup of both.
My questions:
Does this problem appear to be a disconnection of Node's connection to my MySQL server(s), perhaps due to a connection lifetime limitation?
When using connection pools, node-mysql is supposed to gracefully handle disconnections and prune them from the pool. Is it not aware of the disconnect until I make a query, thus making the error unavoidable?
Considering that I see the "read ECONNRESET" error a lot in other StackOverflow posts, should I be looking somewhere other than MySQL to diagnose the problem?
Update: After more browsing, I think my issue is a duplicate of this one. It appears his connection is disconnecting as well, but no one has suggested how to keep the connection alive or how to address the error other than letting the first query after the idle period fail.
I reached out to the node-mysql folks on their GitHub page and got some firm answers.
MySQL does indeed prune idle connections. There's a MySQL variable, wait_timeout, that sets the number of seconds of inactivity before a connection is closed; the default is 8 hours (28,800 seconds), and you can set it much higher than that. Use show variables like 'wait_timeout'; to view your current setting and set wait_timeout = 28800; (or whatever number of seconds you prefer) to change it.
According to this issue, node-mysql doesn't prune pool connections after these sorts of disconnections. The module developers recommended using a heartbeat to keep the connection alive such as calling SELECT 1; on an interval. They also recommended using the node-pool module and its idleTimeoutMillis option to automatically prune idle connections.
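A rough sketch of that heartbeat suggestion with node-mysql might look like the following; the connection settings and the five-minute interval are placeholders, not recommended values:
var mysql = require('mysql');

var pool = mysql.createPool({
  host: 'localhost',      // placeholder connection settings
  user: 'app',
  password: 'secret',
  database: 'mydb'
});

// Run a trivial query on an interval so connections don't sit idle long
// enough for MySQL's wait_timeout to close them. Note that each tick only
// touches one connection drawn from the pool, not every pooled connection.
setInterval(function () {
  pool.query('SELECT 1', function (err) {
    if (err) {
      console.error('keepalive query failed:', err.code);
    }
  });
}, 5 * 60 * 1000); // every 5 minutes, well under the 8-hour default timeout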
If this happens when you establish a single reused connection, it can be avoided by using a connection pool instead.
For example, if you're doing something like this...
var db = require('mysql')
    .createConnection({...});
db.connect(function (err) {});
do this instead...
var db = require('mysql')
.createPool({...});
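With a pool you don't call connect() yourself; you can query the pool directly and it acquires and releases a connection for you. A minimal usage sketch (the query itself is just an example):
// The pool hands out a connection, runs the query, and releases it.
db.query('SELECT 1 + 1 AS two', function (err, rows) {
  if (err) throw err;
  console.log(rows[0].two); // prints 2
});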
Does this problem appear to be a disconnection of Node's connection to my MySQL server(s), perhaps due to a connection lifetime limitation?
Yes. The server has closed its end of the connection.
When using connection pools, node-mysql is supposed to gracefully handle disconnections and prune them from the pool. Is it not aware of the disconnect until I make a query, thus making the error unavoidable?
Correct, but it should handle the error internally, not pass it back to you. This appears to be a bug in node-mysql. Report it.
Considering that I see the "read ECONNRESET" error a lot in other StackOverflow posts, should I be looking somewhere other than MySQL to diagnose the problem?
It is either a bug in the node-mysql connection pool implementation, or else you haven't configured it properly to detect failures.
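Until that is fixed, one way to handle failure detection in application code is to check the fatal flag on query errors and destroy the connection rather than release it, so a dead socket never goes back into the pool. A sketch, reusing a pool created as above:
db.getConnection(function (err, connection) {
  if (err) throw err;

  connection.query('SELECT 1', function (queryErr, rows) {
    if (queryErr && queryErr.fatal) {
      // e.g. PROTOCOL_CONNECTION_LOST after an ECONNRESET:
      // destroy() removes the dead socket instead of returning it to the pool.
      connection.destroy();
      return;
    }
    connection.release(); // healthy connections go back to the pool
  });
});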
I have also been facing the same issue. In my case it was happening because a backend process had been triggered on a table that my API was referring to.
This caused the table to go into a lock-wait state, and my query failed with a connection reset. Though I'm wondering why I didn't receive a lock-wait error instead.
I'm using Play Framework 2.2.1, MySQL 5.5 and SORM 0.3.10.
Since MySQL drops inactive connections after a specified idle timeout, I'm getting this exception in my app:
[CommunicationsException: Communications link failure The last packet successfully received from the server was 162 701 milliseconds ago. The last packet sent successfully to the server was 0 milliseconds ago.]
As far as I understand, SORM uses the c3p0 connection pool. Is it possible to configure c3p0 or SORM to ping MySQL at a specified interval, or to reconnect automatically after the connection has been dropped?
0.3.13-SNAPSHOT of SORM introduces a timeout parameter for Instance, with a default setting of 30. This setting determines the number of seconds the underlying connections are allowed to stay idle. When the timeout is reached, a sort of "keepalive" request is sent to the database and the timer is reset. The timer also gets reset when any normal query is made. The implementation simply relies on c3p0's idleConnectionTestPeriod.
For further discussion, suggestions and reports, please visit the associated ticket on the issue tracker or open another one. If there are no complaints in the associated ticket, this change will make it into the 0.3.13 release.
It's very easy to resolve this issue with c3p0, but I'd double-check whether you are actually using it: BoneCP is the default Play 2 connection pool. The problem would be easy to solve with BoneCP too!
In c3p0, the config params maxIdleTime, maxConnectionAge, or (much better yet) a connection-testing regime would help. See http://www.mchange.com/projects/c3p0/#configuring_connection_testing
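As an illustration, a connection-testing setup in a c3p0.properties file could look something like this; the parameter names are c3p0's own, but the values are only examples:
# test idle connections every 5 minutes with a cheap query
c3p0.idleConnectionTestPeriod=300
c3p0.preferredTestQuery=SELECT 1
# optionally also test each connection as it is checked out of the pool
c3p0.testConnectionOnCheckout=true
# retire connections that have been idle for more than 10 minutes
c3p0.maxIdleTime=600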
If you want to use c3p0 in Play 2, see https://github.com/swaldman/c3p0-play
I have a webapp (Tomcat/Hibernate/DBCP 1.4) that runs queries against MySQL, and this works fine for a certain load, say 50 queries a second. When I route the same moderate load through HAProxy (still just using a single database), I get a failure, maybe one for every 500 queries. My app reports:
Caused by: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
The last packet successfully received from the server was 196,898 milliseconds ago. The last packet sent successfully to the server was 0 milliseconds ago.
at sun.reflect.GeneratedConstructorAccessor210.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:411)
at com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:1117)
at com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:3567)
...
Caused by: java.io.EOFException: Can not read response from server. Expected to read 4 bytes, read 0 bytes before connection was unexpectedly lost.
at com.mysql.jdbc.MysqlIO.readFully(MysqlIO.java:3017)
...
Meanwhile the HAProxy log is showing a lot of entries like:
Oct 15 15:43:12 localhost haproxy[3141]: 127.0.0.1:35500 [15/Oct/2012:15:42:50.027] mysql mysql/db03 0/0/34605 2364382 cD 3/3/3/3/0 0/0
The "cD" apparently indicates a state of client timeout. So whereas my webapp is saying that HAProxy is refusing to accepting new connections, HAProxy is saying that my webapp is not accepting data back.
I am not including my HAProxy configuration, because I've tried many different parameter values, with essentially the same result. In particular, I've set maxconn to both high and low values, in both global and server sections, and what always happens in the stats is that the max sessions rises to no more than about 7. My JDBC pool size is also high.
Is it generally ok to use a JDBC pool and a HAProxy pool together? Have people run into this kind of problem before?
I have an idea on how to solve this, which is to send a "validation query" before every query. But there's a certain overhead there, and I'd still like to know why my webapp succeeds when it goes straight to MySQL, but gets dropped connections on going through HAProxy.
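(In Tomcat/DBCP terms, that validation-query idea corresponds to resource attributes roughly like the following; the names and values here are only illustrative:)
<Resource name="jdbc/mydb" auth="Container" type="javax.sql.DataSource"
          driverClassName="com.mysql.jdbc.Driver"
          url="jdbc:mysql://localhost:3306/mydb"
          username="app" password="secret"
          maxActive="50" maxIdle="10"
          validationQuery="SELECT 1"
          testOnBorrow="true"/>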
How can I debug further and get more information than just "cD"? I tried running HAProxy in debug mode, but it doesn't seem to reveal anything more.
Try this:
tune.bufsize 20480
tune.maxrewrite 2048
See the HAProxy docs for their meaning. Apply this carefully, with all eyes on it, as you're entering the grey zone of potentially lethal parameters. But it's worth a try to see if it works; I just solved a problem that made no sense relative to the documentation this way.
The defaults are 16k vs 1k.
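Both are global-section directives, so the values suggested above would sit in haproxy.cfg roughly like this:
global
    tune.bufsize    20480
    tune.maxrewrite 2048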
I'm getting a SQL Server error:
A transport-level error has occurred when receiving results from the server. (provider: Shared Memory Provider, error: 0 - The handle is invalid.)
I'm running SQL Server 2008 SP1 on Windows 2008 Standard 64-bit.
It's a .NET 4.0 web application. It happens when a request is made to the server. It's intermittent. Any idea how I can resolve it?
The database connection was closed by the database server, but it remains in your app's connection pool and is still considered valid; as a result, when you pick up that pooled connection and try to execute a command, it's not able to reach the database. If you are developing in Visual Studio, simply close the temporary web server on your task bar.
If it happens in production, resetting your application pool for your web site should recycle the connection pool.
Try the following command on the command prompt:
netsh interface tcp set global autotuning=disabled
This turns off the TCP window auto-tuning abilities of the network stack.
I had the same problem; restarting Visual Studio fixed it.
Transport-level errors are often linked to the connection to SQL Server being broken... usually a network issue.
Timeout Expired is usually thrown when a SQL query takes too long to run.
So a few options are:
Check the connection over the VPN (if one is used) or with any other tool
Restart IIS
Restart the machine
Optimize SQL queries.
For those not using IIS: I had this issue when debugging with Visual Studio 2010. I ended all of the debug web-server processes (WebDev.WebServer40.EXE), which solved the issue.
All you need to do is stop the ASP.NET Development Server and run the project again.
If you are connected to your database via Microsoft SQL Server Management Studio, close all your connections and retry.
I had this error while connected to another Azure database, and it went away once I closed that connection.
Still don't know why...
Look at the MSDN blog post which details this error:
Removing Connections
The connection pooler removes a connection from the pool after it has been idle for a long time, or if the pooler detects that the connection with the server has been severed.
Note that a severed connection can be detected only after attempting to communicate with the server. If a connection is found that is no longer connected to the server, it is marked as invalid. Invalid connections are removed from the connection pool only when they are closed or reclaimed.
If a connection exists to a server that has disappeared, this connection can be drawn from the pool even if the connection pooler has not detected the severed connection and marked it as invalid. This is the case because the overhead of checking that the connection is still valid would eliminate the benefits of having a pooler by causing another round trip to the server to occur.
When this occurs, the first attempt to use the connection will detect that the connection has been severed, and an exception is thrown.
Basically, what you are seeing is the exception described in that last sentence: a connection is taken from the connection pool, the application does not know that the physical connection is gone, and an attempt to use it is made under the assumption that the physical connection is still there. And you get your exception.
There are a few common reasons for this.
The server has been restarted; this will close the existing connections. In this case, have a look at the SQL Server log, usually found at:
C:\Program Files\Microsoft SQL Server\<instance>\MSSQL\LOG
If the timestamp for startup is very recent, then we can suspect that this is what caused the error. Try to correlate this timestamp with the time of the exception.
2009-04-16 11:32:15.62 Server Logging SQL Server messages in file
‘C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\LOG\ERRORLOG’.
Someone or something has killed the SPID that is being used.
Again, take a look in the SQL Server log. If you find a kill, try to correlate its timestamp with the time of the exception.
2009-04-16 11:34:09.57 spidXX Process ID XX was killed by
hostname xxxxx, host process ID XXXX.
There is a failover (in a mirror setup, for example); again, take a look in the SQL Server log. If there is a failover, try to correlate its timestamp with the time of the exception.
2009-04-16 11:35:12.93 spidXX The mirrored database “” is changing roles from “PRINCIPAL” to “MIRROR” due to
Failover.
I was getting this, always after about five minutes of operation. I investigated and found that a warning from e1iexpress always occurred before the failure. This is apparently an error having to do with certain TCP/IP adapters, but changing from WiFi to hardwired didn't affect it.
So I tried Plan B and restarted Visual Studio. Then it worked fine.
On closer study I noticed that, when working correctly, the message The Thread '<No Name>' has exited with code 0 occurred at almost exactly the time the run crashed in previous attempts. Some Googling reveals that that message comes up when (among other things) the server is trimming the thread pool.
Presumably there was a bogus thread in the thread pool and every time the server attempted to "trim" it it took the app down.
You get this message when your script causes the SQL Server service to stop for some reason, so if you start the SQL Server service again, your problem will probably be resolved.
I know this may not help everyone (who knows, maybe it will), but I had the same problem, and after some time we realized that the cause was something outside the code itself.
The computer trying to reach the server was on another network; the connection could be established but was then dropped.
The way we fixed it was to add a static route on the computer, allowing direct access to the server without passing through the firewall.
route add -p YourServerNetwork mask NetworkMask Router
Sample:
route add -p 172.16.12.0 mask 255.255.255.0 192.168.11.2
I hope it helps someone; it's better to have this at least as a clue, so if you face the issue, you know how to solve it.
I got the same error in the Visual Studio 2012 development environment. I stopped IIS Express and reran the application, and it started working.
I had the same issue. I solved it by truncating the SQL Server log.
Try doing that, and then tell us if this solution helped you.
For me the solution was totally different.
In my case I had an ObjectDataSource which required a datetime parameter. Even though the ODS parameter's ConvertEmptyStringToNull was true, 1/1/0001 was being passed to the SelectMethod. That in turn caused a SQL datetime overflow exception when that datetime was passed to SQL Server.
I added an additional check for datetime.Year != 0001, and that solved it for me.
Weird that it would throw a transport-level error and not a datetime overflow error.
Anyways..
In my case the SQL Server service had stopped. Restarting the service enabled me to run the query and eliminated the error.
It's also a good idea to examine your query to find out why it made the service stop.
For me the answer was to upgrade the OS from 2008 R2 to 2012 R2; the iisreset and app-pool restart solutions didn't work for me.
I also tried turning off the TCP Chimney Offload setting, which didn't work either, though I didn't restart the server afterwards because it is a production server.
We encountered this error recently between our business server and our database server.
The solution for us was to disable "IP Offloading" on the network interfaces.
Then the error went away.
One of the reasons I found for this error is 'Packet Size=xxxxx' in the connection string. If the value is too large, you will see this error. Either remove this value and let SQL Server handle it, or keep it low, depending on the network capabilities.
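(For example, in a SqlClient connection string the setting appears like this; the server and database names are placeholders, and 4096 is just a modest example value:)
Server=myserver;Database=mydb;Integrated Security=true;Packet Size=4096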
It happened to me when I was trying to restore a SQL database and had checked a particular checkbox in the Options tab.
As it's a standalone database server, just closing down SSMS and reopening it solved the issue for me.
This occurs when the database has been dropped and re-created while some shared resource still assumes the old database exists. When you re-run the query to create tables in the re-created database, the error will not appear again; the 'Command(s) completed successfully.' message will show instead of the error message 'Msg 233, Level 20, State 0, Line 0 A transport-level error has occurred when sending the request to the server. (provider: Shared Memory Provider, error: 0 - No process is on the other end of the pipe.)'.
Simply ignore this error when you are dropping and recreating databases, and re-execute your DDL queries with no worries.
I faced the same issue recently but was not able to find an answer on Google, so I thought I'd share it here so that it can help someone in the future.
Error:
While executing the query, it returns some output and then throws the error below:
"A transport-level error has occurred when receiving output from the server (TCP provider, error: 0 - The specified network name is no longer available.)"
Solution:
Check the provider of that linked server.
In that provider's properties, enable the "Allow inprocess" option for that particular provider to fix the issue.