I am getting the MySQL error
"Got an error reading communication packets"
in the MySQL.err file, and on the application side I am getting error 2013 (Lost connection to MySQL server during query).
All the timeout values are (in seconds):
wait_timeout = 60
net_read_timeout = 30
connect_timeout = 30
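These can be confirmed on the server with:
mysql> SHOW GLOBAL VARIABLES LIKE '%timeout%';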
How to resolve this?
For what it's worth, this vague error is fairly common and has many possible culprits.
Often it is not a problem with MySQL per se, but with the system or the calling program instead. E.g. sometimes PHP memory limits are set too low, or swap space was never set up; that crashes PHP, and MySQL is left confused as to what happened.
In my case I didn't have that, but I was trying to json_encode a character string containing non-UTF-8 characters, which would fail silently and drag MySQL down with it. On top of this, OPcache in PHP 7 seems to be seriously buggy and lacks proper error logging. I had to disable that as well... then magically all my problems with "communication packets" in /var/log/mysql/error.log went away.
Hope this helps someone...gave me a LOT of grief.
Another very weird corner case: check your RAM and your swap!
If you do not have enough RAM and swap space, your operating system may decide to kill processes, which in MySQL's case produces exactly this error (Got an error reading communication packets). This is common on virtual machines, where the provider often does not set up any swap partition (and where people often do not pay for enough RAM).
So double-check with top or htop whether ~100% of your RAM is in use and whether you have enough swap (or any swap space at all).
In that case, buy more RAM or set up a swapfile on your distribution, as sketched below.
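For example, on a typical Linux VM a swapfile can be added roughly like this (the 2G size and the /swapfile path are only illustrative):
shell> free -h                          # check how much RAM and swap is currently in use
shell> sudo fallocate -l 2G /swapfile   # reserve the file
shell> sudo chmod 600 /swapfile
shell> sudo mkswap /swapfile
shell> sudo swapon /swapfile
shell> echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab   # make it survive reboots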
If that's not the case, see the other answers. Cheers! :)
I have checked the answers at MySQL error 2006: MySQL server has gone away. None of them seem to fit my problem.
I am getting the error MySQL server has gone away frequently.
It is not the connection timeout. The default timeout of 8 hours seems plenty.
I have tried upping max_allowed_packet to no avail. This then seemed irrelevant when I started printing out the offending SQL statement, which in my case was: SELECT url FROM crawled WHERE frontier = 1 ORDER BY id. Hardly a large statement that warrants upping max_allowed_packet.
So, none of the given answers seem to fit my scenario. Any other reasons why this error may occur? Any possible fixes?
Two common possibilities come to mind:
1) An out-of-memory (OOM) kill. Check syslog for evidence of it.
2) A bug or some other crash in mysqld. Check your MySQL error log.
"Server has gone away" almost always means a back-end crash, and that should leave something obvious in the logs; the commands below give a quick way to check both.
I am very frequently getting this error in MySQL:
OS errno 24 - Too many open files
What's the cause and what are the solutions?
I was getting errno: 24 - Too many open files quite often when I was using many databases at the same time.
Solution
Ensure that the connections to the DB server are closed properly.
Edit /etc/systemd/system.conf. Uncomment and set:
DefaultLimitNOFILE=infinity
DefaultLimitMEMLOCK=infinity
Then run systemctl daemon-reload and service mysql restart.
You can check the result with the query SHOW GLOBAL VARIABLES LIKE 'open_files_limit' and you should notice that the value has changed. You should not see errno 24 any more.
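To double-check that the new limit really applies to the running mysqld process (assuming a single mysqld instance):
shell> cat /proc/$(pidof mysqld)/limits | grep 'open files'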
Please note that the solution may differ on other OS versions; you can try to locate the corresponding variables first. Tested with Ubuntu 16.04.3 and MySQL 5.7.19.
In my case it was useless to set the open_files_limit variable in the MySQL configuration files, as the variable is flagged as read-only.
I hope it helped!
You probably have a connection leak in your application; that is why open connections are not closed once a function completes its execution.
I would look into the application code, see where the connection/PreparedStatement objects (if it's Java) are not being closed, and fix that.
A quick workaround is to increase the ulimit of the server (explained here), which raises the number of open file descriptors (i.e. connections). However, if you have a connection leak, you will encounter this error again at a later stage.
I faced the same problem and found a solution in another Stack Overflow question.
By running the following snippet with Bash:
ulimit -n 30000
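Note that ulimit set this way only affects the current shell session; to make it persistent you would typically raise the limit in /etc/security/limits.conf (or in a systemd unit override), e.g. something like:
mysql    soft    nofile    30000
mysql    hard    nofile    30000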
I have a MySQL 5.1.41 server installed on an Ubuntu machine, and I connect to it through Workbench from my Windows machine over TCP/IP. When I run a larger query, after 900 seconds I get the message below (there is no wait_timeout defined in the server's configuration file my.cnf):
Error Code: 2013. Lost connection to MySQL server during query
But when I look at the process list using the show processlist; command, I can still see my query running.
I found this link http://dev.mysql.com/doc/refman/5.0/en/gone-away.html which contains the lines below:
The problem on Windows is that in some cases MySQL does not get an
error from the OS when writing to the TCP/IP connection to the server,
but instead gets the error when trying to read the answer from the
connection.
I'm not sure whether this is the reason for my observation.
Please clarify this for me.
Thanks in advance!!
Closing the connection is not a reason to stop a query. A query might be an UPDATE, part of a transaction, or a SELECT writing its output to a remote (server-side) file.
A closed connection just means that you will not receive anything back from the DBMS after the query executes (data, timings - nothing).
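If the orphaned query needs to be stopped, it can be killed from another session; the thread id (123 here is just a placeholder) is whatever show processlist reports for it:
mysql> SHOW PROCESSLIST;
mysql> KILL QUERY 123;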
The reason the connection was closed can vary, as SO-User posted. Try increasing
on the server side:
wait_timeout
max_allowed_packet
on the client side:
any timeouts you find in your client (e.g. those SO-User suggests)
Do not forget to reload the DBMS config and restart the client (to be sure); a sample server-side snippet follows.
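For example, a server-side my.cnf snippet might look like this (the values are only illustrative; tune them to your workload):
[mysqld]
wait_timeout = 28800
max_allowed_packet = 64M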
In MySQL Workbench there is an option to change the timeout.
Find it under
Edit → Preferences → SQL Editor → DBMS connection read time out (in seconds): 600
Change the value to 6000 or something higher.
Update
Lost connection to MySQL server
There are three likely causes for this error message.
Usually it indicates network connectivity trouble and you should check
the condition of your network if this error occurs frequently. If the
error message includes “during query,” this is probably the case you
are experiencing.
Sometimes the “during query” form happens when millions of rows are
being sent as part of one or more queries. If you know that this is
happening, you should try increasing net_read_timeout from its default
of 30 seconds to 60 seconds or longer, sufficient for the data
transfer to complete.
More rarely, it can happen when the client is attempting the initial
connection to the server. In this case, if your connect_timeout value
is set to only a few seconds, you may be able to resolve the problem
by increasing it to ten seconds, perhaps more if you have a very long
distance or slow connection. You can determine whether you are
experiencing this more uncommon cause by using SHOW GLOBAL STATUS LIKE
'Aborted_connects'. It will increase by one for each initial
connection attempt that the server aborts. You may see “reading
authorization packet” as part of the error message; if so, that also
suggests that this is the solution that you need.
If the cause is none of those just described, you may be experiencing
a problem with BLOB values that are larger than max_allowed_packet,
which can cause this error with some clients. Sometimes you may see an
ER_NET_PACKET_TOO_LARGE error, and that confirms that you need to
increase max_allowed_packet.
Doc link: Error lost connection
and also check here
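In practice, the counter and settings mentioned above can be inspected and raised at runtime, for example (the values are only a starting point):
mysql> SHOW GLOBAL STATUS LIKE 'Aborted_connects';
mysql> SET GLOBAL net_read_timeout = 60;
mysql> SET GLOBAL connect_timeout = 10;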
I'm using Node.js to run a web-server for my web application. I'm also using the node-mysql module to interface with a MySQL server for all my persistent database needs.
Whenever there is a critical error within my Node.js application that crashes my app's process I get an email sent to me. So, I keep getting this email with an error saying "Too many connections". Here's an example of the error:
Error: Too many connections
at Function.Client._packetToUserObject (/apps/x/node_modules/mysql/lib/client.js:394:11)
at Client._handlePacket (/apps/x/node_modules/mysql/lib/client.js:307:43)
at Parser.EventEmitter.emit (events.js:96:17)
at Parser.write.emitPacket (/apps/x/node_modules/mysql/lib/parser.js:71:14)
at Parser.write (/apps/x/node_modules/mysql/lib/parser.js:576:7)
at Socket.EventEmitter.emit (events.js:96:17)
at TCP.onread (net.js:396:14)
As you can see all it tells me is that the error is coming from the mysql module, but it doesn't tell me where in my application code the issue is occurring.
My application opens a DB connection any time I need to run one or more queries, and I close the connection immediately after all my queries have run and the data has been collected. So I don't understand how I could be exceeding the 151 max_connections limit.
Unless there is a place in my code where I forgot to call db.end() to close the connection, I don't see how my app could leak like this. Even if there were such a mistake, I wouldn't expect these emails by the dozens. Yesterday I received almost 100 emails with roughly the same error. How could this be happening? If my application had leaked and accumulated connections over time, then as soon as the first error occurred the app process would crash and all connections would be released, preventing the app from crashing again. Since I received ~100 emails, the app crashed ~100 times, all within a short period of time. This could only mean that somewhere in my application a lot of connections were being established in a short period of time, right?
How could I avoid this problem? This is very discouraging. All help is highly appreciated. Thanks
MySQL has a default max_connections = 100, not 151, unless you changed it. Also, in truth you have max_connections + 1: the extra one allows a root user to log on even after you have maxed out the connections, in order to figure out what is actually being used. When your connections are maxed out, try logging on as root and running the following command in MySQL:
mysql> SHOW FULL PROCESSLIST;
Post the output of this command above. Once you actually know what is consuming your resources, you can go about fixing it. It could easily be your code that is leaving connections open.
You should take a look at the following documentation: Show Processlist
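As a side note, later versions of the mysql module for Node.js ship a built-in connection pool, which caps concurrent connections and releases them automatically; a minimal sketch, with placeholder connection details:
var mysql = require('mysql');

// One pool for the whole app, created once at startup.
var pool = mysql.createPool({
  host: 'localhost',          // placeholder connection details
  user: 'app',
  password: 'secret',
  database: 'mydb',
  connectionLimit: 10         // stay well below the server's max_connections
});

// pool.query() borrows a connection and releases it when the callback fires,
// so a forgotten db.end() can no longer exhaust the server.
pool.query('SELECT 1 + 1 AS two', function (err, rows) {
  if (err) throw err;
  console.log(rows[0].two);
});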
+1 for the question. Our investigations showed that node-mysql opens connections and doesn't close them; because of that, at some point we reach the max connections limit. The question is: why doesn't node-mysql close the connections?
I have the following error message:
SQLSTATE[HY000] [2003] Can't connect to MySQL server on
'192.168.50.45' (4)
How would I parse this? (I have HY000, I have 2003, and I have the (4).)
HY000 is a very general ODBC-level error code, and 2003 is the MySQL-specific error code that means that the initial server connection failed. 4 is the error code from the failed OS-level call that the MySQL driver tried to make. (For example, on Linux you will see "(111)" when the connection was refused, because the connect() call failed with the ECONNREFUSED error code, which has a value of 111.)
Using the perror tool that comes with MySQL:
shell> perror 4
OS error code 4: Interrupted system call
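For comparison, the connection-refused case mentioned above:
shell> perror 111
OS error code 111: Connection refused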
It might be a bug where an incorrect error is reported; in this case, it might be a simple connection timeout (errno 111).
FWIW, having spent around 2-3 months looking into this in a variety of ways, we have come to the conclusion that (at least for us) the (4) error happens when the network is too full of data for the connection to complete in a sane amount of time. From our investigations, the (4) occurs midway through the handshake process.
You can reproduce this in a Unix environment by using 'netem' to fake network congestion.
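A minimal sketch with tc/netem (the eth0 interface name and the values are illustrative):
shell> tc qdisc add dev eth0 root netem delay 500ms loss 10%   # add artificial latency and packet loss
shell> tc qdisc del dev eth0 root netem                        # remove it again when done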
The quick solution is to up the connection timeout parameter. This will hide any (4) error, but may not be the solution to the issue.
The real solution is to see what is happening at the DB end at the time. If you are processing a lot of data when this happens, it may be a good idea to see if you can split it into smaller chunks, or even pass the processing to a different server, if you have that luxury.
I happened to face this problem. Increasing connect_timeout finally worked for me.
I was just struggling with the same issue.
Disabling DNS hostname lookups solved the issue for me:
[mysqld]
...
...
skip-name-resolve
Don't forget to restart MySQL for the change to take effect.
@cdhowie While you may be right in other circumstances, with that particular error the (4) is a MySQL client library error caused by a failed handshake. It's actually visible in the source code. The normal reason is too much data causing an internal timeout. Making 'room' for the connection normally sorts it without masking the issue, e.g. upping the timeout or increasing bandwidth.