MariaDB stops working for no apparent reason on a small CentOS 8 server - mysql

I have a small web/mail server with apache/mariadb. Last week we changed some of the WWW code, and to make it work I changed this line in php.ini:
max_input_vars to 5000 (now 4000; it was 1000 at the start)
That seems to have changed something, because our MariaDB 10.3.28 started having problems.
It just stops receiving any information.
Restarting mysql (and httpd) has been helping for about 24 hours at a time so far...
Log:
2022-10-05 14:28:58 2796199 [Warning] Aborted connection 2796199 to db: 'ACTIVEDB' user: 'USER' host: 'localhost' (Got an error reading communication packets)
This kind of warning showed up occasionally before, but now we get dozens every hour.
In PHP I decreased max_input_vars; in my.cnf I added
max_allowed_packet = 124M
max_connections = 400
log_warnings = 3
Everything was at default values before.
The log level was set to 4 for some time, but the log grew too large before the server got a chance to "crash" again.
The disk is a 500 GB Intel NVMe and shows no problems.
I would like to hear:
how to check/connect to MariaDB when it looks inactive
what to check and how (step by step)
Thanks all

This is not an answer, but too long for a comment.
The error "Aborted connection ... (Got an error reading communication packets)" occurs if a client disconnected without sending a COM_CLOSE notification to the server before.
This behavior is easily reproducible, e.g. by starting the command line client and killing the command line client from another session. Depending on the log_warning level, the server will write a log entry and increase the server status variable aborted_clients (or aborted_connects if this happens during connection handshake).
Here are just a few possible reasons:
Before 10.2.4 the default log_warnings level was 1 (no logging of aborted connections); since 10.2.4 the default value is 2 (log aborted connections). If the server was recently upgraded from 10.2.3 or lower, the problem may have already existed in the previous installation but was simply not written to the log file.
The PHP script(s) don't close the connection: as soon as a script has finished its work, make sure that all transactions were committed, memory (result sets) was freed, and the connection was closed properly.
A timeout occurred, e.g. wait_timeout was set too low and was exceeded, PHP's max_execution_time was exceeded (and the script was killed), or a net_read_timeout/net_write_timeout occurred.
DNS problems: in this case enable skip-name-resolve, use IPs, and verify against IPs.
Network or firewall problems
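To see whether these counters are actually climbing, and what the relevant settings currently are, a few standard diagnostic statements help (plain MySQL/MariaDB syntax, nothing specific to this setup):

SHOW GLOBAL STATUS LIKE 'Aborted_clients';   -- clients that disconnected without COM_CLOSE
SHOW GLOBAL STATUS LIKE 'Aborted_connects';  -- failed connection handshakes
SHOW GLOBAL VARIABLES WHERE Variable_name IN
    ('log_warnings', 'wait_timeout', 'net_read_timeout', 'net_write_timeout');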

I would like to thank you all for the suggestions and your time.
It looks like there was a problem with one table. I don't know how or why, but after some time it ended up set as READ-ONLY. There was no information about it in the log up to level 3 (level 4 generated too much information for me :( )
The DB works fine for some time (except for that one table), and over time it seems to hang the whole DB.
The case is still "under investigation".
About "damaged" table :
listed and read-only related thinks works fine
insert/change hangs the whole DB for 2-3 minutes and then gets back to work
after few "hangs" the DB just freezes
there was nothing strange in logs level 3
copy table to new and then change names to switch tables works (I hope so)
If anyone have any idea how to check table would be great (standard operatinos like check, analyse did nothing).
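For reference, these are the standard statements for inspecting and rebuilding a suspect table; the table name below is a placeholder, and the rebuild statements lock the table while they run:

CHECK TABLE suspect_table EXTENDED;   -- deeper integrity check than the default
ANALYZE TABLE suspect_table;          -- refresh index statistics
OPTIMIZE TABLE suspect_table;         -- rebuild the table and its indexes
ALTER TABLE suspect_table FORCE;      -- "null" rebuild, same effect for InnoDB
-- atomic swap once a rebuilt copy exists:
RENAME TABLE suspect_table TO suspect_table_old, suspect_table_new TO suspect_table;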
Thanks again.

Related

MySQL Query running even after losing connection

I have a MySQL 5.1.41 server installed on an Ubuntu machine. I connect to it through Workbench from my Windows machine over TCP/IP. I ran a bigger query, and after 900 seconds I got the message below (there is no wait_timeout defined in the server's configuration file my.cnf):
Error Code: 2013. Lost connection to MySQL server during query
But when I look at the process list using the show processlist; command, I can still see my query running.
I found this link http://dev.mysql.com/doc/refman/5.0/en/gone-away.html where I found the lines below:
The problem on Windows is that in some cases MySQL does not get an
error from the OS when writing to the TCP/IP connection to the server,
but instead gets the error when trying to read the answer from the
connection.
I'm not sure whether this is the reason for my observation.
Please clarify this for me.
Thanks in advance!!
Closing the connection is not a reason to stop a query. A query might be an update, part of a transaction, or a select with output to a remote (server-side) file.
A closed connection just means that you will not receive anything from the DBMS after the query executes (data, timings - nothing).
The reason for the closed connection could vary, as SO-User posted. Try increasing
on server side:
wait_timeout
max_allowed_packet
on client side:
any kind of timeout you find in your client (e.g. the one SO-User suggests)
Do not forget to reload the DBMS config and restart the client (just to be sure)
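On the server side that boils down to something like this; the values are illustrative, not recommendations, and the same settings should also go under [mysqld] in my.cnf so they survive a restart:

SET GLOBAL wait_timeout = 28800;             -- seconds an idle connection is kept open
SET GLOBAL max_allowed_packet = 134217728;   -- 128 MB maximum packet size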
In MySQL Workbench there is an option to change the timeout.
Find it under
Edit → Preferences → SQL Editor → DBMS connection read time out (in seconds): 600
Change the value to 6000 or something higher.
Update
Lost connection to MySQL server
There are three likely causes for this error message.
Usually it indicates network connectivity trouble and you should check
the condition of your network if this error occurs frequently. If the
error message includes “during query,” this is probably the case you
are experiencing.
Sometimes the “during query” form happens when millions of rows are
being sent as part of one or more queries. If you know that this is
happening, you should try increasing net_read_timeout from its default
of 30 seconds to 60 seconds or longer, sufficient for the data
transfer to complete.
More rarely, it can happen when the client is attempting the initial
connection to the server. In this case, if your connect_timeout value
is set to only a few seconds, you may be able to resolve the problem
by increasing it to ten seconds, perhaps more if you have a very long
distance or slow connection. You can determine whether you are
experiencing this more uncommon cause by using SHOW GLOBAL STATUS LIKE
'Aborted_connects'. It will increase by one for each initial
connection attempt that the server aborts. You may see “reading
authorization packet” as part of the error message; if so, that also
suggests that this is the solution that you need.
If the cause is none of those just described, you may be experiencing
a problem with BLOB values that are larger than max_allowed_packet,
which can cause this error with some clients. Sometimes you may see an
ER_NET_PACKET_TOO_LARGE error, and that confirms that you need to
increase max_allowed_packet.
Doc link: Error lost connection
and also check here
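The specific checks that documentation describes map to statements like these (the timeout values are just the examples the docs use):

SHOW GLOBAL STATUS LIKE 'Aborted_connects';  -- climbs when initial handshakes fail
SET GLOBAL net_read_timeout = 60;            -- default is 30 seconds
SET GLOBAL connect_timeout = 10;             -- raise this if the handshake itself times out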

MySQL 5.5 : "Got an error reading communication packets"

I just upgraded MySQL from 5.1 to 5.5.
I fixed a few issues by running mysql_upgrade and changing some deprecated configuration options...
I also updated PHP, from 5.3.3-7 to 5.3.29-1.
But since then, I've been having a recurrent problem (always thrown in this order):
1. Client* - PHP Warning
Warning: Packets out of order. Expected 1 received 0. Packet size=1 in
/home/www/www.mywebsite.com/shared/vendor/doctrine/dbal/lib/Doctrine/DBAL/Connection.php
line 694
2. Client* - PHP Warning
Warning: PDOStatement::execute() [pdostatement.execute]: Error reading
result set's header in
/home/www/www.mywebsite.com/shared/vendor/doctrine/dbal/lib/Doctrine/DBAL/Connection.php
line 694
3. Server* - MySQL Warning :
150127 17:25:15 [Warning] Aborted connection 309 to db:
'my_database' user: 'root' host: '127.0.0.1' (Got an error
reading communication packets)
4. Client* - PHP Error
PDOStatement::execute() [pdostatement.execute]: MySQL server
has gone away in
/home/www/www.mywebsite.com/shared/vendor/doctrine/dbal/lib/Doctrine/DBAL/Connection.php
line 694
*NB: What I call "Client" is the PHP application and "Server" is the MySQL server, even though they're both on the same localhost server.
So, apparently, the origin of all those problems is the first one: "Packets out of order".
But when I search for this error I can't find many answers, and most of the time they are not related to my problem: I use Doctrine as an abstraction layer, so I don't write any queries or fetch any results myself. Plus, other reports almost never show the same values as mine, whereas in my case I always get the same ones ("Expected 1 received 0. Packet size=1").
The closest result would be this MySQL bug report, but "No feedback was provided for this bug for over a month, so it is being suspended automatically"...
Plus, some of the "2." errors aren't thrown by my PHP Doctrine code (they're not executed from my localhost, but from another known external service, probably using some old PHP Propel code).
So that might mean there is a problem with my MySQL configuration itself, but I tried changing some parameters without any obvious effect (for example, sometimes it just takes longer after restarting MySQL before the first errors appear).
Any help would be very much appreciated!
And here is my current configuration (I've got 2 MySQL instances; the second one, using replication, is mostly read-only).
I also checked most of the system resources with Munin and didn't see anything abnormal (RAM usage, for example, is pretty high, but since the server has 50 GB it's not full at all).
UPDATE
I isolated an SQL query that was repeatedly failing from my PHP client. When I executed it from my local machine with MySQL Workbench, it did exactly the same thing (closed the connection with a MySQL server has gone away message). When I ran it from the mysql command line it also did the same. Then I executed it from the mysql command line on the server host, and it succeeded. And some time later, when I tried again from Workbench/whatever, it worked... So it looks like those "corrupted packets" are cached and disappear after some time.
Thanks, I fixed this issue by running:
RESET QUERY CACHE;
FLUSH QUERY CACHE;
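For anyone hitting the same symptom, it may help to inspect the query cache state before clearing it (these statements apply to MySQL 5.5; the query cache was removed entirely in MySQL 8.0):

SHOW VARIABLES LIKE 'query_cache%';  -- is the cache enabled, and how large is it?
SHOW STATUS LIKE 'Qcache%';          -- hit, insert, and low-memory prune counters
RESET QUERY CACHE;                   -- remove all cached query results
FLUSH QUERY CACHE;                   -- defragment the cache without removing entries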

Node.js and MySQL "Too many connections" error

I'm using Node.js to run a web server for my web application. I'm also using the node-mysql module to interface with a MySQL server for all my persistent database needs.
Whenever there is a critical error within my Node.js application that crashes my app's process, I get an email sent to me. So, I keep getting this email with an error saying "Too many connections". Here's an example of the error:
Error: Too many connections
at Function.Client._packetToUserObject (/apps/x/node_modules/mysql/lib/client.js:394:11)
at Client._handlePacket (/apps/x/node_modules/mysql/lib/client.js:307:43)
at Parser.EventEmitter.emit (events.js:96:17)
at Parser.write.emitPacket (/apps/x/node_modules/mysql/lib/parser.js:71:14)
at Parser.write (/apps/x/node_modules/mysql/lib/parser.js:576:7)
at Socket.EventEmitter.emit (events.js:96:17)
at TCP.onread (net.js:396:14)
As you can see, all it tells me is that the error is coming from the mysql module; it doesn't tell me where in my application code the issue is occurring.
My application opens a DB connection any time I need to run one or more queries, and I close the connection immediately after all my queries have run and the data has been collected. So I don't understand how I could be exceeding the 151 max_connections limit.
Unless there is a place in my code where I forgot to call db.end() to close the connection, I don't see how my app could leak like this. Even if there were such a mistake, I wouldn't get these emails by the dozens. Yesterday I received almost 100 emails with roughly the same error. How could this be happening? If my application had leaked connections over time, then as soon as the first error occurred the app process would have crashed and all connections would have been released, preventing the app from crashing again. Since I received ~100 emails, the app must have crashed ~100 times, all within a short period. That could only mean that somewhere in my application a lot of connections were being established in a short period of time, right?
How can I avoid this problem? This is very discouraging. All help is highly appreciated. Thanks
MySQL has a default max_connections of 100, not 151, unless you changed it. Also, in truth you get MAX_CONNECTIONS + 1: the extra one allows the root user to log on even after you have maxed out the connections, in order to figure out what is actually being used. When your connections are maxed out, try logging on as root and running the following command in MySQL.
mysql> SHOW FULL PROCESSLIST
Post the output of this command. Once you actually know what is consuming your resources, you can go about fixing it. It could easily be your code leaving connections open.
You should take a look at the following documentation: Show Processlist
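Two quick server-side checks go along with that, and if you need more headroom while debugging you can raise the limit at runtime (300 is just an example value):

SHOW VARIABLES LIKE 'max_connections';
SHOW GLOBAL STATUS LIKE 'Max_used_connections';  -- high-water mark since startup
SET GLOBAL max_connections = 300;                -- temporary; persist it in my.cnf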
+1 for the question. Our investigation showed that node-mysql opens connections and doesn't close them. Because of that, at some point we reach the max connections limit. The question is: why doesn't node-mysql close the connections?

"Failed Attempt" in MySQL Connection

I am confused about MySQL connections. I have a site that receives heavy traffic during working hours. I use PHP to connect to the MySQL database using a persistent connection.
A few weeks back, I increased mysql connections to 500, which crashed my server, so I put it back to 150.
Now users complain that sometimes they cannot get on the site. I believe this is due to the limited connections.
Can you please tell me whether I should use persistent or non-persistent connections? Which parts of MySQL do I need to tune to get optimized connection handling?
I have attached a screenshot that shows 11K Failed Attempts.
http://i.stack.imgur.com/GkxHP.jpg
Thank you so much...
Update Dec 17, 2011
When I asked this question, I changed the connection type to "non-persistent" and everything started working fine. Today I was surprised to see the stats from phpMyAdmin. Below are the values given by phpMyAdmin:
max. concurrent connections :: 16
Failed Attempts :: 43k
Please suggest some possible solutions. Which parameter should be optimized to avoid or minimize failed attempts?
High-traffic sites should not use persistent connections. I changed the DB connection from persistent to non-persistent in PHP and the problem was solved!
Thanks for your help.
EDIT:
After changing the connection type to non-persistent, don't forget to increase the number of connections. In my case, I increased it to 500 with the type set to non-persistent, and that solved the issue.
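If it helps anyone reading the same stats: phpMyAdmin's "Failed Attempts" figure appears to be derived from the server's Aborted_connects counter, so you can watch the raw numbers directly:

SHOW GLOBAL STATUS LIKE 'Aborted_connects';  -- failed connection attempts since startup
SHOW GLOBAL STATUS LIKE 'Connections';       -- total attempts, for computing a failure rate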

"MySQL server has gone away" with Ruby on Rails

After our Ruby on Rails application has been running for a while, it starts throwing 500s with "MySQL server has gone away". Often this happens overnight. It started doing this recently, with no obvious change in our server configuration.
Mysql::Error: MySQL server has gone away: SELECT * FROM `widgets`
Restarting the mongrels (not the MySQL server) fixes it.
How can we fix this?
Ruby on Rails 2.3 has a reconnect option for your database connection:
production:
  # Your settings
  reconnect: true
See:
Ruby on Rails 2.3 Release Notes, sub section 4.8 Reconnecting MySQL Connections.
MySQL auto-reconnect revisited
Good luck!
This is probably caused by the persistent connections to MySQL going away (a timeout is likely if it's happening overnight) and Ruby on Rails failing to restore the connection, which it should be doing by default:
In the file vendor/rails/actionpack/lib/action_controller/dispatcher.rb is the code:
if defined?(ActiveRecord)
  before_dispatch { ActiveRecord::Base.verify_active_connections! }
  to_prepare(:activerecord_instantiate_observers) { ActiveRecord::Base.instantiate_observers }
end
The method verify_active_connections! performs several actions, one of which is to recreate any expired connections.
The most likely cause of this error is that a monkey patch has redefined the dispatcher so that it no longer calls verify_active_connections!, or that verify_active_connections! itself has been changed, etc.
Try ActiveRecord::Base.connection.verify! in Ruby on Rails 4. verify! pings the server and reconnects if it is not connected.
I had this problem when sending really large statements to MySQL. MySQL limits the size of statements and will close the connection if you go over the limit.
set global max_allowed_packet = 1048576; # 2^20 bytes (1 MB) was enough in my case
As the other contributors to this thread have said, it is most likely that the MySQL server closed the connection to your Ruby on Rails application because of inactivity. The default timeout is 28800 seconds, or 8 hours.
set-variable = wait_timeout=86400
Adding this line to your /etc/my.cnf will raise the timeout to 24 hours. (The set-variable prefix is legacy syntax that was removed in MySQL 5.5; on newer versions just put wait_timeout=86400 under [mysqld].)
http://dev.mysql.com/doc/refman/5.0/en/server-system-variables.html#option_mysqld_wait_timeout.
Although the documentation doesn't indicate it, a value of 0 may disable the timeout completely, but you would need to experiment as this is just speculation.
There are, however, three other situations I know of that can generate that error. The first is the MySQL server being restarted. This will obviously drop all the connections, but since the MySQL client is passive, this won't be noticed until you run the next query.
The second is someone killing your query from the MySQL command line; this also drops the connection, because it could leave the client in an undefined state.
The last is the MySQL server restarting itself due to a fatal internal error. That is, if you are doing a simple query against a table and instantly see "MySQL has gone away", take a close look at your server's logs to check for hardware errors or database corruption.
First, determine the max_connections in MySQL:
show variables like "max_connections";
You need to make sure that the number of connections your Ruby on Rails application makes is less than the maximum allowed. Note that extra connections can come from your cron jobs, delayed_job processes (each has the same pool size from your database.yml), etc.
Monitor the SQL connections as you go through your application, run processes, etc. by doing the following in MySQL:
show status where variable_name = 'Threads_connected';
You might want to consider closing connections after a thread finishes execution, since database connections are not closed automatically (I think this is less of an issue with Ruby on Rails 4 applications thanks to the Reaper):
Thread.new do
  begin
    # Thread work here
  ensure
    begin
      if ActiveRecord::Base.connection && ActiveRecord::Base.connection.active?
        ActiveRecord::Base.connection.close
      end
    rescue
    end
  end
end
The connection to the MySQL server is probably timing out.
You should be able to increase the timeout in MySQL, but for a proper fix, have your code check that the database connection is still alive, and re-connect if it's not.
Using reconnect: true in database.yml will cause the database connection to be re-established AFTER the ActiveRecord::StatementInvalid error is raised (as Dave Cheney mentioned).
Unfortunately, adding a retry on the database operation seemed necessary to guard against the connection timeout:
begin
  do_some_active_record_operation
rescue ActiveRecord::StatementInvalid => e
  Rails.logger.debug("Got statement invalid #{e.message} ... trying again")
  # Second attempt, now that the db connection is re-established
  do_some_active_record_operation
end
Do you monitor the number of open MySQL connections or threads? What is your max_connections setting in my.cnf?
mysql> show status;
Look at Connections, Max_used_connections, Threads_connected, and Threads_created.
You may need to increase the limits in your MySQL configuration, or perhaps Rails is not closing the connections properly*.
Note: I've only used Ruby on Rails briefly...
The MySQL documentation for server status variables is at http://dev.mysql.com/doc/refman/5.0/en/server-status-variables.html.
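To pull just those counters instead of the full status dump, SHOW STATUS also accepts a WHERE clause (MySQL 5.0 and later):

SHOW GLOBAL STATUS WHERE Variable_name IN
    ('Connections', 'Max_used_connections', 'Threads_connected', 'Threads_created');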
Something else to check is Unicorn config is correct. See before_fork and after_fork handling of ActiveRecord connection here: https://gist.github.com/nebiros/2776085#file-unicorn-rb
I had this problem in a Ruby on Rails 3 application using the mysql2 gem. I copied out the offending query and tried running it in MySQL directly, and I got the same error, "MySQL server has gone away."
The query in question was a very large insert (over 1 MB). The field I was trying to insert into was a TEXT column, whose maximum size is 64 KB. Rather than throwing an error, the connection simply went away.
I increased the size of the field and got the same thing, so I'm still not sure what the exact issue was. The point is that it was something in the database triggered by a strange query. Anyway!
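Two server-side things are worth checking in that situation; the table and column names below are made up for illustration:

SHOW VARIABLES LIKE 'max_allowed_packet';    -- must exceed the size of the full INSERT statement
ALTER TABLE widgets MODIFY body MEDIUMTEXT;  -- MEDIUMTEXT holds up to 16 MB vs TEXT's 64 KB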
While forking in Rails.
For anyone running into this while forking in Rails, try clearing the existing connections before forking and then establish a new connection for each fork, like this:
# Clear existing connections before forking to ensure they do not get inherited.
::ActiveRecord::Base.clear_all_connections!

fork do
  # Establish a new connection for each fork.
  ::ActiveRecord::Base.establish_connection

  # The rest of the code for each fork...
end
See this StackOverflow answer here: https://stackoverflow.com/a/8915353/293280