WordPress sends lots of SLEEP SQL queries to MySQL - mysql

My server got stuck last night because of a database connection error.
I investigated and found it was caused by too many database connections. After researching on Google and Stack Overflow, I didn't find any useful information. While I check all my plugins one by one to see if any of them has a bug or something that could have done this, I would like to ask for your help.
First of all, when I logged in to MySQL I could see a lot of SLEEP queries with NULL info. I tried killing all the sleeping queries from the command line, but new connections filled up all the available slots right away.
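For reference, this is roughly how I inspected and killed them; a sketch, with the 300-second threshold picked arbitrarily:

-- List connections idling in Sleep, longest first
SELECT id, user, host, db, time
FROM information_schema.processlist
WHERE command = 'Sleep'
ORDER BY time DESC;

-- Generate KILL statements for long sleepers; review them, then run the output
SELECT CONCAT('KILL ', id, ';')
FROM information_schema.processlist
WHERE command = 'Sleep' AND time > 300;

-- Idle connections are only reclaimed when wait_timeout expires,
-- so a large value here lets them pile up
SHOW VARIABLES LIKE 'wait_timeout';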
The weird thing is that the Apache server is not actually getting a high volume of requests. I am using AWS RDS as my database server, so Apache and MySQL are not on the same machine. The RDS instance doesn't allow public access, so I am sure all requests come only from my Apache server. The CPU usage on the Apache server is not high. I also searched Apache's access_log: there were not many requests at that time, and I cannot find anything wrong with them; in particular, none of the requests look like an injection attack. I thought something in the code might have triggered this, so I searched for 'SLEEP' in all my code, but could only find some occurrences in the W3 Total Cache plugin, and those code blocks are not easily reached. I turned off XML-RPC at the Apache level, so it shouldn't be an XML-RPC attack.
I know there are lots of possibilities since I am using about twenty plugins on my site, but it is really weird that I cannot find any requests that could have caused this at the Apache level. Is it possible for requests to hit the server without being recorded in access_log?
I am pretty new to configuring Apache and MySQL on my own and still learning these features. Thanks in advance for helping me!

Related

MySQL hanging in Writing to Net

I have a problem where MySQL threads sometimes get stuck in the "Writing to net" status.
I have 4 Apache (2.4) servers (requests are load-balanced across them) and 1 MySQL server (MariaDB 10). Apache executes PHP 5.6. All Apache servers have the same configuration, and all servers run CentOS 7. SELinux is disabled on the Apache servers for debugging reasons. There are no problems in the audit logs on the DB server. All servers are virtual machines located on the same VMware cluster.
The problem appears only on specific pages and with specific queries to the DB.
Usually there are around 100-200 separate queries per page and most of them take 0.0001-0.0010 s. But then I have one query that takes around 1-2 seconds to come back, even though the query itself takes much less time (around 0.0045 s).
The problematic query returns around 8984 rows, and when executed from the CLI by a debug script it runs as fast as expected.
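While it was happening, I could catch the stalled thread from the MySQL side with something like this (a sketch; the state filter matches what SHOW PROCESSLIST reports):

-- Threads stuck sending their result set back to the client
SELECT id, time, state, LEFT(info, 80) AS query
FROM information_schema.processlist
WHERE state LIKE 'Writing to net%'
ORDER BY time DESC;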
The strange thing is that at times some Apache servers execute that page quickly and some slowly, and it changes during the day. I also tried removing one Apache server from the cluster and sending the same request; if the server is not under any load, it usually responds fast.
All servers have enough resources (CPU and RAM), so it is definitely not a load issue. They usually have around 4-10 active Apache workers (prefork) and have capacity for 100 active workers.
I tried debugging with tcpdump: when requesting the page, I can see the packet flow for the fast queries, then it stops for a while and resumes. I'm not sure whether the problem is on the MySQL server or on the Apache servers.
My guess is that I am hitting some kind of limit, but I have no idea which one.
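On the MySQL side I first ruled out the obvious network-related settings, roughly like this (standard MySQL/MariaDB variables; none of them turned out to be the cause):

-- Timeouts and buffers involved when the server writes results to a client
SHOW VARIABLES LIKE 'net_write_timeout';
SHOW VARIABLES LIKE 'net_buffer_length';
SHOW VARIABLES LIKE 'max_allowed_packet';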
The solution is quite odd.
First, a few more details:
All Apache servers have the same application data (PHP files, images, etc.), mounted from NFS. The NFS share was working fine (low latency, no data corruption).
Solution:
When I was desperate, I went through every possible log. Then I noticed that iptables was dropping some packets from the NFS server. I told myself I should probably fix that, even though it seemed unrelated.
But after I allowed all traffic from the NFS server to my Apache servers, the MySQL "Writing to net" status disappeared and all websites started to respond quickly.

MySQL database is loading very slow

I have a WordPress installation with over 5,000 posts. For the last few days, the database has been loading very slowly. I have used a database optimization plugin and optimized the DB tables from the back end, but the issue still exists. After I restart the MySQL server, everything seems to work just fine. The same issue exists on my other website, a Joomla installation on another (Amazon) server with over 50,000 articles. There too I regularly do database optimization, but the pages still load very slowly, sometimes taking over a minute. There is page caching on both sites, yet I am still getting this issue. There are other websites running on the same servers, but they are relatively new with less content than these two sites, and they load very fast. The problem is with these two websites only.
Check out:
Connection pool: maybe you're running out of connections and getting bottlenecked
The server's cache
A slow-queries audit (see the sketch below)
...these are just hints; to get real help, you should include more information about your system's performance.
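As a minimal sketch of such an audit without any plugin, assuming MySQL 5.6+ with performance_schema enabled (timer values are in picoseconds, hence the division):

-- Top 10 statement patterns by average latency
SELECT DIGEST_TEXT, COUNT_STAR,
       AVG_TIMER_WAIT / 1e12 AS avg_seconds
FROM performance_schema.events_statements_summary_by_digest
ORDER BY AVG_TIMER_WAIT DESC
LIMIT 10;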
I'm not sure about the WordPress site's configuration settings, but for Joomla, if you are not already using MySQLi, I would recommend enabling it in the Global Configuration. Try clearing the site cache, and if you are running Joomla 1.5, upgrade to Joomla 2.5 if possible, as it is a little faster when running queries. In the end, it might just be that you are on a shared host, so if you are willing to, a VPS would speed things up.
There can be many reasons for this, such as a hosting server issue (use a dedicated server for such sites), a missing or poor cache plugin (use a good cache plugin available for Joomla and WordPress), or an unoptimized database (use DB optimization plugins).
You can also refer to the following sites:
http://www.sparringmind.com/speed-up-wordpress/
http://wpmu.org/too-many-wordpress-plugins/
http://www.joomlaperformance.com/articles/performance/so_you_want_to_speed_up_joomla_3_14.html

MySQL has suddenly started regularly opening unsuccessful sockets

I've desperately tried to figure out what happened here, but haven't seen this particular problem described anywhere. I've 'inherited' (as in, not built any of it myself) the management of a database server (remote, in a data warehouse, accessed over SSH) where some PHP daemons running on a Linux server act as data crawlers, inserting and processing a relatively steady stream of information into MySQL.
A couple of days ago, the server crashed and came back up again. I logged in and restarted the MySQL server and the crawlers, thinking no more of it. A day and a half later, the MySQL server stopped working; I couldn't diagnose it since I couldn't log into it, nor did it respond to "/etc/init.d/mysql stop" or variations thereof. According to the log file, it kept throwing errors very regularly (once every 4 minutes and 16 seconds), saying that it had too many file handles open. When I shut down the crawlers, however, I could log in again, but MySQL kept throwing the errors. I checked lsof and it showed a lot of open sockets with a "can't identify protocol" error:
mysqld 28843 mysql 1990u sock 0,4 2856488 can't identify protocol
mysqld 28843 mysql 1989u sock 0,4 2857220 can't identify protocol
(thousands of rows like these)
I thought it was something the crawlers had done, so I restarted MySQL and the failed sockets disappeared. But I was surprised to see that MySQL kept opening new ones, even when the crawlers weren't running. It did this very regularly, about two new failed sockets a minute, regardless of whether the crawlers were active. I increased the maximum number of file handles allowed for MySQL to buy some time, but I'm obviously looking for a diagnosis and a permanent solution.
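To keep an eye on it in the meantime, I watch the handle counts from inside MySQL with something like this (standard variable and status names; the counters should stay well below the limit):

-- Compare the configured file-handle ceiling against what mysqld has open
SHOW GLOBAL VARIABLES LIKE 'open_files_limit';
SHOW GLOBAL STATUS LIKE 'Open_files';
SHOW GLOBAL STATUS LIKE 'Open_tables';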
All the descriptions of such errors (socket leaks) that I've found on forums seem to be about your own software leaking by not closing its sockets. But this seems to be MySQL itself doing it, and there has been no change in any of the code since it worked fine, just a server crash and restart.
Any ideas?

MySQL resource temporarily unavailable

I'm seeing a few of these errors during high load times:
mysql_connect() [function.mysql-connect]: [2002] Resource temporarily unavailable (trying to connect via unix:///var/lib/mysql/mysql.sock)
From what I can tell, the MySQL server isn't hitting its max connections limit, but something else is stopping it from serving queries. What other limits could MySQL be hitting?
I'm running RHEL 6.2 64-bit with MySQL 5.5.21.
Let's assume your system is Unix-based (as given in your problem statement). If so, here's the set of issues you may be running into:
You've run out of memory available to MySQL.
This is the most likely problem you're facing. Each connection in MySQL's connection pool requires memory to function, and if this resource is exhausted, no further connections can be made. Of course, the memory footprints and maximum packet sizes of the various operations can be tuned in your equivalent of my.cnf if you discover this to be an issue (see the sketch after this list).
Here's an additional thread that can help there, but you may also consider using simpler profiling tools like top to get a good ballpark estimate of what's going on.
You've run out of file descriptors available to your MySQL user account.
Another common issue: if you're trying to service requests that require file I/O above the default limit of 1,024 open descriptors, you will run into cases where the operation simply fails. This is because most systems specify soft and hard limits on the number of open file descriptors each user can have at one time, and walking over this threshold can cause problems.
This will usually have a series of glaringly obvious signs expressed in your log files. Check /var/log/messages and your comparable directories (for example, /var/log/mysql) to see if you can find anything interesting.
You've run into a livelock or deadlock scenario where your thread is unsatisfiable.
As a corollary to memory and file descriptor exhaustion, threads can time out if you've overstepped the computational load your system is capable of handling. It won't throw this exact error message, but it is something to watch out for in the future.
Your system is running out of PIDs available to fork.
Another common scenario: fork only has so many PIDs available for its use at any given time. If your system is simply over-forked, it will cease to be able to service requests.
The easiest check for this is to see if any other services can connect through to the machine. For example, trying to SSH into the box and discovering that you cannot is a big clue.
An upstream proxy or connection manager has run out of resources and ceased servicing requests.
If you have any service layer between your client and MySQL, it bears inspecting to see if it has crashed, hung, or otherwise become unstable. The advice above applies.
Your port mapper has exhausted itself after 65,536 connections.
Unlikely, but again, a possible exhaustion case. Checking the trivial service connection as above is, ehm, also the best port of call here.
In short: this is a resource exhaustion scenario, up to and including the server simply being "down". You're going to have to profile your system further to see what it's blocking on. All the error message gives us in this case is the fact that the resource is unavailable to the client; we'd need to see more information about the server to determine a more adequate remedy.
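As a minimal sketch of the in-server checks for the first two cases above (standard variable and status names; judging the numbers against available RAM is workload-specific):

-- How close the connection pool is to its ceiling
SHOW VARIABLES LIKE 'max_connections';
SHOW GLOBAL STATUS LIKE 'Max_used_connections';

-- Per-connection buffers that multiply against max_connections
SHOW VARIABLES WHERE Variable_name IN
  ('read_buffer_size', 'sort_buffer_size', 'join_buffer_size', 'tmp_table_size');

-- File-descriptor ceiling as mysqld actually received it
SHOW VARIABLES LIKE 'open_files_limit';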
I still haven't found which limits were being hit, but I did manage to work around the problem. There was a problem with our session table (in vBulletin), which uses the MEMORY engine. The indexes on this table were HASH, so when vBulletin purged the table once an hour, it would lock it just long enough to hold up other queries and push MySQL to the limit of its resources.
Changing the indexes to BTREE allowed MySQL to delete the rows from the session table much more quickly and avoid whatever limits were being reached previously. The errors only started when we upgraded our master DB server to MySQL 5.5, so I'm guessing MEMORY tables are handled differently in the newer release.
See http://www.mysqlperformanceblog.com/2008/02/01/performance-gotcha-of-mysql-memory-tables/ for information on the speed gains from using BTREE instead of HASH indexes on MEMORY tables.
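As a sketch of that kind of change, assuming a hypothetical MEMORY session table purged by an hourly range scan on an expiry column (the table and column names are illustrative, not vBulletin's actual schema):

-- MEMORY tables default to HASH indexes, which can't do range scans;
-- a timestamp-based purge does far better with BTREE
ALTER TABLE session
  DROP INDEX idx_lastactivity,
  ADD INDEX idx_lastactivity USING BTREE (lastactivity);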
Geez, this could be so many things. It could be that the socket buffer space is exhausted. It could be that MySQL is not accepting connections as fast as they are coming in and the backlog limit is being reached (though I'd expect that to give you a "Connection refused" error; I don't know for sure that's what you'd get for a Unix domain socket). It could be any of the things @MrGomez pointed out.
Since you are running Apache and MySQL on the same server and this is a problem under high load, it could well be that Apache is starving the system of some resource and you're just not seeing (noticing?) the dropped/failed incoming connections/requests in your logs.
Are you using connection pooling? If not, I'd start there.
I'd also look for errors in the Apache logs and syslog around the same time as the mysql_connect error and see what else turns up. I'd especially recommend getting MySQL moved over to its own separate dedicated server.
In my case, I was working with JSON data types through PDO (the PHP driver).
I was using fetch to retrieve one item but had forgotten to add LIMIT 1 to the query. Adding it solved the problem.
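A sketch of the difference (table, column, and value are hypothetical):

-- Before: the query could return many rows even though only one was fetched
SELECT payload FROM documents WHERE doc_type = 'invoice';
-- After: constrain the result to the single row actually consumed
SELECT payload FROM documents WHERE doc_type = 'invoice' LIMIT 1;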

PHP App (ExpressionEngine) slow to load on IIS 7, MySQL, FastCGI

I'm working on debugging a slowness issue I've got with running ExpressionEngine (a PHP application) on IIS 7.
I don't think this is actually an issue with ExpressionEngine, but rather an issue with my PHP/MySQL setup.
The problem shows itself thusly:
Go to the website address
IE "spins" for 10-15 seconds, waiting to load. During this time:
processor usage is minimal on the server, and PHP's process is inactive
I see a connection for the site user in MySQL, but the thread is in "sleeping" mode.
There is plenty of free memory on the server
pretty much, the server is doing nothing
After 10-15 seconds, I see the MySQL connection run some really quick queries (very fast) and the site loads in under a second.
This is a fairly complex site, but it doesn't make any sense that the whole system just sits there waiting for 10 seconds, not processing anything. I'm using FastCGI on IIS 7, which seems to be working fine, and to me this seems like some sort of timeout issue where FastCGI, PHP, or maybe even MySQL is waiting for something, not getting it, and continuing to process only after the timeout expires.
Anyone had similar experiences?
Thanks!
P.S. - I should also add that the database (MySQL) and PHP are running on the same server.
MySQL might be trying to do a reverse DNS lookup on the connection from the web server. If you don't need to filter MySQL connections by DNS name, then add skip-name-resolve to your MySQL configuration file.
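A sketch of how to verify and apply this (skip_name_resolve is a standard server variable; the config line goes under [mysqld] and needs a server restart):

-- Check whether reverse DNS lookups are already disabled
SHOW VARIABLES LIKE 'skip_name_resolve';
-- If it reports OFF, add this line to the [mysqld] section of my.cnf/my.ini:
--   skip-name-resolve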
Twitter plugins cause lots of issues in CMSes. Usually there is a function that checks whether Twitter is there/alive. When Twitter gets busy, this slows down the system (that's why it can appear intermittent). Find the Twitter plugin and the routine that checks if Twitter is there, comment out that code, and return true (i.e., don't ask Twitter if it is there, just assume it is).
I have no idea why, but the solution to this was to install PHP 5.3. I had PHP 5.2.10 running, and I guess 5.3 added some extensive optimizations for Windows, or fixed some other weird problem; who knows.
Actually, after some further digging, it appears the problem was with the Twitter plugin. It sometimes waits 25 seconds to come back from Twitter with an error. Maybe this is related to the DNS?