My development system has suddenly been afflicted by a weird problem where every single SQL statement takes exactly 31 seconds to execute over my Classic ASP site's connection to a MySQL (MariaDB) database.
Whether I connect to a local copy of the DB running on my system or to my live DB hosted at a web host, the result is the same.
Everything from a simple
adoconn.Execute("SELECT * FROM users;")
or even
adoconn.Execute("SET sql_mode''")
would take 31 seconds to execute. Each!
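For context, the connection itself is a perfectly ordinary ADODB one over the MySQL ODBC driver, along these lines (a sketch; the driver version and credentials here are placeholders, not my real ones):

' Classic ASP (VBScript): open an ADODB connection through the MySQL ODBC driver
Set adoconn = Server.CreateObject("ADODB.Connection")
adoconn.Open "Driver={MySQL ODBC 8.0 Unicode Driver};Server=localhost;Database=mydb;User=dev;Password=secret;"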
I can safely rule out any problems with the DB itself, as connecting to it and running queries from DBeaver shows no problems at all; the results come back instantly.
I can also rule out network problems, as the local DB and the hosted DB show the same behaviour, and I have used Wireshark to confirm that the MySQL packets are being answered almost immediately by the hosted DB.
Debug-stepping through my ASP code, everything runs fine right up until the .Execute(), at which point it takes 31 seconds, regardless of how complex the statement is.
The strangest thing is that this problem came out of the blue, while my system was powered down, disconnected and untouched over the weekend. No updates, installations or changes were made to the system. On Friday I was doing my dev work perfectly fine, but on Monday morning when I powered it back up, the DB connections were stuffed.
I've already tried configuring MySQL with the "skip-name-resolve" and "bind-address = ::" settings.
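For reference, both of those went into the [mysqld] section of the server config (my.ini on my Windows install; my.cnf on Linux):

[mysqld]
# Skip reverse DNS lookups on incoming connections
skip-name-resolve
# Listen on IPv6 as well as IPv4
bind-address = ::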
I have tried rebuilding my IIS websites and reinstalling IIS itself.
I've also reinstalled the MySQL ODBC driver on my system, to no avail.
What is going on here?
As it turns out, the cause of this whole issue was the McAfee software that came pre-installed on my Dell laptop.
And no, I had not forgotten to disable the firewall and antivirus, mind you.
Those were the first steps I took, and I triple-checked them routinely during my testing. Both McAfee's firewall and auto-protection were fully disabled.
But apparently McAfee ignores those settings, and it was wrecking my DB connections over ODBC.
The problem only came to an end when I fully uninstalled this McAfee malware. There's no other way to describe it.
Let this post be a warning to anyone else naively believing this malware to be anything else.
Related
Following a recent upgrade to Windows 10, my XAMPP didn't seem to want to work (neither Apache nor MySQL would start). So I upgraded that too, to XAMPP for Windows 5.6.12. There were a few port issues initially (possibly due to new services in Windows 10), but once those were fixed I had both Apache and MySQL running.
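For anyone hitting the same port conflicts, the fix was along these lines (a sketch; the replacement ports are arbitrary choices):

# in xampp\apache\conf\httpd.conf, move Apache off port 80
Listen 8080

# in xampp\mysql\bin\my.ini, move MySQL off port 3306
[mysqld]
port = 3307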
However, the PHP pages I am working on, which do a great deal of reading from and writing to a MySQL database, now run unbelievably slowly. A page that used to take a minute or two before any upgrade now takes about 30 minutes. I can see that writing to the database is very slow, and the hard disk is always sitting at around 90 to 100% usage. I have tried many suggested changes (stopping various services, changing the page size, etc.), but it still runs very slowly. I have checked the event log, but there is nothing that stands out as an issue.
I am not sure if it was the upgrade to Windows 10 or to XAMPP that has done this, and I have run out of ideas. I realise this may sound a bit vague, and I am happy to post logs etc., but I am not sure whether there is a simple reason for this, or something simple for me to check.
I have a problem where MySQL threads sometimes get stuck in the "Writing to net" status.
I have 4 Apache (2.4) servers (requests are load-balanced across them) and 1 MySQL server (MariaDB 10). Apache is executing PHP 5.6. All Apache servers have the same configuration. All servers run on CentOS 7. SELinux is disabled on the Apache servers for debugging reasons. There are no problems in the audit logs on the DB server. All servers are virtual and located on the same cluster (VMware).
The problem appears only on specific pages and specific queries to the DB.
Usually there are around 100-200 separate queries per page, and most of them take 0.0001-0.0010 s. But then I have one query that takes around 1-2 seconds, even though the query itself takes far less time (around 0.0045 s).
The problematic query returns around 8984 rows, and when executed from the CLI via a debug script, it runs fast, as expected.
The strange thing is that some Apache servers execute that page quickly and some slowly, and this changes during the day. I also tried removing one Apache server from the cluster and then sending it the same request; if the server is not under any load, it usually responds fast.
All servers have enough resources (CPU and RAM), so it is definitely not a load issue. They usually have around 4-10 active Apache workers (prefork) and capacity for 100 active workers.
I tried debugging with tcpdump: when requesting the page, I can see the packet flow for the fast queries, then it stops for a while and resumes. I am not sure whether the problem is on the MySQL server or on the Apache servers.
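The capture was done with something like this (the interface name and DB host are placeholders):

# watch the MySQL traffic between an Apache server and the DB
tcpdump -i eth0 -nn host 10.0.0.10 and port 3306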
My guess is that I am hitting some kind of limit, but I have no idea which one.
The solution is quite odd.
First, a few more details:
All Apache servers have the same application data (PHP files, images, etc.) mounted from NFS. The NFS share was working fine (low latency, no data corruption).
Solution:
When I was desperate, I went through every possible log. Then I noticed that iptables was dropping some packets from the NFS server. I said to myself that I should probably fix that, even if it wasn't related.
But after I allowed all traffic from the NFS server to my Apache servers, the MySQL "Writing to net" status disappeared and all websites started to respond quickly.
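The rule itself was nothing more exotic than this (the NFS server address 10.0.0.5 is a placeholder):

# on each Apache server: accept everything from the NFS server ahead of the DROP rules
iptables -I INPUT -s 10.0.0.5 -j ACCEPT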
I am using phpMyAdmin, and it runs very slowly. For everything I do, I see loading dialogs for about 5-10 seconds. When I use phpMyAdmin, Apache uses around 60% CPU.
When phpMyAdmin does something, Apache for a second changes ports to 55617 and 55618.
In my own apps, communication with the database is not slow like it is in phpMyAdmin, and Apache uses under 10% CPU.
Why?
EDIT: SOLVED. I remembered when it started, and I think it started when I enabled Xdebug. I set xdebug.profiler_enable = 0 and it works well now.
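For reference, the line lives in php.ini (in XAMPP usually xampp\php\php.ini), in the Xdebug section:

[XDebug]
; stop Xdebug from writing a profiler trace for every request
xdebug.profiler_enable = 0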
Enabling Xdebug in XAMPP slows down phpMyAdmin
I've desperately tried to figure out what's happened here, but I haven't seen this particular problem described anywhere. I've 'inherited' (as in, not built any of it myself) the management of a database server (remote, in a data warehouse, accessed by SSH) where some PHP daemons running on a Linux server act as data crawlers, inserting and processing a relatively steady stream of information into MySQL.
A couple of days ago, the server crashed and came back up again. I logged in and restarted the MySQL server and the crawlers, thinking no more of it. A day and a half later, the MySQL server stopped working, and I couldn't diagnose it, since I couldn't log into it, nor did it respond to "/etc/init.d/mysql stop" or varieties thereof. According to the log file, it kept throwing errors very regularly (once every four minutes and 16 seconds), saying that it had too many open file handles. When I shut down the crawlers, however, I could log in again, but MySQL kept throwing the errors. I checked lsof and it showed a lot of open sockets with a "can't identify protocol" error.
mysqld 28843 mysql 1990u sock 0,4 2856488 can't identify protocol
mysqld 28843 mysql 1989u sock 0,4 2857220 can't identify protocol
(thousands of rows like these)
I thought it was something the crawlers had done, and I restarted MySQL and the failed sockets disappeared. But I was surprised to see that MySQL kept opening new ones, even when the crawlers weren't running. It did this very regularly, about two new failed sockets per minute, regardless of whether the crawlers were active or not. I increased the maximum number of file handles allowed for MySQL to buy some time, but I'm obviously looking for a diagnosis and a permanent solution.
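To buy that time, the change was roughly this (8192 is just a number I picked):

[mysqld]
open_files_limit = 8192

plus a matching OS-level limit for the mysql user in /etc/security/limits.conf:

mysql soft nofile 8192
mysql hard nofile 8192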
All the descriptions of such errors (socket leaks) that I've found on forums seem to be about your own software leaking, i.e. not closing its sockets. But this seems to be MySQL itself doing it, and there has been no change in any of the code from when it worked fine, just a server crash and restart.
Any ideas?
I'm working on debugging a slowness issue I've got with running ExpressionEngine (a PHP application) on IIS 7.
I don't think this is actually an issue with ExpressionEngine, but rather an issue with my PHP/MySQL setup.
The problem shows itself as follows:
Go to the website address
IE "spins" for 10-15 seconds, waiting to load. During this time:
processor usage is minimal on the server, and PHP's process is inactive
I see a connection for the site user in MySQL, but the thread is in "sleeping" mode.
There is plenty of free memory on the server
pretty much, the server is doing nothing
After 10-15 seconds, I see the MySQL connection run some really quick queries (very fast) and the site loads in under a second.
This is a fairly complex site, but it doesn't make any sense that the whole system just sits there waiting for 10 seconds, not processing anything. I'm using FastCGI on IIS 7, which seems to be working fine, and to me this seems like some sort of timeout issue where FastCGI, PHP, or maybe even MySQL is waiting for something, not getting it, and only continuing to process after the timeout occurs.
Anyone had similar experiences?
Thanks!
P.S. - I should also add that the database (MySQL) and PHP are running on the same server.
MySQL might be trying to do a reverse DNS lookup on the connection from the web server. If you don't need to filter MySQL connections by DNS name, then add skip-name-resolve to your MySQL configuration file.
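An easy way to check whether this is what's happening (assuming you can reach the server while a page is hanging): run

SHOW PROCESSLIST;

while the site is stalled. Connections stuck in the reverse lookup show up as 'unauthenticated user' until the DNS resolution completes or times out.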
Twitter plugins cause lots of issues in CMS systems. Usually there is a function that checks whether Twitter is there/alive. When Twitter gets busy, this slows down the system (that's why it can appear intermittent). Find the Twitter plugin and the routine that checks if Twitter is there, comment out that code, and return true (i.e. don't ask Twitter if it is there, just assume it is).
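As a sketch of what that stub ends up looking like (the function name twitter_is_alive and the plugin internals here are hypothetical; yours will differ):

<?php
// Hypothetical availability check inside the Twitter plugin.
// The original version opened a socket to twitter.com and could block
// the whole page render while waiting for a reply.
function twitter_is_alive() {
    // $fp = @fsockopen('twitter.com', 80, $errno, $errstr, 25);
    // return $fp !== false;
    return true; // don't ask Twitter; just assume it is up
}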
I have no idea why, but the solution to this was to install PHP 5.3. I had PHP 5.2.10 running, and I guess 5.3 added some extensive optimizations for Windows. Or fixed some other weird problem - who knows.
Actually, after some further digging, it appears the problem was with the Twitter plugin. It sometimes waits 25 seconds before coming back from Twitter with an error. Maybe this is related to the DNS?