Using the mysql-slow.log, I have identified a slow query running on one of our pages. That in itself is fine, but the query is extremely slow on our production server compared to our development server.
Both run the same versions of MySQL, Ubuntu, Apache, etc. The main difference is that the production server has more memory and more CPU cores available than the development server.
The other difference, of course, is that the production server handles far more traffic across multiple websites/databases than the development server.
Apart from this, the databases on the production and development servers are identical. However, the slow query takes over 12 seconds on production and between 2 and 3 seconds on development.
Watching htop on the production server shows plenty of spare CPU capacity and memory, so load shouldn't be the issue.
I am now at a loss for where to continue debugging why MySQL is so slow on our production server. Does anyone have any tips?
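One way to continue is to profile the query on both servers and compare where the time goes. A minimal sketch, run in the mysql client; the database name and the SELECT are stand-ins for your own slow query:

```bash
# Profile the slow query and compare per-stage timings on each server.
mysql -u root -p your_database <<'SQL'
SET profiling = 1;
-- Stand-in for the actual query from mysql-slow.log:
SELECT * FROM orders WHERE customer_id = 42;
SHOW PROFILES;              -- total time per profiled statement
SHOW PROFILE FOR QUERY 1;   -- per-stage breakdown (sorting, sending data, ...)
SQL
```

If one stage (for example "Sending data" or "Copying to tmp table") dominates on production but not on development, that points at where the two environments differ.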
Related
We've just moved to a new server. Both run Ubuntu 14.04 LTS; the only difference is basically that the old server ran MySQL 5.5 while the new one runs MySQL 5.6. Both are cloud machines hosted by DigitalOcean, and both operate on default my.cnf settings; not much has been tweaked.
Another important difference is that the new server has double the RAM and CPU power.
Still, while the old server averaged a 0.6-second response time for an API call we use to monitor server health, the new one takes 1.6-1.8 seconds. Yes, the queries contain heavy joins, but that's not the point: the codebase is exactly the same, and the machine itself is supposed to be stronger. The new server also shows peaks of CPU usage a few times every hour, which never happened with MySQL 5.5.
Does this make any sense? To me, not so much, but I'm no MySQL guru.
I ran MySQL Tuner, but I'm unsure whether there's anything relevant in the output:
mysqltuner output for the OLD server: http://pastebin.com/cqSSssW0
mysqltuner output for the NEW server: http://pastebin.com/uk3g1KZa
The only thing that has been tweaked in my.cnf is enabling the slow query log.
Any idea why this could happen? MySQL 5.6 clearly runs faster in the benchmarks I saw online. Any help is very much appreciated.
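One difference worth ruling out is changed defaults between 5.5 and 5.6; for example, performance_schema is enabled by default in 5.6 and costs some memory and CPU. A minimal sketch for diffing the effective settings of the two servers; hostnames and credentials are placeholders:

```bash
# Dump the effective server variables from both machines and diff them.
mysql -h old-server -u root -p -e "SHOW GLOBAL VARIABLES" > old_vars.txt
mysql -h new-server -u root -p -e "SHOW GLOBAL VARIABLES" > new_vars.txt
# Look especially at innodb_buffer_pool_size, performance_schema,
# query_cache_*, and the innodb_flush_* settings.
diff old_vars.txt new_vars.txt
```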
I have a problem where a MySQL thread sometimes gets stuck in the status "Writing to net".
I have 4 Apache (2.4) servers (requests are load-balanced across them) and 1 MySQL server (MariaDB 10). Apache runs PHP 5.6. All Apache servers have the same configuration, and all servers run CentOS 7. SELinux is disabled on the Apache servers for debugging purposes. There are no problems in the audit logs on the DB server. All servers are virtual and located on the same cluster (VMware).
The problem appears only on specific pages and specific queries to the DB.
Usually there are around 100-200 separate queries on a page, and most of them take 0.0001-0.0010 s. But then there is one query that takes around 1-2 s end to end, even though the query itself takes much less time (around 0.0045 s).
The problematic query returns around 8984 rows, and when executed from the CLI by a debug script, it runs as fast as expected.
The strange thing is that at any given time some Apache servers serve that page quickly and some slowly, and this changes during the day. I also tried removing one Apache server from the cluster and sending it the same request; if the server is not under any load, it usually responds fast.
All servers have enough resources (CPU and RAM), so it is definitely not a load issue. They usually have around 4-10 active Apache workers (prefork) and capacity for 100.
I tried debugging with tcpdump: when requesting the page, I can see the packet flow for the fast queries, then it stops for a while and resumes. I'm not sure whether the problem is on the MySQL server or on the Apache servers.
My guess is that I am hitting some kind of limit, but I have no idea which one.
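A way to narrow it down is to catch the stuck sessions in the act on the DB server and review the network-related limits. A minimal sketch; credentials are placeholders:

```bash
# Catch sessions stuck in "Writing to net" while the slow page loads.
mysql -u root -p -e "SHOW FULL PROCESSLIST" | grep -i 'writing to net'

# Review network-related settings that can stall large result sets,
# e.g. net_buffer_length and net_write_timeout, plus max_allowed_packet.
mysql -u root -p -e "SHOW GLOBAL VARIABLES LIKE 'net\_%'"
mysql -u root -p -e "SHOW GLOBAL VARIABLES LIKE 'max_allowed_packet'"
```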
The solution is quite odd.
First, a few more details:
All Apache servers have the same application data (PHP files, images, etc.) mounted from NFS. The NFS share was working fine (low latency, no data corruption).
Solution:
When I was desperate, I went through every possible log. Then I noticed that iptables was dropping some packets from the NFS server. I told myself I should probably fix that, even though it seemed unrelated.
But after I allowed all traffic from the NFS server to my Apache servers, the MySQL "Writing to net" status disappeared and all websites started to respond quickly.
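For reference, a minimal sketch of the kind of check and rule involved; the NFS server address is a placeholder, and spotting the drops assumes your firewall logs or counts them:

```bash
# Per-rule packet counters show which rule is discarding traffic.
iptables -L -n -v

# If drops are logged, they show up in the kernel log.
dmesg | grep -i drop

# Allow all traffic from the NFS server (192.0.2.10 is a placeholder).
iptables -I INPUT -s 192.0.2.10 -j ACCEPT
```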
I have a query which takes only 0.004 s on my development machine (Windows 7 running WampServer on an HDD) but 0.057 s on my production server (CentOS 6.5 running on an SSD) -- a difference of 14x. Both MySQL versions are the same.
The EXPLAIN results are identical on both servers, as are the databases (I exported the database from my production server and imported it into my development machine). I also ran OPTIMIZE TABLE on both servers and tried adding SQL_NO_CACHE, but neither made a difference on either server.
Navicat shows the per-stage timings under the Profile tab for both servers (Production and Development profiles not reproduced here).
The execution times for the queries are consistent on both servers.
The database versions are the same, the content is the same, and the EXPLAIN results are the same. Is there any way to determine why the query takes 14x longer on my production server?
EDIT: To determine whether the MySQL server is under load, I found the Process List area in Navicat and can see that there are only a few processes, all of which are "Sleep" commands. So I don't think the production server is under any load.
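One thing worth checking is whether both servers actually do the same amount of row-level work for the query. A minimal sketch; the SELECT is a stand-in for the real query:

```bash
mysql -u root -p your_database <<'SQL'
FLUSH STATUS;                                    -- reset session counters
SELECT SQL_NO_CACHE * FROM orders WHERE id = 1;  -- stand-in for the real query
SHOW SESSION STATUS LIKE 'Handler%';             -- rows touched per access method
SQL
```

If the Handler counters match on both servers, MySQL is doing identical work and the gap is environmental (per-core CPU speed, virtualization overhead, connection latency) rather than a difference in the query plan.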
The production server seems to be slower in every parameter listed. There could be many factors involved, so you should check each one:
First of all, check whether there is any other load on the production server. Is the server doing something else in the meantime? Use the Linux command top to see the running processes and check whether any of them is using a lot of computing power. Use the MySQL command SHOW STATUS to get information about the MySQL server's state (memory, open tables, current connections, etc.).
Check the hardware: nowadays some desktop PCs are more powerful than cheap virtual servers (CPU, RAM frequency and access times, ...).
MySQL could be using different settings in the two environments.
Make sure you have the same indexes on both databases (one way to compare them is sketched below).
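For the last point, a minimal way to compare index definitions across the two servers; hostnames, credentials, and the database name are placeholders:

```bash
# Dump the index definitions from both servers and diff them.
QUERY="SELECT table_name, index_name, seq_in_index, column_name
       FROM information_schema.statistics
       WHERE table_schema = 'your_database'
       ORDER BY table_name, index_name, seq_in_index"
mysql -h production  -u root -p -e "$QUERY" > prod_indexes.txt
mysql -h development -u root -p -e "$QUERY" > dev_indexes.txt
diff prod_indexes.txt dev_indexes.txt
```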
I have found that MySQL on EC2 (Ubuntu 12.10) can be extremely slow.
A certain set of SQL queries takes just 700 ms on my local PC (Windows 7), whereas on EC2 it takes more than 13 s.
The database is very small, just 12 MB. There is almost no disk I/O during the query.
Nevertheless, the EC2 instance is 20 times slower.
All the databases are based on the same dump: same tables and same indexes. The queries return the same results.
The only difference is the execution time.
I tried M1.small and M2.xlarge (which has 7 times the computing power of M1.small), and the outcome is the same: the queries take almost the same time on both instance types, and both are extremely slow.
Why could this happen?
The problem was with MySQL 5.5, which executes subqueries inefficiently.
My home PC runs MySQL 5.6, which is far better in this regard.
So I upgraded MySQL on EC2 to version 5.6, and it became almost as fast as my home PC (at least as far as a single concurrent query is concerned).
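For context, the classic case is an uncorrelated IN subquery: MySQL 5.5 tends to re-evaluate it as a DEPENDENT SUBQUERY for every outer row, while 5.6 can materialize it once or rewrite it as a semi-join. A hypothetical example of the pattern (the tables and columns are made up):

```bash
mysql -u root -p your_database <<'SQL'
-- Hypothetical schema: orders(id, customer_id), customers(id, country)
EXPLAIN
SELECT *
FROM orders
WHERE customer_id IN (SELECT id FROM customers WHERE country = 'US');
-- 5.5 typically shows select_type = DEPENDENT SUBQUERY here;
-- 5.6 shows MATERIALIZED or a semi-join and evaluates the subquery once.
SQL
```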
Is it normal for MySQL to be slow when connecting to a remote host, or should it have the same performance as connecting to a local host?
I noticed a small performance difference when I connected to a remote host, so I'm wondering whether that's normal.
Assuming the remote machine is equal in processing power to your local machine, the primary difference in speed should be network latency: the round-trip time of network traffic. If you are sending huge amounts of data (e.g., reading or writing large BLOBs), network bandwidth can come into play as well and "slow" things down. But in general, the round-trip cost is often the biggest factor. If you are executing a large number of "small" queries, this cost difference can be fairly significant when comparing a local connection to a remote one.
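A quick way to see this cost directly is to time the same trivial query against a local and a remote server. A rough sketch; hosts and the password are placeholders, and note that each mysql invocation also pays connection setup (itself a few round trips), so this overstates the absolute per-query cost while still showing the local-versus-remote gap:

```bash
# Run 500 trivial queries against each host and compare wall-clock time.
time ( for i in $(seq 500); do mysql -h 127.0.0.1   -u root -pSECRET -e "SELECT 1" >/dev/null; done )
time ( for i in $(seq 500); do mysql -h remote-host -u root -pSECRET -e "SELECT 1" >/dev/null; done )
```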
Out of curiosity, I just ran a test I had already built that simply runs a bunch of UPDATE queries. It does not use MySQL but another client/server DBMS, so the absolute results would likely differ, but the idea is the same and I would expect the relative difference to be similar.
Local host (using IPC comm): 5.3 seconds
Remote host (UDP comm): 20.2 seconds
This involved about 50,000 operations. The remote host was 2 hops away on the LAN with (if I measured it correctly) a round-trip latency of approximately 0.25 ms for a packet with a 1-byte payload. That latency alone accounts for most of the gap: 50,000 round trips × 0.25 ms ≈ 12.5 seconds, close to the observed difference of 14.9 seconds.
It depends entirely on the network connection between the program and the MySQL database server. A slow network will make the database appear slow.
I'd expect a "small performance difference" (as you described it) to be normal for a remote connection.
By default, the MySQL server performs a reverse DNS lookup the first time a client connects from a given address and then stores the result in its host cache. Depending on the speed of the reverse DNS resolution, this can cause a noticeable performance hit.
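If the reverse lookup turns out to be the bottleneck, the usual mitigation is to disable it with skip-name-resolve; note that account grants must then use IP addresses rather than hostnames. A minimal sketch; the config path is typical for Ubuntu and may differ on your system:

```bash
# Disable reverse DNS lookups on client connections.
printf '[mysqld]\nskip-name-resolve\n' | sudo tee -a /etc/mysql/my.cnf
sudo service mysql restart
```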
It can depend on how many MySQL queries you're doing: Slow MySQL Remote Connection
You can optimize your code by converting many small queries into larger ones.
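For example, N single-row lookups pay N round trips, while one batched query pays a single round trip. A hypothetical sketch (the users table is made up):

```bash
mysql -u root -p your_database <<'SQL'
-- Instead of one round trip per row:
--   SELECT name FROM users WHERE id = 1;
--   SELECT name FROM users WHERE id = 2;
--   ...
-- batch the lookups into a single query:
SELECT id, name FROM users WHERE id IN (1, 2, 3, 4, 5);
SQL
```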