How do you debug slowness in MySQL?

I have recently upgraded a system from MySQL 5.6 to MySQL 5.7 and I am experiencing a significant loss of speed in the app.
Besides going over the available config values one by one, learning them, and tweaking them, is there a way to identify/debug the problem?
Google will give me info on how to use the slow query log, but that won't help, as I am not debugging a specific slow query, but rather a slowness that stems, perhaps, from a bad installation/configuration of the entire DB.
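One approach that doesn't depend on catching a single slow query: compare the server variables whose defaults changed between 5.6 and 5.7, and let the sys schema (bundled with 5.7) rank statements by total accumulated latency. A minimal sketch - the variable list below is just a starting point, not a tuning recommendation:

    -- Defaults that changed in 5.7 include sync_binlog = 1; check the
    -- usual suspects against your old 5.6 config.
    SHOW GLOBAL VARIABLES WHERE Variable_name IN
      ('innodb_buffer_pool_size', 'sync_binlog',
       'innodb_flush_log_at_trx_commit', 'optimizer_switch');

    -- The sys schema view is already sorted by total latency, so this
    -- surfaces the statements costing the most time overall.
    SELECT query, exec_count, total_latency
    FROM sys.statement_analysis
    LIMIT 10;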

Related

WordPress: too many queries to the database

I have just moved a WP installation from one hosting provider to another. Everything went fine except for a problem I have with the new installation. Please note that I have moved from a regular VPS to a kinda powerful and fast dedicated machine.
The thing is that now, the website is slower than on the previous server. It takes 6-7 seconds to load a page and, according to Chrome's Dev Tools network panel, it has a period of 3-4 seconds to get the first response byte (TTFB), which is insane.
I have tried the following with no success:
Review database for anomalies
Disable all plugins (and delete them)
Disable template (and delete it)
With these last two actions, I lowered the loading time to 5-6 seconds, which is a lot for a small site (a few hundred posts and 50-60 pages) with no comments enabled. I still have the 3-4 second TTFB period.
After that, I installed the Query Monitor plugin and found out that, at every page load, WP performs hundreds of database queries (ranging from 400 to 800) and, in some cases, even 1500. OMG!
Honestly, I am quite lost here. I mean, on one hand I have this strange database behavior I cannot really understand. And on the other hand, I cannot help wondering how it was faster on the previous & slower server.
By the way, I have moved from MySQL to MariaDB, which should be even faster too. Indexes are kept when dumping & importing the file. I am lost. :(
Any help is greatly appreciated. Apologies for my English (not my language) and please let me know if there is some important information missing. I will be glad to provide all the necessary information that helps me/us troubleshoot this.
Thanks in advance!
I think you should optimize your MySQL config (my.cnf on Linux or my.ini on Windows). To spot problems in MySQL you can try running the MySQLTuner script: https://github.com/major/MySQLTuner-perl.
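It can also help to see exactly what WordPress sends to the database. One way, using only stock MySQL features, is to route the general query log to a table for a short window (it is verbose and adds overhead, so switch it off again promptly):

    -- Send the log to the mysql.general_log table instead of a file.
    SET GLOBAL log_output = 'TABLE';
    SET GLOBAL general_log = 'ON';

    -- Load one WordPress page, then inspect what arrived:
    SELECT event_time, LEFT(argument, 120) AS query_start
    FROM mysql.general_log
    ORDER BY event_time DESC
    LIMIT 50;

    SET GLOBAL general_log = 'OFF';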

Options for speeding up slow SQL queries

We're having issues with a few queries - relatively simple queries - that take too long to process: everything from 3,000 ms to 30,000 ms. We are using PHP 5.5 and MySQL 5.5.28-29.1.
We have a few options, but I am posting here to see if anyone has any experience on each of them:
Currently we are accessing views to get our data; this was done to move the processing load from PHP to MySQL. Would accessing the tables directly improve the query processing speed? I'm thinking not, because it would lead to a lot more queries, since the views are just collations of data.
If we were to install a cache DB, such as SQLite3, to cache the data locally and then sync it to an RDBMS, how would we do that? And would the speed improve?
We're thinking about a NodeJS version as well, using Node WebKit. As far as I can understand, there are npm packages out there that can act as a cache or a DB connection, which would rule out the need for PHP. But how about the speed?
Another option is to set up a dedicated server for this environment (we're using a virtual server environment at the moment), which would most likely speed some parts of it up. But if MySQL is still slow on that server, it's kind of wasted.
These are the alternatives I can think of at the moment. Any suggestions are appreciated.
(I can post the slow SQL queries if need be, but would like to see if anyone has anything to say about our options first)
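On the views point specifically: EXPLAIN will show whether MySQL can merge the view into the outer query or has to materialize it as a temporary table first, which is a common source of multi-second timings. A minimal check (the view and column names here are made up):

    -- A DERIVED step in the plan means the whole view is built as a
    -- temp table before the WHERE clause is applied.
    EXPLAIN SELECT * FROM order_summary_view WHERE customer_id = 42;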

Rails: How to split write/read query across master/slave database

My website has very heavy read traffic, a lot heavier than its write traffic.
To improve the performance of my website I have thought of going with master/slave database configuration.
The Octopus gem seems to provide what I want, but since my app is huge I can't go through millions of lines of source code to change the query distribution (sending read queries to the slave server and write queries to the master server).
MySQL Proxy seems like a great way to resolve this issue, but since it is in alpha I don't want to use it.
So my question is: what is the best way to split read/write queries across master/slave servers?
Is it possible to split read/write queries in Rails without using any gems?
I spiked out two gems, Octopus and Makara, and have written a blog post comparing them: https://ypoonawala.wordpress.com/2015/11/15/octopus-vs-makara-read-write-adapters-for-activerecord-2/
In my opinion, Makara works well and makes up for the issues with Octopus.
With the Octopus gem, you don't have to change much of your code to make write queries go to the master DB server and read queries go to the slaves.
It's a simple configuration file, as stated here.
I've tried this in the past and it worked quite well. The only problem for me is that when the slave is down, it doesn't "redirect" the queries to the master DB server, as I asked here.
But, if you want to configure each individual query destination, it will take some work.
I would suggest you start by mapping your most frequent queries and those that take longest to respond. Knowing those queries, you can optimize them individually. This may already solve part of the problem.
If you still need master-slave replication after that, use the Octopus gem to change the behaviour of only those few complicated queries.
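Whichever gem you settle on, it's worth verifying at the MySQL level where queries actually land. A quick check, assuming a standard master/slave replication setup - run it over each of the app's connections:

    -- A master typically shows read_only = 0 and an empty slave status;
    -- a slave shows read_only = 1 plus replication details.
    SELECT @@hostname, @@server_id, @@read_only;
    SHOW SLAVE STATUS\G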

How to benchmark and optimize a really database-intensive Rails action?

There is an action in the admin section of a client's site, say Admin::Analytics (which I did not build but have to maintain), that compiles site usage analytics by performing a couple dozen rather intensive database queries. This functionality has always been a bottleneck to application performance whenever the analytics report is being compiled. But the bottleneck has become so bad lately that, when accessed, the site comes to a screeching halt and hangs indefinitely. Until yesterday I never had a reason to run the "top" command on the server, but doing so I realized that Admin::Analytics#index causes mysqld to spin at upwards of 350% CPU on the quad-core production VPS.
I have downloaded fresh copies of the production data and the production log. However, when I access Admin::Analytics#index locally on my development box, while using the production data, it loads in about 10-12 seconds (and utilizes ~150% of my dual-core CPU), which sadly is normal. I suppose there could be a discrepancy in MySQL settings that has suddenly come into play. Also, a mysqldump of the database is now 531 MB, when it was only 336 MB 28 days ago. Anyway, I do not have root access on the VPS, so tweaking mysqld performance would be cumbersome, and I would really like to get to the exact cause of this problem. However, the production logs don't contain info on the queries; they merely report the time these requests took, which averages out to a few minutes apiece (although in one instance they seemed to have caused mysqld to stall for much longer than this, prompting me to ask our host to reboot mysqld just to get our site back up).
I suppose I can try upping the log level in production to solicit info. on the database queries being performed by Admin::Analytics#index, but at the same time I'm afraid to replicate this behavior in production because I don't feel like calling our host up to restart mysqld again! This action contains a single database request in its controller, and a couple dozen prepared statements embedded in its view!
How would you proceed to benchmark/diagnose and optimize/fix this action?!
(Aside: obviously I would like to completely replace this functionality with Google Analytics or a similar solution, but I need to fix this problem before proceeding.)
I'd recommend taking a look at this article:
http://axonflux.com/building-and-scaling-a-startup
In particular, query_reviewer and newrelic have been life-savers for me.
I appreciate all the help with this, but what turned out to be the fix was to implement a couple of indexes on the Analytics table to cater to the queries in this action. A simple Rails migration added the indexes, and the action now loads in less than a second both on my dev box and on prod!
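For anyone finding this later, the shape of the fix was roughly the following (illustrative names only - the post doesn't give the real schema):

    -- An index matching the columns the analytics queries filter and
    -- group on turns repeated full-table scans into index lookups.
    CREATE INDEX index_analytics_on_created_at_and_event
      ON analytics (created_at, event);

    -- Confirm the optimizer now uses it:
    EXPLAIN SELECT event, COUNT(*)
    FROM analytics
    WHERE created_at >= '2010-01-01'
    GROUP BY event;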

Help! Why did MySQL just screech to a halt?

Our relatively high-traffic website just screeched to a halt, and we're totally stumped. We run on Django and MySQL (InnoDB), and we're trying to figure out why it's all of a sudden totally slow.
Here's what we know so far:
On our mysql server, a simple query (from django shell) runs fast.
On our app server, a simple query (from django shell) runs very slow.
Without having any details on the query or on the tables involved in the query, it is quite difficult to answer this question.
Most likely it is because of a lot of data in the table and a missing index on the field you are querying.
This would explain why it is slow on the production box, but fast on the dev box (since there's less data).
To answer the question better, could you provide us with more details? Table structure, query, number of rows in the table, etc.?
More assumptions: disk I/O on the app server could be a problem; maybe the log files in MySQL are not properly configured (especially with InnoDB this could lead to a problem). Maybe there's a load-heavy query running too often? Table locks when multiple users write to/read from the same tables?
As I said, without having more details, it is quite difficult to guess. But I hope, at least I could point you in the right direction.
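A few quick, non-destructive checks for the guesses above, using stock MySQL commands:

    SHOW FULL PROCESSLIST;                -- anything stuck in "Locked" or long "Sending data"?
    SHOW ENGINE INNODB STATUS\G           -- lock waits, pending I/O, log flush pressure
    SHOW GLOBAL STATUS LIKE 'Threads_%';  -- connection pile-ups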
Run EXPLAIN on the SELECT.
Study this page carefully:
http://dev.mysql.com/doc/refman/5.0/en/using-explain.html
Understanding the concepts on that page is key to properly indexing your tables.
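For example (table and column names here are made up):

    EXPLAIN SELECT * FROM orders WHERE customer_id = 42;
    -- If "key" is NULL and "rows" is close to the table size, MySQL is
    -- scanning the whole table; an index on customer_id would fix that.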
Thanks for the responses everyone.
Turns out it was a DNS issue (which was a regression). MySQL is really stupid in that the default is to use DNS lookups. They got really slow, which killed all the network flow between the app server and the db server. It was as simple as adding "skip-name-resolve" to our my.cnf.
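For reference, skip-name-resolve is read-only at runtime, so it has to go in my.cnf under [mysqld] followed by a restart; you can then confirm it took effect with:

    SHOW GLOBAL VARIABLES LIKE 'skip_name_resolve';
    -- Should report ON. Note that with it enabled, grants must be
    -- defined by IP address, since hostnames can no longer be matched.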
Are the 'mysql server' and 'app server' on the same box and talking to the same DB instance?
Your question suggests not, so I'd look for a problem on the network - start by pinging the database server from each box and comparing the results.
Once you've done that you'll need to be a little more specific about the problem - were the ping times the same, are you running the same query, etc...