MySQL config tuning for long-running queries

I have a SELECT query that takes around 2 minutes to run, and it is causing our app to hang on the new cloud DB we migrated to. The new cloud DB has only 3.5 GB of memory and 1 vCPU.
On our old VM DB, which has around 16GB of memory, the same query takes only 0.6 seconds.
The SELECT query often drives CPU usage to 100%, and it looks like other queries don't get executed while this long-running query is running.
569 rows in set (1 min 52.23 sec)
Is there anything I can configure in my.cnf to get better results, and mainly to prevent the app from hanging? These are the only settings I have right now:
open_files_limit = 102400
max_connections = 5000
innodb_flush_log_at_trx_commit = 0
innodb_thread_concurrency = 8
log_bin_trust_function_creators = 1
innodb_buffer_pool_size = 2800M
innodb_log_file_size = 600M
innodb_rollback_on_timeout = ON
innodb_log_buffer_size = 16M
It's a query that returns the number of friends. Some users might have around 600 friends, and getting that list is what's causing the issue. We can't change the query at the moment since it's hardcoded in the app, but looking at the query, it seems optimized.
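A standard first diagnostic for this kind of hang is to look at what the server is doing while the query runs and at the optimizer's plan. A minimal sketch (the friends query itself is not shown in the question, so it stays a placeholder here):
-- See what every session is doing while the slow query runs;
-- look for other queries stuck waiting behind it.
SHOW FULL PROCESSLIST;
-- Show the optimizer's plan; a missing index shows up as a full
-- table scan with a large "rows" estimate.
EXPLAIN SELECT ...;  -- substitute the actual hardcoded friends query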

Update: I had to rebuild the indexes, and that fixed the issue. After the dump was imported, I ran mysqlcheck database_name -p --optimize, and the issue was resolved.
The query is now performing well, CPU usage has decreased, memory is being used as expected, and query caching is also working.
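For reference, a minimal sketch of that rebuild step (database_name as in the update above; for InnoDB tables, --optimize maps to OPTIMIZE TABLE, which rebuilds the table and its indexes and refreshes index statistics):
# Rebuild and re-analyze every table in the database (prompts for a password)
mysqlcheck database_name -p --optimize
# Roughly equivalent SQL for a single table (the table name is a placeholder):
# OPTIMIZE TABLE database_name.friends;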

Related

MySQL server query execution takes a long time

Why does MySQL take longer on a standalone server than inside WAMP?
Machine 1 has MySQL 5.6.17 installed (inside WAMP).
Machine 2 has MySQL 5.7.24 installed (separate server).
Both machines have the same configuration and the same OS.
I imported the same DB dump file to Machine 1 and Machine 2.
Now I execute a query (it gets data from 6 joined tables) that returns 400 rows.
Time taken:
Machine 1 (5.6.17, inside WAMP): below 30 seconds
Machine 2 (5.7.24): more than 230 seconds
Should I use MySQL (WAMP) instead of MySQL Server?
I think the MySQL server needs a larger innodb_buffer_pool_size in my.ini, which is located in C:\ProgramData (a hidden folder by default).
The default innodb_buffer_pool_size there is 8M.
innodb_buffer_pool_size: this is a very important setting to look at immediately after an installation that uses InnoDB. The buffer pool is where InnoDB caches data and index pages; making it as large as possible ensures that memory, not disk, serves most read and write operations. A typical value is 5-6GB on a server with 8GB of RAM.
Fix: Increase innodb_buffer_pool_size
innodb_buffer_pool_size=356M
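After changing the value and restarting MySQL, it is worth confirming what the running server actually uses; the value is reported in bytes:
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
-- 356M would show as 373293056 (newer servers may round to a chunk multiple)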

Weird spikes in MySQL query times

I'm running Node.js with MySQL (InnoDB) for a game server (player info, save data, that kind of thing). The server is HTTP(S) based, so nothing realtime.
I'm having these weird spikes, as you can see from the graphs below (the first graph is requests/sec and the last graph is queries/sec).
On the response time graph you can see max response times in purple and average response times in blue. Even with those 10-20k peaks, the average stays at 50-100 ms, as do 95% of the requests.
I've been digging around and found that the slow queries are nothing special: usually an UPDATE writing save data (a blob of ~2 KB) or a player profile update that modifies something like the username. No joins or anything like that, and we're talking about tables with fewer than 100k rows.
Server is running in Azure on Ubuntu 14.04 with MySQL 5.7 using 4 cores and 7GB of RAM.
MySQL settings:
innodb_buffer_pool_size=4G
innodb_log_file_size=1G
innodb_buffer_pool_instances=4
innodb_log_buffer_size=4M
query_cache_type=0
tmp_table_size=64M
max_heap_table_size=64M
sort_buffer_size=32M
wait_timeout=300
interactive_timeout=300
innodb_file_per_table=ON
Edit: it turned out that the problem was never MySQL performance but Node.js performance before the SQL queries were even issued. More info here: Node.js multer and body-parser sometimes extremely slow
Check your swappiness (it is supposed to be 0 on MySQL machines that maximize RAM usage):
> sysctl -A|grep swap
vm.swappiness = 0
With only 7G of RAM and a 4G buffer pool alone, your machine will swap if swappiness is not zero.
Could you post your swap and used-memory graphs? A 4G buffer pool is "over the edge" for 7G of RAM. For 8G of RAM I would give it 3G, as you have about +1G on everything else MySQL-wise plus ~2G for the OS.
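If it is not zero, standard sysctl usage changes it immediately and persists it across reboots:
> sysctl -w vm.swappiness=0
> echo "vm.swappiness = 0" >> /etc/sysctl.conf   # persist for the next boot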
Also, you have 1G per transaction log file, and I assume you have two log files. Do you really have enough writes to need files that large? You can use this guide: https://www.percona.com/blog/2008/11/21/how-to-calculate-a-good-innodb-log-file-size/
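The gist of that guide: measure how many bytes of redo log you write per minute and size the files to hold roughly an hour of writes. A minimal sketch using standard status counters:
-- Sample the redo-log byte counter one minute apart
SHOW GLOBAL STATUS LIKE 'Innodb_os_log_written';
SELECT SLEEP(60);
SHOW GLOBAL STATUS LIKE 'Innodb_os_log_written';
-- (second value - first value) * 60 = redo bytes per hour;
-- divide by innodb_log_files_in_group (typically 2) for a per-file size.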

MySQL - Slow query - wp_options table. Website unable to handle traffic

After spending several days researching, I have placed a website on a c1.medium instance (Amazon Linux) and the MySQL database on a db.m1.medium RDS instance running MySQL 5.6.13. I have allocated 100 GB for the DB instance and set the provisioned IOPS to 1,000. The website is photo based, permits user uploads, and at peak hours has 400+ visitors.
Once I enabled slow query logging, I found that the issue appears to be with the wp_options table, which, when I looked in phpMyAdmin, contains information on the WordPress plugins and theme. An example from the slow log:
# Time: 140120  3:04:17
# User@Host: xxxx  Id: 744
# Query_time: 49.248039  Lock_time: 0.000180  Rows_sent: 485  Rows_examined: 538
SET timestamp=1390186963;
SELECT option_name, option_value FROM wp_options WHERE autoload = 'yes';
After experimenting with a few of the DB parameters, I set query_cache_type to 1 and query_cache_size to 64MB. I was hoping that enabling caching would stop the database from repeatedly reading the wp_options table, but that unfortunately doesn't appear to be the case. Any suggestions? What would be the next steps to figure out the cause of this issue? Looking at the CloudWatch metrics, the hardware appears to be sufficient, but maybe not?
(Screenshots of the CloudWatch metrics for both the EC2 and RDS instances were attached here.)
Unless
Query_time: 49.248039 Lock_time: 0.000180 Rows_sent: 485 Rows_examined: 538
is a copy/paste error, something is very, very wrong here. That's 50 seconds to select 485 rows out of 538! The only reason for this that I can imagine is that you have some EXTREMELY long values in option_value, which is a longtext column. Try
SELECT option_name, length(option_value) FROM wp_options WHERE autoload = 'yes';
to check whether something went wrong here, like (as you say it's a photo website) an image or a video file sneaking into the theme configuration; it would have to be at least a few dozen gigabytes in size to produce the effect you're seeing.
If you can, mysqldump your database, import the dump into a local database, and try the same SELECT on your local copy. This might help in deciding whether it's a problem with the data or some artificial limit on your Amazon instance that's set too low.
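A sketch of that local-copy test; the host, user, and database names below are placeholders, not values from the question:
# Dump from RDS and load into a local scratch database
mysqldump -h your-rds-endpoint -u appuser -p wordpress > wordpress.sql
mysql -u root -p -e "CREATE DATABASE wp_copy"
mysql -u root -p wp_copy < wordpress.sql
# Re-run the suspect query locally, biggest values first
mysql -u root -p wp_copy -e "SELECT option_name, LENGTH(option_value) AS len FROM wp_options WHERE autoload = 'yes' ORDER BY len DESC LIMIT 10"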
It appears that there is a spike of incoming traffic large enough to create a significant number of server threads. These threads take up so much memory that the instance is pushed to using swap which slows everything to a crawl.
Use an FCGI configuration for PHP. If you are using mod_php, every Apache thread loads mod_php, even if the thread is not serving a request that requires PHP processing.
Install APC if you have not already; it will cache PHP bytecode and speed up requests.
Install W3 Total Cache and configure it to use memory caching with memcached. You will likely need to install memcached and a PHP memcached extension.
If the above isn't enough, set up Varnish and/or pass your site through CloudFront or Cloudflare.
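For the memcached piece, a sketch on an Amazon Linux box (the package names are an assumption; check your distro's repositories):
sudo yum install memcached php-pecl-memcached   # daemon + PHP client extension
sudo service memcached start
echo stats | nc localhost 11211 | head          # verify it answers on port 11211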

Increasing the number of simultaneous requests to MySQL

Recently we changed the app server of our Rails website from Mongrel to Passenger [with REE and Rails 2.3.8]. The production setup has 6 machines pointing to a single MySQL server and a memcached server. Before, each machine had 5 Mongrel instances; now we have 45 Passenger instances, as each machine has 16GB of RAM and two 4-core CPUs. Once we deployed this Passenger setup in production, the website became very slow and all the requests started to queue up, and eventually we had to roll back.
We now suspect that the cause is the increased load on the MySQL server: before there were only 30 MySQL connections, and now we have 275. The MySQL server has a similar setup to our website machines, but all the configs were left at their defaults. The buffer pool size is only 8 MB even though we have 16GB of RAM, and the number of concurrent threads is 8.
Would this increase in simultaneous connections have caused MySQL to respond more slowly than when we had only 30 connections? If so, how can we make MySQL perform well with 275 simultaneous connections in place?
Any advice greatly appreciated.
UPDATE:
More information on the mysql server:
RAM: 16GB. CPU: two processors, each with 4 cores.
Tables are InnoDB, with only default InnoDB config values.
Thanks
An idle MySQL connection uses up a stack and a network buffer on the server. That is worth about 200 KB of memory and zero CPU.
On a database server using InnoDB only, you should edit /etc/sysctl.conf to include vm.swappiness = 0 to delay swapping out processes as long as possible. You should then increase innodb_buffer_pool_size to about 80% of the system's memory, assuming a dedicated database server machine. Make sure the box does not swap; that is, VSIZE should not exceed system RAM.
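For the 16GB box described in the question, that advice works out to something like the following sketch (12G rather than a strict 80% to leave headroom for connection buffers and the OS):
# /etc/sysctl.conf -- delay swapping on the dedicated DB box
vm.swappiness = 0
# my.cnf, [mysqld] section -- roughly 80% of 16GB of RAM
innodb_buffer_pool_size = 12G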
innodb_thread_concurrency can be set to 0 (unlimited), or to 32-64 if you are a bit paranoid, assuming MySQL 5.5. The limit is lower in 5.1, and around 4-8 in MySQL 5.0. It is not recommended to use such outdated versions of MySQL on a machine with 8 or 16 cores; there are huge improvements with respect to concurrency in MySQL 5.5 with InnoDB 1.1.
The variable thread_concurrency has no meaning on a current Linux. It is used to call pthread_setconcurrency(), which does nothing on Linux. It used to have a function on older Solaris/SunOS.
Without further information, the cause of your performance problems cannot be determined with any certainty, but the above general advice may help. More general advice geared at my limited experience with Ruby can be found at http://mysqldump.azundris.com/archives/72-Rubyisms.html That article is the summary of a consulting job I once did for an early version of a very popular Facebook application.
UPDATE:
According to http://pastebin.com/pT3r6A9q , you are running 5.0.45-community-log, which is awfully old and does not perform well under concurrent load. Use a current 5.5 build; it should perform far better than what you have there.
Also, fix the innodb_buffer_pool_size. You are going nowhere with only 8M of pool here.
While you are at it, innodb_file_per_table should be ON.
Do not switch on innodb_flush_log_at_trx_commit = 2 without understanding what that means, but it may help you temporarily, depending on your persistence requirements. It is not a permanent solution to your problems in any way, though.
If you have any substantial kind of writes going on, you need to review the innodb_log_file_size and innodb_log_buffer_size as well.
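Pulling the above points together, a my.cnf sketch for this box; the values are illustrative starting points under these assumptions, not tuned numbers:
[mysqld]
innodb_buffer_pool_size = 12G    # ~80% of 16GB on a dedicated DB server
innodb_file_per_table = ON
innodb_log_file_size = 256M      # review against your actual write volume
innodb_log_buffer_size = 8M
# innodb_flush_log_at_trx_commit = 2   # only with an understood durability
#                                      # trade-off: an OS crash can lose ~1s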
If that installation is earning money, you dearly need professional help. I am no longer doing this as a profession, but I can recommend people. Contact me outside of Stack Overflow if you want.
UPDATE:
According to your processlist, you have very many queries in state Sending data. MySQL is in this state when a query is being executed, that is, the main interior Join Loop/Query Execution loop is busy. SHOW ENGINE INNODB STATUS\G will show you something like
...
--------------
ROW OPERATIONS
--------------
3 queries inside InnoDB, 0 queries in queue
...
If that number is larger than say 4-8 (inside InnoDB), 5.0.x is going to have trouble. 5.5.x will perform a lot better here.
Regarding the my.cnf: See my previous comments on your InnoDB. See also my comments on thread_concurrency (without innodb_ prefix):
# On Linux, this does exactly nothing.
thread_concurrency = 8
You are missing all InnoDB configuration entirely. Assuming that you ARE using InnoDB tables, you will not perform well, no matter what else you do.
As far as I know, it's unlikely that merely maintaining/opening the connections would be the problem. Are you seeing this issue even when the site is idle?
I'd try http://www.quest.com/spotlight-on-mysql/ or similar to see if it's really your database that's the bottleneck here.
In the past, I've seen basic networking craziness lead to behaviour similar to what you describe - someone had set up the new machines with an incorrect subnet mask.
Have you looked at any of the machine statistics on the database server? Memory/CPU/disk IO stats? Is the database server struggling?

How can I set the max number of MySQL processes or threads?

ps axuw | grep mysql indicates only one MySQL process, but if I run htop I can see 10 rows, each with a separate PID. So I wonder if they are threads or processes that for some reason I cannot see using ps.
Would it make any sense to try to limit them to two on my development machine, where I don't need concurrent access from many clients?
BTW Running on Ubuntu 8.10
You can set the max number of threads in your my.ini like this:
max_connections=2
However you might also want to set this:
thread_cache_size=1
The thread cache controls how many threads MySQL keeps alive even when nothing is happening.
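You can watch the effect of both settings on a live server; Threads_cached, Threads_connected, Threads_created, and Threads_running are standard status counters:
SHOW GLOBAL STATUS LIKE 'Threads%';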
MySQL does use threads; ps can see them if you run ps -eLf.
That said, I wouldn't worry about it - dormant threads use almost no resources whatsoever, and if you constrain the server too much it's bound to come back and bite you on the backside sometime later when you've forgotten that you did it.
There are a few configuration settings in /etc/mysql/my.cnf that impact memory usage.
The following settings:
key_buffer = 8M
max_connections = 30
query_cache_size = 8M
query_cache_limit = 512K
thread_stack = 128K
should drastically reduce the memory usage of MySQL.
Read more here: http://opensourcehacker.com/2011/03/31/reducing-mysql-memory-usage-on-ubuntu-debian-linux/
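To see what the running server currently uses for these (note that key_buffer in my.cnf is an older alias for the key_buffer_size variable):
SHOW VARIABLES WHERE Variable_name IN
  ('key_buffer_size', 'max_connections', 'query_cache_size',
   'query_cache_limit', 'thread_stack');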
I was searching for MySQL config material when I saw this question... this part has nothing to do with MySQL, am I right?
If the main objective is to see the result of a custom command repeatedly, you can use watch with the following syntax (available on most Linux systems):
watch "ps axuw| grep mysql"
It will run the command every 2 seconds and display the output; it is a very, very useful command.
See the doc/man page to see how powerful it is ;)