Why does increasing innodb_buffer_pool_size slow down SELECTs? - mysql

I have a server with 32 GB of RAM and a database of about 20 GB. I run MySQL with innodb_buffer_pool_size set to 10G. While trying to improve performance, I increased the value to 20G, but that only slowed down the SELECT queries. Can anybody explain why that happens?

Try EXPLAIN to get a general idea of why the queries are slow.
Try SHOW ENGINE INNODB STATUS to see how the buffer pool is used and what your InnoDB engine is doing.
There may be a lot of other factors in play here. By increasing innodb_buffer_pool_size you may be exhausting physical RAM (in combination with the other configuration options), and then MySQL spills into swap, which is on disk... and disk operations are always slow...
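For example (the table and column names here are placeholders), something along these lines shows the query plan and the buffer pool statistics:

-- Show the execution plan of a slow query (my_table / my_col are made-up names)
EXPLAIN SELECT * FROM my_table WHERE my_col = 42;

-- Dump InnoDB internals; look at the "BUFFER POOL AND MEMORY" section
SHOW ENGINE INNODB STATUS\G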

Related

Tuning MySQL RDS which has WriteIOPS unstabilities

We are having trouble tuning MySQL 5.6 on AWS, and are looking for pointers to solve some performance issues.
We used to have dedicated servers and we could configure them how we like.
Our application uses a lot of temporary tables, and we had no performance issues in that regard.
Until we switched to an AWS RDS instance.
Now, many slow queries show up in the logs and it slows down the whole application.
Previously we worked with MySQL 5.4 and now it's 5.6.
Looking through the docs, we discovered some changes regarding the default storage engine for temporary tables.
It is InnoDB by default; we set it back to MyISAM as we were used to, and saw improvements in that regard.
first aspect:
Also, our DB is quite large, we have hundreds of simultaneous accesses to our application, and some tables require real-time computation. Joins and unions are used in those cases.
When developing the application (with MySQL 5.4), we found that by splitting the larger queries into two or more steps and using intermediate tables, the overall performance improved.
EXPLAIN showed filesort and temporary file usage, so we could get rid of those with temporary tables.
Is splitting queries into temporary tables really a good idea?
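For illustration, here is roughly the kind of split we do (the table and column names below are made up):

CREATE TEMPORARY TABLE tmp_recent_orders AS
    SELECT customer_id, SUM(amount) AS total
    FROM orders
    WHERE created_at >= CURDATE() - INTERVAL 7 DAY
    GROUP BY customer_id;

SELECT c.name, t.total
FROM customers AS c
JOIN tmp_recent_orders AS t ON t.customer_id = c.customer_id
ORDER BY t.total DESC;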
Edit: We know about the conditions that cause implicit conversion of in-memory (MEMORY) temporary tables to on-disk tables.
Another thing we are not sure about: what makes a query use temporary files?
We know that using correct indexes is one way to go (correct column order in the WHERE clause, correct usage of ORDER BY, etc…), but is there anything else we could do?
second aspect:
Regarding the settings we are using, we have a few hundred MB for max_heap_table_size and tmp_table_size, so we hoped that our temporary tables could fit in memory.
We also found articles suggesting that we look at ReadIOPS and WriteIOPS.
The reads are stable and low, but the writes are showing unstable and high numbers.
Here is a graph:
The values on the vertical axis are operations/sec.
How can we interpret those numbers?
One thing to know about our application is that every user action is logged into one big logs table, but that should happen only once per page load.
third aspect:
How far can we go with those settings so that temporary tables can stay in memory?
For instance, we read some articles explaining that they set a few GB of max_heap_table_size on a dedicated MySQL server with about 12 GB of RAM (it sounded like 80% or so).
Is that really something we can try? (Are same settings applicable on RDS?)
Also, can we set innodb_buffer_pool_size at the same value as well?
Note: I can't find the article where I found that info, but I might have confused some parameter names. I'll edit the question if I find the source again.
The server settings are very different from what we used to have (the new servers on AWS are not set by us) and many settings values have been increased, and some decreased.
We fear it's not a good thing…
Here are some of the noticeable changes:
innodb_buffer_pool_size (increased *6)
innodb_log_buffer_size (decreased /4)
innodb_additional_mem_pool_size (decreased /4)
sort_buffer_size (decreased /2)
myisam_sort_buffer_size (increased *1279) (we have only InnoDB tables; do we need to touch that?)
read_buffer_size (increased *2)
join_buffer_size (decreased /4)
read_rnd_buffer_size (increased *4)
tmp_table_size + max_heap_table_size (increased *8)
Some changes look weird to us (like myisam_sort_buffer_size); we think that keeping our previous settings would have been better in the first place.
Can we have some pointers on those variables? (Sorry, we can't provide the exact numbers)
Also, is there a good article we could read that sums up a good balance between all those parameters?
Because we are concerned about temporary tables not fitting in memory, I wrote this query to see what percentage of temporary tables are actually written to disk:
SELECT
    tmp_tables,
    tmp_tables_disk,
    ROUND((tmp_tables_disk / tmp_tables) * 100, 2) AS to_disk_percent
FROM
    (SELECT variable_value AS tmp_tables
       FROM information_schema.GLOBAL_STATUS
      WHERE variable_name = 'Created_tmp_tables') AS s1,
    (SELECT variable_value AS tmp_tables_disk
       FROM information_schema.GLOBAL_STATUS
      WHERE variable_name = 'Created_tmp_disk_tables') AS s2;
The result is 10~12% (depending on the instance). Is that high?
TL;DR
I tried to add as many details to the question as possible about the current situation; I hope it's not more confusing than anything...
The issues are about writes.
Is there a way to diagnose what causes so many writes? (maybe linked to temporary tables?)
MySQL 5.4 never existed. Perhaps MariaDB 5.4?
Temp tables, even if seemingly not hurting performance, are a clue that things could be improved.
Do you have TEXT columns that you don't need to fetch? (This can force an in-memory temp table to be converted to an on-disk one.)
Do you have TEXT columns that could be a smaller VARCHAR? (Same reason.)
Do you understand that INDEX(a,b) may be better than INDEX(a), INDEX(b)? ("composite" index)
The buffer_pool is very important for performance. Did the value of innodb_buffer_pool_size change? How much data do/did you have? How much RAM do/did you have?
In general, "you cannot tune your way out of a performance problem".
Is this a Data Warehouse application? Building and maintaining Summary tables is often a huge performance boost for DW "reports".
MyISAM only has table locking. This leads to blocking, which leads to all sorts of performance problems, especially slowing down queries. I'm surprised that InnoDB was slower for you. MyISAM is a dead end. By the time you get to 8.0, there will be even stronger reasons to switch.
Let's see one of the slow queries that benefited from splitting up. There are several techniques for making complex queries run faster (at least in InnoDB).
The order of ANDed things in WHERE does not matter. It does matter in composite indexes.
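A hypothetical illustration (made-up table and columns): for a query like the first statement below, one composite index usually beats two single-column indexes, and the order of the AND'd conditions in the WHERE clause is irrelevant:

-- Made-up example query
SELECT * FROM orders WHERE status = 'shipped' AND customer_id = 123;

-- Better than INDEX(status) plus INDEX(customer_id):
ALTER TABLE orders ADD INDEX idx_status_customer (status, customer_id);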
Changing tmp_table_size rarely helps or hurts. It should, however, be kept under 1% of RAM -- this is to avoid running out of RAM. Swapping is the worst thing that can happen.
Graphs - what are the Y-axis units? Where did the graphs come from? They look useless.
"Gb of max_heap_table_size on a dedicated MySQL server with about 12Gb of ram. (sounded like 80% or so)" -- Check that article again. I recommend 120M for that setting on that machine.
When exclusively using MyISAM on a 12GB machine, key_buffer_size should be about 2400M and innodb_buffer_pool_size should be 0.
When exclusively using InnoDB on a 12GB machine, key_buffer_size should be about 20M and innodb_buffer_pool_size should be 8G. Did you change those when you tested InnoDB?
innodb_additional_mem_pool_size -- Unused, and eventually removed.
myisam_sort_buffer_size -- 250M on a 12G machine. If using only InnoDB, the setting can be ignored since that buffer won't be used.
As for the *6 and /4 -- I need to see the actual sizes to judge anything.
"sums up a good balance between all those parameters" -- Two things:
http://mysql.rjweb.org/doc.php/memory
Leave the rest at their defaults.
For more tuning advice, plus Slowlog advice, see http://mysql.rjweb.org/doc.php/mysql_analysis

MySQL (InnoDB) config settings for better performance?

Recently I upgraded my VPS from 1 GB to 4 GB of memory. I'd hoped that the queries (MySQL/InnoDB) would run faster with more memory, but unfortunately that's not the case. Does MySQL automatically take more memory when a server has more memory, or do I have to change some settings in my.cnf? And if so, what changes should I make?
MySQL will not automatically take advantage of more installed memory.
In your case (given that you are using InnoDB) you can do at least the following to improve MySQL's performance:
increase innodb_buffer_pool_size (the default value for this option is 128MB). This defines how much memory is dedicated to InnoDB for caching its table data and indexes, which means that if you can allocate more memory, MySQL will cache more of its data, resulting in faster queries (because MySQL will look in memory instead of doing I/O operations for data lookups).
Of course you should allocate a reasonable amount of memory (not the whole 4 GB :)), maybe not more than 2 GB. You should try it and test it on the server for a more accurate result. (Read this for more info before you change this option: https://dev.mysql.com/doc/refman/5.7/en/innodb-buffer-pool-resize.html)
increase innodb_buffer_pool_instances. For your case, maybe 1 or 2 instances are more than enough. (You can read more here: https://dev.mysql.com/doc/refman/5.7/en/innodb-parameters.html#sysvar_innodb_buffer_pool_instances)
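For example, assuming MySQL 5.7 or later (where the buffer pool can be resized online), you could try a larger pool at runtime and persist the value in your config file once you are happy with it:

-- Resize the buffer pool to 2 GB at runtime (MySQL 5.7+; earlier versions need a restart)
SET GLOBAL innodb_buffer_pool_size = 2 * 1024 * 1024 * 1024;

-- Verify the current values
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
SHOW VARIABLES LIKE 'innodb_buffer_pool_instances';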
But before you start editing my.ini, do the calculations for your case. Consider your MySQL server's load, slow queries, etc. for a more accurate setup of the options in my.ini.

MariaDB my.cnf settings medium->heavy traffic

Could someone help me with a problem I have? It involves MariaDB configuration.
I have a server that has E5-Xeon CPU, 96GB DDR3 RAM, SSD Storage Space (1.2TB).
Recently something weird has been happening. Some pages load very slowly, others load instantly. The pages that load slowly include SELECT or INSERT queries.
Most of the tables use MyISAM, but I also have InnoDB tables.
My my.cnf file is pretty much the default one, and I was wondering what settings I should use.
I am using MariaDB, version 10.0.23.
The site has around 15,000 members, but never more than 1500-2500 online at the same time.
Thank you for any help I get :)
There are far too many possibilities to answer your question without more info. But here are the 'most important' settings:
For 96GB RAM and a mixture of InnoDB and MyISAM:
innodb_buffer_pool_size = 32G
innodb_buffer_pool_instances = 16
key_buffer_size = 9G
The key_buffer does not need to be bigger than the sum of all MyISAM indexes. Reference.
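To check that, a query along these lines sums the size of all MyISAM indexes (in MB):

SELECT ROUND(SUM(index_length) / 1024 / 1024) AS myisam_index_mb
FROM information_schema.tables
WHERE engine = 'MyISAM';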
For more info, turn on the slow log, wait a while, then summarize it using pt-query-digest or mysqldumpslow -s t to see the top couple of queries. Then focus on optimizing them in some way. Often it is as simple as devising the optimal composite index.
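If the slow log is not already on, something like this enables it at runtime (adjust the threshold to taste) so you have data to feed pt-query-digest:

SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 1;  -- log anything slower than 1 second
SHOW VARIABLES LIKE 'slow_query_log_file';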
What is Max_used_connections? If it is really 1500-2500, then you have one set of issues.
Do not set query_cache_size bigger than, say, 100M. That is a known performance killer.
If you tweaked any other 'variables', fess up.
For further critique of your settings, provide me with SHOW VARIABLES and SHOW GLOBAL STATUS.
MyISAM only has "table locking", which can slow things down; converting to InnoDB is likely to help. More discussion.

Is tuning the innodb_buffer_pool_size important on Solaris ZFS?

We're running a moderate-size (350GB) database with some fairly large tables (a few hundred million rows, 50GB) on a reasonably large server (2 x quad-core Xeons, 24GB RAM, 2.5" 10k disks in RAID10), and are getting some pretty slow inserts (e.g. a simple insert of a single row taking 90 seconds!).
Our innodb_buffer_pool_size is set to 400MB, which would normally be way too low for this kind of setup. However, our hosting provider advises that this is irrelevant when running on ZFS. Is he right?
(Apologies for the double post on https://dba.stackexchange.com/questions/1975/is-tuning-the-innodb-buffer-pool-size-important-on-solaris-zfs, but I'm not sure how big the audience is over there!)
Your hosting provider is incorrect. There are various things you should tune differently when running MySQL on ZFS, but reducing the innodb_buffer_pool_size is not one of them. I wrote an article on the subject of running MySQL on ZFS and gave a lecture on it a while back. Specifically regarding innodb_buffer_pool_size, what you should do is set it to whatever would be reasonable on any other file system, and because O_DIRECT doesn't mean "don't cache" on ZFS, you should set primarycache=metadata on your ZFS file system containing your datadir. There are other optimisations to be made, which you can find in the article and the lecture slides.
I would still set the innodb_buffer_pool_size much higher than 400M. The reason? The InnoDB buffer pool will still cache the data and index pages you need for frequently accessed tables.
Run this query to get the recommended innodb_buffer_pool_size in MB:
SELECT CONCAT(ROUND(KBS / POWER(1024, IF(pw < 0, 0, IF(pw > 3, 0, pw))) + 0.49999),
              SUBSTR(' KMG', IF(pw < 0, 0, IF(pw > 3, 0, pw)) + 1, 1)) AS recommended_innodb_buffer_pool_size
FROM (SELECT SUM(data_length + index_length) KBS
        FROM information_schema.tables
       WHERE engine = 'InnoDB') A,
     (SELECT 2 pw) B;
Simply use either the result of this query or 80% of installed RAM (in your case 19660M) whichever is smaller.
I would also set the innodb_log_file_size to 25% of the InnoDB buffer pool size. Unfortunately, the maximum value of innodb_log_file_size is 2047M (1M short of 2G). Thus, set innodb_log_file_size to 2047M, since 25% of the innodb_buffer_pool_size I recommend would be 4915M.
Yet another recommendation is to relax ACID compliance. Use either 0 or 2 for innodb_flush_log_at_trx_commit (the default is 1, which gives full ACID compliance). This will produce faster InnoDB writes AT THE RISK of losing up to 1 second's worth of transactions in the event of a crash.
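For example, the setting can be changed at runtime to test the effect before putting it in my.cnf (keeping in mind the crash-safety trade-off described above):

-- Write the log at each commit, but fsync it only about once per second
SET GLOBAL innodb_flush_log_at_trx_commit = 2;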
It may be worth reading slow-mysql-inserts if you haven't already. Also see this link to the MySQL docs on the matter - especially with regard to using a transaction if you are doing multiple inserts into a large table.
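For instance (made-up table and columns), batching several inserts into a single transaction avoids a log flush for every row:

START TRANSACTION;
INSERT INTO big_table (col1, col2) VALUES (1, 'a');
INSERT INTO big_table (col1, col2) VALUES (2, 'b');
INSERT INTO big_table (col1, col2) VALUES (3, 'c');
COMMIT;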
More relevant is this MySQL article on the performance of InnoDB on ZFS, which specifically considers the buffer pool size.
The headline conclusion is;
With InnoDB, the ZFS performance curve suggests a new strategy of "set the buffer pool size low, and let ZFS handle the data buffering."
You may wish to add some more detail such as the number / complexity of the indexes on the table - this can obviously make a big difference.
Apologies for this being rather generic advice rather than from personal experience; I haven't run ZFS in anger, but I hope some of those links might be of use.

MySQL MyISAM & innoDB Memory Usage

Does anyone know how much memory MyISAM and InnoDB use? How does their memory usage compare when dealing with small tables vs. bigger tables (up to 32 GB)?
I know InnoDB is heavier than MyISAM, but just how much more?
Any help would be appreciated.
Thanks,
jb
You can't compare them like that. Or at least, you shouldn't. Each one uses memory in a different way. This is especially true if you're tuning your DBs for performance.
MyISAM has specific buffers for indexes and it uses the OS disk buffer for caching other data. It doesn't make sense to have your buffers larger than the sum of your indexes, but the more memory you give it, the faster it will be.
InnoDB has a buffer pool for all data. You configure this based on your available memory and how much you want to give it. InnoDB buffers as much of your data in memory as possible. If you can fit the entire DB in memory, InnoDB will never read from disk. A lot of InnoDB databases see huge performance hits when the data size becomes larger than the buffer pool.
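A rough way to check whether your working set fits in the buffer pool is to compare logical read requests with the reads that actually had to hit disk:

SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read_requests';
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_reads';
-- If Innodb_buffer_pool_reads keeps growing quickly relative to the read requests, the pool is too small for the working set.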
MySQL is very configurable. It's tunable to meet your needs. Typically, databases should be given as much memory as possible since they are almost always disk bound. More memory means more can be buffered.