Tuning MySQL RDS with unstable WriteIOPS - mysql

We are having trouble tuning MySQL 5.6 on AWS and are looking for pointers to solve some performance issues.
We used to have dedicated servers that we could configure however we liked.
Our application uses a lot of temporary tables, and we had no performance issues in that regard
until we switched to an AWS RDS instance.
Now many slow queries show up in the logs and slow down the whole application.
Previously we worked with MySQL 5.4 and now it's 5.6.
Looking through the docs, we discovered some changes regarding the default storage engine for temporary tables.
It is InnoDB by default now; we set it back to MyISAM as we were used to and observed improvements in that regard.
first aspect:
Also, our DB is quite large, we have hundreds of simultaneous accesses to our application, and some tables require real-time computation; joins and unions are used in those cases.
When developing the application (with MySQL 5.4), we found that splitting the larger queries into two or more steps and using intermediate tables improved overall performance.
EXPLAIN showed filesort and temporary file, and we could get rid of those with temporary tables.
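For illustration, here is the kind of split we mean (the table and column names are invented, not our real schema):

-- step 1: materialize the expensive aggregation first
create temporary table tmp_recent_totals as
select customer_id, sum(amount) as total
from orders
where created_at >= now() - interval 7 day
group by customer_id;

-- step 2: join the small intermediate result instead of the big table
select c.name, t.total
from tmp_recent_totals as t
join customers as c on c.id = t.customer_id
order by t.total desc;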
Is splitting queries into temporary tables really a good idea?
Edit: We know the conditions that trigger implicit conversion of MEMORY temporary tables to disk.
Another thing we are not sure about: what makes a query use temporary files?
We know that using correct indexes is one way to go (correct column order in the WHERE clause, correct usage of ORDER BY, etc.), but is there anything else we could do?
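For example, this is the kind of index fix we already apply (names invented): a composite index that matches the WHERE column first and the ORDER BY column second lets the ordering come straight from the index:

-- query: select ... from tickets where status = 'open' order by created_at
alter table tickets add index idx_status_created (status, created_at);
-- with this index, EXPLAIN no longer shows "Using filesort" for that query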
second aspect:
Regarding the settings we are using, we have a few hundred MB for max_heap_table_size and tmp_table_size, so we hoped that our temporary tables would fit in memory.
We also found articles suggesting we look at ReadIOPS and WriteIOPS.
The reads are stable and low, but the writes show unstable and high numbers.
Here is a graph (the values on the vertical axis are operations/sec):
How can we interpret those numbers?
One thing to know about our application: every user action is logged into one big logs table, but that should be only one write per page load.
third aspect:
How far can we go with those settings so temporary tables can be used in Memory?
For instance, we read some articles describing a few GB of max_heap_table_size on a dedicated MySQL server with about 12 GB of RAM (it sounded like 80% or so).
Is that really something we can try? (Are the same settings applicable on RDS?)
Also, can we set innodb_buffer_pool_size at the same value as well?
Note: I can't find the article where I found that info, but I might have confused some parameter names. I'll edit the question if I find the source again.
The server settings are very different from what we used to have (the new servers on AWS were not set up by us): many values have been increased, and some decreased.
We fear it's not a good thing…
Here are some of the noticeable changes:
innodb_buffer_pool_size (increased *6)
innodb_log_buffer_size (decreased /4)
innodb_additional_mem_pool_size (decreased /4)
sort_buffer_size (decreased /2)
myisam_sort_buffer_size (increased *1279) (we have only InnoDB tables; do we need to touch that?)
read_buffer_size (increased *2)
join_buffer_size (decreased /4)
read_rnd_buffer_size (increased *4)
tmp_table_size + max_heap_table_size (increased *8)
Some changes look weird to us (like myisam_sort_buffer_size); we think that keeping the same settings as before would have been better in the first place.
Can we have some pointers on those variables? (Sorry, we can't provide the exact numbers.)
Also, is there a good article we could read that sums up a good balance between all those parameters?
Because we are concerned about the temporary tables not fitting in memory, I wrote this query to see what percentage of temporary tables are actually written to disk:
select
    tmp_tables,
    tmp_tables_disk,
    round((tmp_tables_disk / tmp_tables) * 100, 2) as to_disk_percent
from
    (select variable_value as tmp_tables
     from information_schema.GLOBAL_STATUS
     where variable_name = 'Created_tmp_tables') as s1,
    (select variable_value as tmp_tables_disk
     from information_schema.GLOBAL_STATUS
     where variable_name = 'Created_tmp_disk_tables') as s2;
The result is 10~12% (depending on the instance). Is that high?
TL;DR
I tried to add as many details about the current situation as possible; I hope it's not more confusing than anything...
The issues are about writes.
Is there a way to diagnose what causes so many writes? (maybe linked to temporary tables?)

MySQL 5.4 never existed. Perhaps MariaDB 5.4?
Temp tables, even if seemingly not hurting performance, are a clue that things could be improved.
Do you have TEXT columns that you don't need to fetch? (This can force an in-memory temp table to be turned into an on-disk one.)
Do you have TEXT columns that could be a smaller VARCHAR? (Same reason.)
Do you understand that INDEX(a,b) may be better than INDEX(a), INDEX(b)? ("composite" index)
The buffer_pool is very important for performance. Did the value of innodb_buffer_pool_size change? How much data do/did you have? How much RAM do/did you have?
In general, "you cannot tune your way out of a performance problem".
Is this a Data Warehouse application? Building and maintaining Summary tables is often a huge performance boost for DW "reports".
MyISAM only has table locking. This leads to blocking, which leads to all sorts of performance problems, especially slowing down queries. I'm surprised that InnoDB was slower for you. MyISAM is a dead end. By the time you get to 8.0, there will be even stronger reasons to switch.
Let's see one of the slow queries that benefited from splitting up. There are several techniques for making complex queries run faster (at least in InnoDB).
The order of ANDed things in WHERE does not matter. It does matter in composite indexes.
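A minimal sketch of both points, using a generic table t:

-- WHERE a = 1 AND b = 2  and  WHERE b = 2 AND a = 1  are treated identically;
-- what matters is that the composite index lists both columns:
ALTER TABLE t ADD INDEX idx_a_b (a, b);
-- separate INDEX(a) and INDEX(b) would each help with only one of the columns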
Changing tmp_table_size rarely helps or hurts. It should, however, be kept under 1% of RAM -- this is to avoid running out of RAM. Swapping is the worst thing to happen.
Graphs - what are the Y-axis units? Where did the graphs come from? They look useless.
"Gb of max_heap_table_size on a dedicated MySQL server with about 12Gb of ram. (sounded like 80% or so)" -- Check that article again. I recommend 120M for that setting on that machine.
When exclusively using MyISAM on a 12GB machine, key_buffer_size should be about 2400M and innodb_buffer_pool_size should be 0.
When exclusively using InnoDB on a 12GB machine, key_buffer_size should be about 20M and innodb_buffer_pool_size should be 8G. Did you change those when you tested InnoDB?
innodb_additional_mem_pool_size -- Unused, and eventually removed.
myisam_sort_buffer_size -- 250M on a 12G machine. If using only InnoDB, the setting can be ignored since that buffer won't be used.
As for the *6 and /4 -- I need to see the actual sizes to judge anything.
"sums up a good balance between all those parameters" -- Two things:
http://mysql.rjweb.org/doc.php/memory
Leave the rest at their defaults.
For more tuning advice, plus Slowlog advice, see http://mysql.rjweb.org/doc.php/mysql_analysis

Related

MariaDB my.cnf settings medium->heavy traffic

Could someone help me with a problem I have? It involves MariaDB configuration.
I have a server with an E5 Xeon CPU, 96 GB DDR3 RAM, and SSD storage (1.2 TB).
Recently something weird has been happening. Some pages load very slowly, others load instantly. The pages that load slowly include SELECT or INSERT queries.
Most of the tables use MyISAM, but I also have InnoDB.
My my.cnf file is pretty much the default one, and I was wondering what settings I should use.
I am using MariaDB 10.0.23.
The site has around 15,000 members, but never more than 1,500-2,500 online at the same time.
Thank you for any help I get :)
There are far too many possibilities to answer your question without more info. But here are the 'most important' settings:
For 96GB RAM and a mixture of InnoDB and MyISAM:
innodb_buffer_pool_size = 32G
innodb_buffer_pool_instances = 16
key_buffer_size = 9G
The key_buffer does not need to be bigger than the sum of all MyISAM indexes. Reference.
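One way to check that sum, if you want to verify (a standard information_schema query):

SELECT ROUND(SUM(index_length) / 1024 / 1024) AS myisam_index_mb
FROM information_schema.TABLES
WHERE engine = 'MyISAM';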
For more info, turn on the slow log, wait a while, then summarize using pt-query-digest or mysqldumpslow -s t to see the top couple of queries. Then focus on optimizing them in some way. Often it is as simple as devising the optimal composite index.
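The slow log can be enabled at runtime; the 1-second threshold below is only a starting point:

SET GLOBAL slow_query_log = ON;
SET GLOBAL long_query_time = 1;  -- log anything slower than 1 second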
What is Max_used_connections? If it is really 1500-2500, then you have one set of issues.
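You can check it with:

SHOW GLOBAL STATUS LIKE 'Max_used_connections';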
Do not set query_cache_size bigger than, say, 100M. That is a known performance killer.
If you tweaked any other 'variables', fess up.
For further critique of your settings, provide me with SHOW VARIABLES and SHOW GLOBAL STATUS.
MyISAM only has "table locking", which can slow things down; converting to InnoDB is likely to help. More discussion.

MySQL (MyISAM) variable settings

I have a single-processor dedicated server with 4 GB RAM and a 400 MB MySQL database (MyISAM) that has big performance problems.
The database is used by an ecommerce site.
I already tried to tune it using the mysqltuner script, but without good results.
Because the variable settings have been modified several times, I would like a basic configuration to start from, and thereafter try to tune it.
Try this tool; it always shows good results for performance tuning.
https://tools.percona.com/wizard
For ecommerce, you need InnoDB. If you don't change, you will be burned badly when a crash occurs at just the wrong instant in a monetary transaction.
Make that change, then
key_buffer_size = 20M
innodb_buffer_pool_size = 1000M
read my blog on moving from MyISAM to InnoDB.
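The engine change itself is one statement per table (the table name here is a placeholder):

ALTER TABLE mytable ENGINE=InnoDB;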
When you then find that things are not working fast enough, do
long_query_time = 1
turn on the slowlog
wait a day
run pt-query-digest to find the worst couple of queries
present them to us for critique. The solution could be as simple as adding a composite index, or maybe reformulating a SELECT.
I have redirected you toward slow queries because you cannot "tune" your way out of bad schema, bad queries, etc.

MySQL Optimising Table Cache & tmp disk tables

I'm trying to optimise my MySQL database.
I've got around 90 tables most of which are hardly ever used.
Only 10 or so do the vast bulk of the work running my website.
MySQL status statistics show approx 2M queries over 2.5 days and reports "Opened_tables" of 1.7k (with Open_tables 256). I have the table_cache set at 256, increased from 32.
I presume most of the opened tables are either multiple instances of the same tables from different connections or some temporary tables.
In the same period it reports "Created_tmp_tables" of 19.1k and, more annoyingly, Created_tmp_disk_tables of 5.7k. I have max_heap_table_size and tmp_table_size both set at 128M.
I've tried to optimise my indexes & joins as best I can, and I've tried to avoid BLOB and TEXT fields in the tables to avoid disk usage.
Is there anything you can suggest to improve things?
First of all, don't conclude your MySQL database is performing poorly based on these internal statistics. There's nothing wrong with tmp tables. In fact, queries involving ordering or summaries require their creation.
It's like trying to repair your vehicle after analyzing the amount of time it spent in second gear. Substantially less than 1% of your queries are generating tmp tables. That is good. That number is low enough that these queries might be for backups or some kind of maintenance operation, rather than production.
If you are having performance problems, you will know that because certain queries are working too slowly, and certain pages on your web app are slow. Can you figure out which queries have problems? There's a slow query log that might help you.
http://dev.mysql.com/doc/refman/5.1/en/slow-query-log.html
You might try increasing tmp_table_size if you have plenty of RAM. Why not take it up a couple of hundred megabytes and see if things get better? But they probably won't change noticeably.
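If you do experiment, note that the effective in-memory limit is the smaller of tmp_table_size and max_heap_table_size, so raise both together; a sketch with an arbitrary 256M value:

SET GLOBAL tmp_table_size      = 256 * 1024 * 1024;
SET GLOBAL max_heap_table_size = 256 * 1024 * 1024;
-- applies to connections opened after this point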

Is tuning the innodb_buffer_pool_size important on Solaris ZFS?

We're running a moderately sized (350 GB) database with some fairly large tables (a few hundred million rows, 50 GB) on a reasonably large server (2 x quad-core Xeons, 24 GB RAM, 2.5" 10k disks in RAID 10), and are getting some pretty slow inserts (e.g. a simple insert of a single row taking 90 seconds!).
Our innodb_buffer_pool_size is set to 400MB, which would normally be way too low for this kind of setup. However, our hosting provider advises that this is irrelevant when running on ZFS. Is he right?
(Apologies for the double post on https://dba.stackexchange.com/questions/1975/is-tuning-the-innodb-buffer-pool-size-important-on-solaris-zfs, but I'm not sure how big the audience is over there!)
Your hosting provider is incorrect. There are various things you should tune differently when running MySQL on ZFS, but reducing the innodb_buffer_pool_size is not one of them. I wrote an article on the subject of running MySQL on ZFS and gave a lecture on it a while back. Specifically regarding innodb_buffer_pool_size, what you should do is set it to whatever would be reasonable on any other file system, and because O_DIRECT doesn't mean "don't cache" on ZFS, you should set primarycache=metadata on your ZFS file system containing your datadir. There are other optimisations to be made, which you can find in the article and the lecture slides.
I would still set the innodb_buffer_pool_size much higher than 400M. The reason? The InnoDB buffer pool will still cache the data and index pages you need for frequently accessed tables.
Run this query to get the recommended innodb_buffer_pool_size in MB:
SELECT CONCAT(ROUND(KBS/POWER(1024,IF(pw<0,0,IF(pw>3,0,pw)))+0.49999),
       SUBSTR(' KMG',IF(pw<0,0,IF(pw>3,0,pw))+1,1)) AS recommended_innodb_buffer_pool_size
FROM (SELECT SUM(data_length+index_length) KBS
      FROM information_schema.tables
      WHERE engine='InnoDB') A,
     (SELECT 2 pw) B;
Simply use either the result of this query or 80% of installed RAM (in your case, 19660M), whichever is smaller.
I would also set innodb_log_file_size to 25% of the InnoDB buffer pool size. Unfortunately, the maximum value of innodb_log_file_size is 2047M (1M short of 2G). Thus, set innodb_log_file_size to 2047M, since 25% of my recommended innodb_buffer_pool_size setting would be 4915M.
Yet another recommendation is to relax ACID compliance. Use either 0 or 2 for innodb_flush_log_at_trx_commit (the default is 1, which supports full ACID compliance). This will produce faster InnoDB writes AT THE RISK of losing up to 1 second's worth of transactions in the event of a crash.
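That variable is dynamic, so the trade-off can be tested without a restart:

-- 2 = write the log at commit but flush it to disk only about once per second
SET GLOBAL innodb_flush_log_at_trx_commit = 2;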
It may be worth reading slow-mysql-inserts if you haven't already. Also this link to the MySQL docs on the matter - especially with regard to using a transaction if you are doing multiple inserts to a large table.
More relevant is this MySQL article on the performance of InnoDB and ZFS, which specifically considers the buffer pool size.
The headline conclusion is;
With InnoDB, the ZFS performance curve suggests a new strategy of "set the buffer pool size low, and let ZFS handle the data buffering."
You may wish to add some more detail such as the number / complexity of the indexes on the table - this can obviously make a big difference.
Apologies for this being rather generic advice rather than from personal experience; I haven't run ZFS in anger, but I hope some of those links might be of use.

MySQL maximum memory usage

I would like to know how it is possible to set an upper limit on the amount of memory MySQL uses on a Linux server.
Right now, MySQL will keep taking up memory with every new query requested so that it eventually runs out of memory. Is there a way to place a limit so that no more than that amount is used by MySQL?
MySQL's maximum memory usage very much depends on hardware, your settings and the database itself.
Hardware
The hardware is the obvious part. The more RAM the merrier, and faster disks ftw. Don't believe those monthly or weekly newsletters, though. MySQL doesn't scale linearly - not even on Oracle hardware. It's a little trickier than that.
The bottom line is: there is no general rule of thumb for what is recommended for your MySQL setup. It all depends on the current usage or the projections.
Settings & database
MySQL offers countless variables and switches to optimize its behavior. If you run into issues, you really need to sit down and read the (f'ing) manual.
As for the database -- a few important constraints:
table engine (InnoDB, MyISAM, ...)
size
indices
usage
Most MySQL tips on stackoverflow will tell you about 5-8 so-called important settings. First off, not all of them matter - e.g. allocating a lot of resources to InnoDB when you are not using InnoDB doesn't make a lot of sense, because those resources are wasted.
Or - a lot of people suggest upping the max_connections variable - well, little do they know it also implies that MySQL will allocate more resources to cater for those max_connections - if ever needed. The more obvious solution might be to close database connections in your DBAL, or to lower the wait_timeout to free those threads.
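Lowering the timeout is a one-liner (the value here is illustrative only):

SET GLOBAL wait_timeout = 60;  -- applies to connections opened after this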
If you catch my drift -- there's really a lot, lot to read up on and learn.
Engines
Table engines are a pretty important decision; many people forget about them early on and then suddenly find themselves fighting with a 30 GB MyISAM table that locks up and blocks their entire application.
I don't mean to say MyISAM sucks, but InnoDB can be tweaked to respond nearly as fast as MyISAM, and it offers things like row locking on UPDATE, whereas MyISAM locks the entire table when it is written to.
If you're at liberty to run MySQL on your own infrastructure, you might also want to check out Percona Server: besides including a lot of contributions from companies like Facebook and Google (they know fast), it also includes Percona's own drop-in replacement for InnoDB, called XtraDB.
See my gist for percona-server (and -client) setup (on Ubuntu): http://gist.github.com/637669
Size
Database size is very, very important - believe it or not, most people on the Intarwebs have never handled a large and write-intensive MySQL setup, but those really do exist. Some people will troll and say something like "Use PostgreSQL!!!111", but let's ignore them for now.
The bottom line is: judging from the size, decisions about the hardware are to be made. You can't really make an 80 GB database run fast on 1 GB of RAM.
Indices
It's not "the more, the merrier". Only the indices that are needed should be created, and their usage has to be checked with EXPLAIN. Add to that that MySQL's EXPLAIN is really limited, but it's a start.
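Checking usage is as simple as prefixing the query (table and column names invented):

EXPLAIN SELECT * FROM orders WHERE customer_id = 42;
-- key = NULL in the output means no index was used for the lookup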
Suggested configurations
About these my-large.cnf and my-medium.cnf files - I don't even know who they were written for. Roll your own.
Tuning primer
A great start is the tuning primer. It's a bash script (hint: you'll need Linux) which takes the output of SHOW VARIABLES and SHOW STATUS and wraps it into hopefully useful recommendations. If your server has run for some time, the recommendations will be better, since there will be data to base them on.
The tuning primer is not magic sauce, though. You should still read up on all the variables it suggests changing.
Reading
I really like to recommend the mysqlperformanceblog. It's a great resource for all kinds of MySQL-related tips. And it's not just MySQL; they also know a lot about the right hardware and recommend setups for AWS, etc. These guys have years and years of experience.
Another great resource is planet-mysql, of course.
We use these settings:
/etc/my.cnf
innodb_buffer_pool_size = 384M
key_buffer = 256M
query_cache_size = 1M
query_cache_limit = 128M
thread_cache_size = 8
max_connections = 400
innodb_lock_wait_timeout = 100
for a server with the following specifications:
Dell Server
CPU cores: Two
Processor(s): 1x Dual Xeon
Clock Speed: >= 2.33GHz
RAM: 2 GBytes
Disks: 1×250 GB SATA
mysqld.exe was using 480 MB of RAM. I found that adding this parameter to my.ini
table_definition_cache = 400
reduced memory usage from 400,000+ KB down to 105,000 KB.
Database memory usage is a complex topic. The MySQL Performance Blog does a good job of covering your question, and lists many reasons why it's hugely impractical to "reserve" memory.
If you really want to impose a hard limit, you could do so, but you'd have to do it at the OS level, as there is no built-in setting. In Linux, you could utilize ulimit, but you'd likely have to modify the way MySQL starts in order to impose this.
The best solution is to tune your server down, so that a combination of the usual MySQL memory settings results in generally lower memory usage by your MySQL installation. This will of course have a negative impact on the performance of your database, but some of the settings you can tweak in my.ini are:
key_buffer_size
query_cache_size
query_cache_limit
table_cache
max_connections
tmp_table_size
innodb_buffer_pool_size
I'd start there and see if you can get the results you want. There are many articles out there about adjusting MySQL memory settings.
Edit:
Note that some variable names have changed in the newer 5.1.x releases of MySQL.
For example, table_cache is now table_open_cache.
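A quick way to see which spelling your server accepts:

SHOW VARIABLES LIKE 'table%cache';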
About how MySQL uses memory: https://dev.mysql.com/doc/refman/8.0/en/memory-use.html
in /etc/my.cnf:
[mysqld]
...
performance_schema = 0
table_cache = 0
table_definition_cache = 0
max_connect_errors = 10
query_cache_size = 0
query_cache_limit = 0
...
This works well on a server with 256 MB of memory.
If you are looking to optimize your MySQL docker container, the command below may help. I was able to take a MySQL docker container from a default 480 MB down to a mere 100 MB:
docker run -d -p 3306:3306 -e MYSQL_DATABASE=test -e MYSQL_ROOT_PASSWORD=tooor -e MYSQL_USER=test -e MYSQL_PASSWORD=test -v /mysql:/var/lib/mysql --name mysqldb mysql --table_definition_cache=100 --performance_schema=0 --default-authentication-plugin=mysql_native_password
Since I do not have enough reputation points to upvote the previous answer: I concur that the "table_definition_cache = 400" answer worked on my old CentOS server.