mysql tuning variables - current & defaults

I have a pretty vanilla MySQL 5.1 setup, and I am trying to tune it. I found a handy tuning script.
It made the following suggestions:
query_cache_limit (> 1M, or use smaller result sets)
query_cache_size (> 16M)
join_buffer_size (> 128.0K, or always use indexes with joins)
table_cache (> 64)
innodb_buffer_pool_size (>= 14G)
In reading up on what these mean and what they are currently set to, I found that I can run "mysqladmin variables".
My current values are:
query_cache_limit | 1048576
query_cache_size | 16777216
join_buffer_size | 131072
innodb_buffer_pool_size | 8388608
How do I read these? Are they bytes, so is that 1M, 16M, 128K and 8M?
My box has only 4G of RAM, and on a normal day it only has a few hundred megs of memory free. Should I follow these suggestions and do:
#innodb_buffer_pool_size = 15G
#table_cache = 128
#join_buffer_size = 32M
#query_cache_size = 64M
#query_cache_limit = 2M
I'm confused by the 15G. Is this a disk space thing, not a memory thing? If so, then the recommendations are not very good, right?
Should I get more memory for my box?
More Info:
- My DB size is 34 gigs, all InnoDB, with 71 tables; 4 of them are huge and the rest are small. I've been thinking of moving the big ones to Solr and doing all queries from there, but I wanted to see what I can do with basic tuning first.
thanks
Joel

You should not set your InnoDB buffer pool higher than your available memory. The script probably recommended that value based on the number of records in your tables and their physical size. InnoDB performance is very much memory based: if it can't fit the indexes in memory, performance drops quickly and noticeably. So setting innodb_buffer_pool_size as high as your RAM allows is almost always good advice.
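If you want to see how much memory it would take to hold all of your InnoDB data and indexes, you can sum them up from information_schema; a rough sketch (the on-disk figures include some overhead, so treat it as an estimate):
-- Total InnoDB data + index size, to compare against available RAM
SELECT ROUND(SUM(data_length + index_length) / 1024 / 1024 / 1024, 1) AS innodb_gb
FROM information_schema.TABLES
WHERE engine = 'InnoDB';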
InnoDB is not the best table type for everything when it comes to MySQL. Very large tables that mostly take inserts but see few reads and updates (i.e. logging) are better off as MyISAM tables. Your very active tables (inserts, updates, deletes, selects) are better off as InnoDB. There may be a flame war over this advice, and it is generic advice.
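If you do decide to move a mostly-insert logging table over, it is a single statement (the table name below is just a placeholder, and the rewrite can take a while on a big table):
ALTER TABLE access_log ENGINE = MyISAM;  -- placeholder table name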
That said, no script is going to be able to tell you what your settings should be; it can only make a best guess. The best settings depend on your data access patterns, so you really have to read up on what all the variables do. mysqlperformanceblog.com is an excellent place for learning about MySQL, in addition to the manual.
When in mysql, use "SHOW VARIABLES" and "SHOW STATUS" to see what's going on. You can also run "SHOW ENGINE INNODB STATUS", but you may not understand that output if you don't know what the variables are.
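For example, to read the current settings (they are reported in bytes) and convert them to megabytes, something like this should work on 5.1:
-- Current settings, in bytes
SHOW GLOBAL VARIABLES LIKE 'innodb_buffer_pool_size';
SHOW GLOBAL VARIABLES LIKE 'query_cache%';
-- The same values converted to megabytes
SELECT variable_name, variable_value / 1024 / 1024 AS value_mb
FROM information_schema.GLOBAL_VARIABLES
WHERE variable_name IN ('innodb_buffer_pool_size', 'query_cache_size',
                        'query_cache_limit', 'join_buffer_size');
-- Runtime counters, useful for judging whether the caches are actually being used
SHOW GLOBAL STATUS LIKE 'Qcache%';
SHOW ENGINE INNODB STATUS\G  -- \G formats the output vertically in the mysql client
So the values you listed are in bytes: 8M for the buffer pool, 16M for the query cache, 1M for the cache limit and 128K for the join buffer.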

Related

Problem improving MySQL (innodb_log_file_size)

We are trying to improve the efficiency of our database server. One of the recommendations of MySQLTuner is raising innodb_log_file_size to 12 GB. As we understand it, this change could significantly improve the speed and performance of our queries. The problem is that when we increase this parameter above 1 GB, the service won't start. We deleted the logs and stopped it cleanly, and it still can't start with this parameter above 1 GB.
Some info:
mysql Ver 14.14 Distrib 5.5.62, for debian-linux-gnu
innodb_buffer_pool_size = 100G
innodb_file_per_table = ON
innodb_buffer_pool_instances = 64
innodb_stats_on_metadata = OFF
innodb_log_file_size = 1G
innodb_log_buffer_size = 8M
There's also enough space in the partitions to keep logs of this size.
Thanks!
In MySQL 5.5, you can't increase the InnoDB log file size over 4GB total. innodb_log_file_size can only be up to 4GB / innodb_log_files_in_group (which is 2 by default, and there's no benefit to changing that). So you can set the log file to a max of 2GB.
See https://dev.mysql.com/doc/refman/5.5/en/innodb-parameters.html#sysvar_innodb_log_file_size
The max combined size of the log files increased to 512GB in 5.6.3. Again, innodb_log_file_size should be the size of one log file, so if you use multiple log files, the total cannot exceed 512GB.
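To check what your current configuration works out to, you can do the arithmetic from inside MySQL:
-- Current redo log configuration and combined size
SELECT @@innodb_log_files_in_group AS log_files,
       @@innodb_log_file_size / 1024 / 1024 AS file_size_mb,
       @@innodb_log_files_in_group * @@innodb_log_file_size / 1024 / 1024 AS total_mb;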
I agree with Rick James' answer that increasing the log file size is not a magical solution to make queries run faster. It won't do that.
It's sometimes useful to increase the innodb log file size, if the bottleneck is that you run out of log space faster than dirty pages can be flushed to the tablespace, because you have very high write traffic. That's affected by the rate of writes, not the speed of individual writes.
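One common way to judge whether the log size is actually the bottleneck is to measure how much redo log you write per hour and compare it against the combined log size; a rough sketch:
-- Bytes written to the redo log since server start; sample it twice,
-- an hour apart, and subtract. A common rule of thumb is that the combined
-- log files should hold roughly an hour's worth of writes.
SHOW GLOBAL STATUS LIKE 'Innodb_os_log_written';
-- A non-zero, growing Innodb_log_waits also suggests the log is too small.
SHOW GLOBAL STATUS LIKE 'Innodb_log_waits';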
For most apps, two 2GB log files is more than enough. If it isn't, it's probably time to run multiple MySQL instances, and distribute your write traffic over them as evenly as you can.
It's tricky to change the log_file_size in that old version. For the steps, see https://dba.stackexchange.com/questions/1261/how-to-safely-change-mysql-innodb-variable-innodb-log-file-size/4103#4103
However, I predict that changing the log_file_size will not help much. "Performance" problems usually come down to a few slow queries. Do you have the slow log turned on, with a low value of long_query_time? Find the few worst queries; we can probably improve performance by tackling them. Steps on that: http://mysql.rjweb.org/doc.php/mysql_analysis
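If the slow log is not on yet, it can be enabled at runtime in 5.5 without a restart; something along these lines (the 2-second threshold is just a starting point):
SET GLOBAL slow_query_log = 1;
SET GLOBAL long_query_time = 2;  -- seconds; applies to new connections
SHOW GLOBAL VARIABLES LIKE 'slow_query_log_file';  -- where the log is written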
If you change the innodb_log_file_size parameter, you need to remove the old log files. Otherwise, Innodb won't start successfully if the existing files do not match the specified size in the config file.
On the other hand, the innodb_buffer_pool_size should be set to about 70% of available RAM if you are running InnoDB only.

InnoDB table disk space consuming all HDD space

Background
I have a MariaDB server installed; some of the tables use MyISAM and some of them use InnoDB. InnoDB is good for reducing query time because it can use multiple cores, so I changed some of our huge tables to InnoDB.
Then I found my HDD using more and more space. I checked my CentOS 7 machine and found that ibdata1 is consuming the space. I know that to shrink it I need to fully dump my MySQL server into a .sql file, drop all databases, stop the MySQL server, delete the ibdata1 file, set innodb_file_per_table in my.cnf, and finally import the .sql file back into the server.
Everything was going well until I hit this issue.
Issue
I was checking my new HDD usage in real time and realised each table is now using a .ibd file with the same name as the table. And it is HUGE! After finishing the import, the HDD usage is even worse than before. I tried OPTIMIZE TABLE on a 750MB table to see if it would shrink, but no luck. I also have a 14.8GB InnoDB table, but I don't have another 14.8GB free for MySQL to optimize it, and I don't think it would reduce the usage anyway.
Attachment
Current my.cnf
[mysqld]
local-infile = 0
max_connections = 32768
long_query_time = 5
query_cache_type = ON
query_cache_size = 200M
tmp_table_size = 2M
max_heap_table_size = 64M
myisam_sort_buffer_size = 64M
table_open_cache = 4096
thread_concurrency = 28
sort_buffer_size = 16M
read_buffer_size = 16M
join_buffer_size = 16M
innodb_file_per_table
innodb_flush_method = O_DIRECT
innodb_log_file_size = 1G
innodb_buffer_pool_size = 4G
innodb_read_io_threads = 7
innodb_write_io_threads = 7
What can I do now?
Short answer: The disk space used by an InnoDB table (and indexes) is roughly 2x-3x what it would take with MyISAM. This is something to live with.
Long answer:
If you did not have a bunch of spare disk space to start with, your conversion to InnoDB will eventually run out of space, regardless of file_per_table, etc.
innodb_file_per_table = OFF: All data and indexes for all subsequently CREATEd or ALTERed tables go into the file ibdata1. That file only grows; it cannot shrink.
innodb_file_per_table = ON: All data and indexes for all subsequently CREATEd or ALTERed tables go into .ibd files, each with the name of the table. Generally, this is the better approach because it allows for better maintenance in the long run.
Either way, a similar amount of disk space will be taken.
Other issues:
query_cache_size = 200M hurts performance; do not go above about 50M.
Both InnoDB and MyISAM are capable of using multiple CPUs -- but only one CPU per connection. On the other hand, MyISAM does "table locking", so there is less concurrency. (This may have confused you into thinking it was a CPU issue.)
Some ALTERs and all OPTIMIZEs copy the table over. So, during the operation, you need enough disk space for an extra copy of the table. When using ibdata1, this will expand, but not contract, the size of that file. With .ibd, the space is given back to the OS.
ALTER and OPTIMIZE may or may not shrink the size of the table and index(es) (and increase Data_free). OPTIMIZE is almost never useful for InnoDB.
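To see where the space is actually going (and how much of each table is reclaimable free space), you can query information_schema; the schema and table names below are placeholders:
-- Per-table on-disk footprint and internal free space
SELECT table_name,
       ROUND(data_length / 1024 / 1024) AS data_mb,
       ROUND(index_length / 1024 / 1024) AS index_mb,
       ROUND(data_free / 1024 / 1024) AS free_mb
FROM information_schema.TABLES
WHERE table_schema = 'your_db'
ORDER BY data_length + index_length DESC;
-- Rebuilding a file_per_table table copies it and returns unused space to the OS,
-- but it needs enough free disk for the temporary copy while it runs.
ALTER TABLE your_big_table ENGINE = InnoDB;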
Other tips on converting to InnoDB.
I tend to like putting 'tiny' tables into ibdata1 instead of file_per_table, but it is a hassle; I have to think ahead.

What is the best query_cache_size / RAM ratio?

Hi, I want to change my /etc/my.cnf file (MySQL's config file).
What should the values below be for better performance on my queries?
query_cache_type = 1
query_cache_limit = 1M
query_cache_size = 16M
Is there an optimal ratio of query_cache_size to RAM? I have 8GB of RAM on my Ubuntu machine.
If there were a well-defined optimum there would be no need for a configuration option. MySQL would use that optimum by default. The query cache is also only useful for very specific circumstances (you read a lot more from the table than you write to it) because the cache is emptied on a per-table basis every time you write anything to the table. It also only works if you state the exact same queries, with the same parameters, over and over.
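You can check whether the query cache is earning its keep on your workload before tuning its size; a quick sketch:
-- Compare Qcache_hits against Com_select, and watch Qcache_lowmem_prunes
-- (entries thrown out because the cache filled up)
SHOW GLOBAL STATUS LIKE 'Qcache%';
SHOW GLOBAL STATUS LIKE 'Com_select';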
The optimal value for you needs to be measured out and depends a lot on your use case. If you have a lot of InnoDB tables you will get much more use out of the InnoDB buffer pool: innodb_buffer_pool_size. Set this variable as high as possible (and on a MySQL-only, InnoDB-only machine this might mean as much as 80% of your available RAM).
We host hundreds of small websites on our 8GB RAM server, which runs both the database and the web server on the same machine, with a mixture of MyISAM and InnoDB tables. Here is our configuration for comparison:
innodb_file_per_table=1
open_files_limit=50000
max_allowed_packet=268435456
innodb_buffer_pool_size=1G
innodb_log_file_size=256M
innodb_flush_method=O_DIRECT
innodb_io_capacity=1000
innodb_old_blocks_time=1000
innodb_open_files=5000
key_buffer_size=16M
read_buffer_size=256K
read_rnd_buffer_size=256K
query_cache_size=256M
query_cache_limit=5M
join_buffer_size=4M
sort_buffer_size=4M
max_heap_table_size=64M
tmp_table_size=64M
table_open_cache=4500
table_definition_cache=4000
thread_cache_size=50
If your machine has a lot of writes, turn the Query cache completely off (type=0, size=0). That is because every write to a table causes all entries in the QC for that table to be removed.
As a corollary to that, having too big a QC can be "slow". I recommend no more than 50M for query_cache_size.
I hope that explains why I did not address your title question about percent of RAM.
This depends on the size of the query results. If you have a query_cache_limit of 5M and a query_cache_size of 256M, a worst-case scenario will let you end up with roughly 51 query results of 5M each in your cache.
Depending on the type of queries you run most, you may be better off setting a smaller query_cache_limit (64K), giving you room for up to 4096 smaller query results in the cache. On top of this, the results in the cache are smaller and will not lock the query cache longer than needed.
MySQL's query cache is protected by a single lock that is taken on every request. If too many requests hit the query cache at once, overall performance will drop.

Using more memory in MySQL Server

Summary:
I haven't yet been able to get MySQL to use more than 1 core for a select statement and it doesn't get above 10 or 15 GB of RAM.
The machine:
I have a dedicated database server running MariaDB (MySQL 5.6 compatible). The machine is strong, with 48 cores and 192GB of RAM.
The data:
I have about 250 million rows in one large table (plus several other tables ranging from 5 to 100 million rows). I have been doing a lot of reading from the tables, sometimes inserting into a new table to denormalize the data a bit. I am not setting this system up as a transactional system; rather, it will be used more like a data warehouse with few connections.
The problem:
When I look at my server's stats, it looks like CPU is at around 70% for one core with a select query running, and memory is at about 5-8%. There is no IO waiting, so I am convinced that I have a problem with MySQL memory allocation. After searching on how to increase the usage of memory in MySQL I have noticed that the config file may be the way to increase memory usage.
The solution I have tried based on my online searching:
I have changed the tables to the MyISAM engine and added many indexes. This has helped performance, but querying these tables is still incredibly slow. The write speed using LOAD DATA INFILE is very fast; however, running a mildly complex select query takes hours or even days.
I have also tried adjusting the following configurations:
key-buffer-size = 64G
read_buffer_size = 1M
join_buffer_size = 4294967295
read_rnd_buffer_size = 2M
key_cache_age_threshold = 400
key_cache_block_size = 800
myisam_data_pointer_size = 7
preload_buffer_size = 2M
sort_buffer_size = 2M
myisam_sort_buffer_size = 10G
bulk_insert_buffer_size = 2M
myisam_repair_threads = 8
myisam_max_sort_file_size = 30G
max-allowed-packet = 256M
tmp-table-size = 32M
max-heap-table-size = 32M
query-cache-type = 0
query-cache-size = 0
max-connections = 500
thread-cache-size = 150
open-files-limit = 65535
table-definition-cache = 1024
table-open-cache = 2048
These config changes have slightly improved the amount of memory being used, but I would like to be able to use 80% of memory or so... or as much as possible to get maximum performance. Any ideas on how to increase the memory allocation to MySQL?
Since you already have no IO waiting, you are using a good amount of memory, and your buffers seem quite big. So I doubt that you can get significant savings from using additional memory; you are limited by the CPU power of a single core.
Two strategies could help:
Use EXPLAIN or query analyzers to find out if you can optimize your queries to save CPU time. Adding missing indexes could help a lot. Sometimes you also might need combined (multi-column) indexes; see the sketch after this list.
Evaluate an alternative storage engine (or even database) that is better suited for analytical queries and can use all of your cores. MariaDB supports InfiniDB but there are also other storage engines and databases available like Infobright, MonetDB.
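As a hedged illustration of the first strategy (the table and column names here are made up), EXPLAIN shows whether a query can use an index, and a combined index can cover a multi-column filter:
-- Look at the "key" and "rows" columns of the EXPLAIN output
EXPLAIN
SELECT customer_id, SUM(amount)
FROM orders
WHERE order_date >= '2015-01-01'
  AND status = 'shipped'
GROUP BY customer_id;
-- A combined index matching the WHERE clause (equality column first)
ALTER TABLE orders ADD INDEX idx_status_date (status, order_date);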
Run SHOW GLOBAL VARIABLES LIKE '%thread%' and you may get some clues about thread concurrency options you can enable.
Lowering read_rnd_buffer_size from 2M to 16384 and testing with your data may produce a significant reduction in the time required to complete your query.

How long should it take to build a single column index in MySQL for a 100K row table?

I have been trying to create an index on a varchar(20) column with 100K rows, and it's been running for 30 minutes so far. On an 8 core i7 processor with 16GB of memory and an SSD drive, I just don't understand what's taking it so long.
Any ideas? I'm a bit new to MySQL, but this is just a basic vanilla index on a relatively small table. The one other index on the same table took only a few seconds to generate.
How does one debug this sort of thing in MySQL?
What's the total size in memory of the table? If it's big enough that you're getting a lot of hard drive calls, it could still take a while. Also, is your site live while you're doing this?
As far as debugging goes, you could check your SQL process on the system to see how many resources it's using.
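From inside MySQL you can also watch what the index build is doing; a quick sketch:
-- The State column shows what the ALTER/CREATE INDEX is up to
-- (e.g. "copy to tmp table" or "Repair by sorting")
SHOW FULL PROCESSLIST;
-- Rough check of how much sort/temporary work is spilling to disk
SHOW GLOBAL STATUS LIKE 'Created_tmp_disk_tables';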
Finally, have you looked at creating a multi-column index rather than two single-column indexes?
It turns out that the default Ubuntu server LAMP install of MySQL has incredibly low memory allocated, requiring an enormous amount of disk swapping, even on machines that have obvious excess memory.
Please note that I did not experiment to see which setting(s) solved the issue, but the commands I was running on 100K rows, which previously ran for hours, now only take seconds.
[mysqld]
key_buffer_size = 256M
max_allowed_packet = 16M
# Added
innodb_buffer_pool_size = 2G
innodb_log_buffer_size = 64M
innodb_log_file_size = 64M
skip_name_resolve
query_cache_limit = 16M
query_cache_size = 64M