Problem improving MySQL performance (innodb_log_file_size)

We are trying to improve the efficiency of our database server.
One of the recommendations of MySQLTuner is raising innodb_log_file_size to 12 GB. As far as we can tell, this change could significantly improve the speed and performance of our queries. The problem is that when we increase this parameter beyond 1 GB, the service won't start. We deleted the logs and stopped the server cleanly, but it still can't start with this parameter above 1 GB.
Some info:
mysql Ver 14.14 Distrib 5.5.62, for debian-linux-gnu
innodb_buffer_pool_size = 100G
innodb_file_per_table = ON
innodb_buffer_pool_instances = 64
innodb_stats_on_metadata = OFF
innodb_log_file_size = 1G
innodb_log_buffer_size = 8M
There's also enough space in the partitions to keep logs of this size.
Thanks!

In MySQL 5.5, you can't increase the InnoDB log file size over 4GB total. innodb_log_file_size can only be 4GB / innodb_log_files_in_group (which is 2 by default, and there's no benefit to changing that), so you can set each log file to a max of 2GB.
See https://dev.mysql.com/doc/refman/5.5/en/innodb-parameters.html#sysvar_innodb_log_file_size
The max combined size of the log files increased to 512GB in 5.6.3. Again, innodb_log_file_size should be the size of one log file, so if you use multiple log files, the total cannot exceed 512GB.
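Concretely, on 5.5 a my.cnf fragment like this is about as large as the redo log can go (a sketch; two log files is the default, and 2047M per file keeps the combined size just under the 4GB cap):
[mysqld]
innodb_log_files_in_group = 2
innodb_log_file_size = 2047M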
I agree with Rick James' answer that increasing the log file size is not a magical solution to make queries run faster. It won't do that.
It's sometimes useful to increase the innodb log file size, if the bottleneck is that you run out of log space faster than dirty pages can be flushed to the tablespace, because you have very high write traffic. That's affected by the rate of writes, not the speed of individual writes.
For most apps, two 2GB log files is more than enough. If it isn't, it's probably time to run multiple MySQL instances, and distribute your write traffic over them as evenly as you can.

It's tricky to change the log_file_size in that old version. For the steps, see https://dba.stackexchange.com/questions/1261/how-to-safely-change-mysql-innodb-variable-innodb-log-file-size/4103#4103
However, I predict that changing the log_file_size will not help much. "Performance" usually implies a few slow queries. Do you have the slow log turned on, with a low value of long_query_time? Find the few worst queries; we can probably improve performance by tackling them. Steps on that: http://mysql.rjweb.org/doc.php/mysql_analysis

If you change the innodb_log_file_size parameter, you need to remove the old log files; otherwise, InnoDB won't start successfully if the existing files do not match the size specified in the config file.
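A minimal sketch of that sequence on 5.5 (the service name and paths are assumptions for illustration; see the linked procedure above for the full steps):
-- make InnoDB flush everything on shutdown so the old logs can be discarded safely
SET GLOBAL innodb_fast_shutdown = 0;
-- then, from the shell:
--   service mysql stop
--   mv /var/lib/mysql/ib_logfile* /var/tmp/
--   edit my.cnf: innodb_log_file_size = 2047M
--   service mysql start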
On the other hand, the innodb_buffer_pool_size should be set to about 70% of available RAM if you are running InnoDB only.

Related

MariaDB my.cnf settings medium->heavy traffic

Could someone help me with a problem I have? It involves MariaDB configuration.
I have a server with an E5 Xeon CPU, 96GB DDR3 RAM, and SSD storage (1.2TB).
Recently something weird has been happening: some pages load very slowly, others load instantly. The pages that load slowly include SELECT or INSERT queries.
Most of the tables use MyISAM, but I also have InnoDB.
My my.cnf file is more or less the default one, and I was wondering what settings I should use.
I am using MariaDB version 10.0.23.
The site has around 15,000 members, but never more than 1,500-2,500 online at the same time.
Thank you for any help I get :)
There are far too many possibilities to answer your question without more info. But here are the 'most important' settings:
For 96GB RAM and a mixture of InnoDB and MyISAM:
innodb_buffer_pool_size = 32G
innodb_buffer_pool_instances = 16
key_buffer_size = 9G
The key_buffer does not need to be bigger than the sum of all MyISAM indexes. Reference.
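As a rough check of that sum (assuming you can query information_schema; a sketch, not an exact accounting):
SELECT ROUND(SUM(index_length)/1024/1024/1024, 1) AS myisam_index_gb
FROM information_schema.tables
WHERE engine = 'MyISAM';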
For more info, turn on the slow log, wait a while, then summarize using pt-query-digest or mysqldumpslow -s t to see the top couple of queries. Then focus on optimizing them in some way. Often it is as simple as devising the optimal composite index.
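A my.cnf sketch for enabling that (the file path and threshold are just illustrative):
slow_query_log = 1
slow_query_log_file = /var/log/mysql/slow.log
long_query_time = 1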
What is Max_used_connections? If it is really 1500-2500, then you have one set of issues.
Do not set query_cache_size bigger than, say, 100M. That is a known performance killer.
If you tweaked any other 'variables', fess up.
For further critique of your settings, provide me with SHOW VARIABLES and SHOW GLOBAL STATUS.
MyISAM only has "table locking", which can slow things down; converting to InnoDB is likely to help. More discussion.
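The conversion itself is one statement per table (the table name is a placeholder; check first for MyISAM-only features, such as FULLTEXT index support on your version):
ALTER TABLE my_table ENGINE=InnoDB;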

What is the best query_cache_size / RAM ratio?

Hi, I want to change my /etc/my.cnf file (MySQL's config file).
What should the values below be for better performance on my queries?
query_cache_type = 1
query_cache_limit = 1M
query_cache_size = 16M
Is there an optimal ratio of cache size to RAM? I have 8GB of RAM on my Ubuntu machine.
If there were a well-defined optimum there would be no need for a configuration option. MySQL would use that optimum by default. The query cache is also only useful for very specific circumstances (you read a lot more from the table than you write to it) because the cache is emptied on a per-table basis every time you write anything to the table. It also only works if you state the exact same queries, with the same parameters, over and over.
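To gauge whether the cache is paying off on a given workload, you can inspect its counters (a quick check, not a tuning recipe):
SHOW GLOBAL STATUS LIKE 'Qcache%';
If Qcache_hits is low relative to Com_select, or Qcache_lowmem_prunes grows quickly, the cache is likely doing more harm than good.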
The optimal value for you needs to be measured out and depends a lot on your use case. If you have a lot of InnoDB tables you will get much more use out of the InnoDB buffer pool: innodb_buffer_pool_size. Set this variable as high as possible (and on a MySQL-only, InnoDB-only machine this might mean as much as 80% of your available RAM).
We host hundreds of small websites on our 8GB RAM server, which runs both the database and the web server on the same machine, with a mixture of MyISAM and InnoDB tables. Here is our configuration for comparison:
innodb_file_per_table=1
open_files_limit=50000
max_allowed_packet=268435456
innodb_buffer_pool_size=1G
innodb_log_file_size=256M
innodb_flush_method=O_DIRECT
innodb_io_capacity=1000
innodb_old_blocks_time=1000
innodb_open_files=5000
key_buffer_size=16M
read_buffer_size=256K
read_rnd_buffer_size=256K
query_cache_size=256M
query_cache_limit=5M
join_buffer_size=4M
sort_buffer_size=4M
max_heap_table_size=64M
tmp_table_size=64M
table_open_cache=4500
table_definition_cache=4000
thread_cache_size=50
If your machine has a lot of writes, turn the Query cache completely off (type=0, size=0). That is because every write to a table causes all entries in the QC for that table to be removed.
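In my.cnf terms, turning it off is just:
query_cache_type = 0
query_cache_size = 0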
As a corollary to that, having too big a QC can be "slow". I recommend no more than 50M for query_cache_size.
I hope that explains why I did not address your title question about percent of RAM.
This depends on the size of the query results. If you have a query_cache_limit of 5M and a query_cache_size of 256M, a worst-case scenario will leave you with roughly 51 query results of 5M each in your cache.
Depending on the type of queries you run most, you are better off setting a smaller query_cache_limit (64K), giving you a total of 4096 smaller query results in the cache. On top of this, the results in the cache are smaller and will not lock the query cache longer than needed.
The query cache of MySQL uses a single thread that locks the cache on every request. If too many requests hit the query cache, overall performance will drop.

Can increasing MySQL variables tmp_table_size and max_heap_table_size crash a VPS?

I am experiencing very slow performance on my website due to MySQL copying temporary tables to disk.
Increasing tmp_table_size and max_heap_table_size solved this issue. The VPS has 1.5 GB RAM. How high can I safely set these variables?
Keep in mind that each thread can have its own temp table in memory (or even multiple temp tables if you have an extremely complex query), so you may see spikes in memory usage.
If you increase tmp_table_size and max_heap_table_size too much, you risk forcing the mysqld process to swap to virtual memory, which will be no better than your original problem of temp tables going to disk.
You don't need to make tmp_table_size equal to the largest temp table you will ever create, you just need it to be large enough so that the majority of temp tables stay in RAM.
I would recommend increasing those config variables modestly, and then monitoring SHOW GLOBAL STATUS LIKE 'Created_tmp_disk_tables' to see if the rate of increase of that status counter is reduced.
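Because the counters are cumulative, sample them twice, some time apart (a quick check):
SHOW GLOBAL STATUS LIKE 'Created_tmp%';
Compare the growth of Created_tmp_disk_tables against Created_tmp_tables between samples; note that queries selecting TEXT or BLOB columns always materialize their temp tables on disk, no matter how high you set these variables.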
You can use a tool like pt-mext to monitor the rate of change of a status variable, or else monitor using Cacti with Percona Monitoring Plugins.

Is tuning the innodb_buffer_pool_size important on Solaris ZFS?

We're running a moderate size (350GB) database with some fairly large tables (a few hundred million rows, 50GB) on a reasonably large server (2 x quad-core Xeons, 24GB RAM, 2.5" 10k disks in RAID10), and are getting some pretty slow inserts (e.g. simple insert of a single row taking 90 seconds!).
Our innodb_buffer_pool_size is set to 400MB, which would normally be way too low for this kind of setup. However, our hosting provider advises that this is irrelevant when running on ZFS. Is he right?
(Apologies for the double post on https://dba.stackexchange.com/questions/1975/is-tuning-the-innodb-buffer-pool-size-important-on-solaris-zfs, but I'm not sure how big the audience is over there!)
Your hosting provider is incorrect. There are various things you should tune differently when running MySQL on ZFS, but reducing the innodb_buffer_pool_size is not one of them. I wrote an article on the subject of running MySQL on ZFS and gave a lecture on it a while back. Specifically regarding innodb_buffer_pool_size, what you should do is set it to whatever would be reasonable on any other file system, and because O_DIRECT doesn't mean "don't cache" on ZFS, you should set primarycache=metadata on your ZFS file system containing your datadir. There are other optimisations to be made, which you can find in the article and the lecture slides.
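For example (the dataset name is illustrative):
zfs set primarycache=metadata tank/mysql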
I would still set the innodb_buffer_pool_size much higher than 400M. The reason? The InnoDB buffer pool will still cache the data and index pages you need for frequently accessed tables.
Run this query to get the recommended innodb_buffer_pool_size in MB:
SELECT CONCAT(ROUND(KBS / POWER(1024, IF(pw < 0, 0, IF(pw > 3, 0, pw))) + 0.49999),
              SUBSTR(' KMG', IF(pw < 0, 0, IF(pw > 3, 0, pw)) + 1, 1))
       AS recommended_innodb_buffer_pool_size
FROM (SELECT SUM(data_length + index_length) KBS
      FROM information_schema.tables
      WHERE engine = 'InnoDB') A,
     (SELECT 2 pw) B;
Simply use either the result of this query or 80% of installed RAM (in your case 19660M) whichever is smaller.
I would also set innodb_log_file_size to 25% of the InnoDB buffer pool size. Unfortunately, the maximum value of innodb_log_file_size is 2047M (1M short of 2G). Thus, set innodb_log_file_size to 2047M, since 25% of the innodb_buffer_pool_size I recommended would be 4915M.
Yet another recommendation is to relax ACID compliance. Use either 0 or 2 for innodb_flush_log_at_trx_commit (the default is 1, which supports ACID compliance). This will produce faster InnoDB writes AT THE RISK of losing up to 1 second's worth of transactions in the event of a crash.
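That variable is dynamic, so you can try it without a restart and revert just as easily:
SET GLOBAL innodb_flush_log_at_trx_commit = 2;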
It may be worth reading slow-mysql-inserts if you haven't already, along with this link to the MySQL docs on the matter, especially with regard to wrapping multiple inserts to a large table in a transaction.
More relevant is this mysql article on performance of innodb and zfs which specifically considers the buffer pool size.
The headline conclusion is:
With InnoDB, the ZFS performance curve suggests a new strategy of "set the buffer pool size low, and let ZFS handle the data buffering."
You may wish to add some more detail such as the number / complexity of the indexes on the table - this can obviously make a big difference.
Apologies for this being rather generic advice rather than personal experience; I haven't run ZFS in anger, but I hope some of those links are of use.

Optimal MySQL-configuration (my.cnf)

The following is my default production MySQL configuration file (my.cnf) for a pure UTF-8 setup with InnoDB as the default storage engine.
[server]
bind-address=127.0.0.1
innodb_file_per_table
default-character-set=utf8
default-storage-engine=innodb
The setup does the following:
Binds to localhost:3306 (loopback) instead of the default *:3306 (all interfaces). Done to increase security.
Sets up one table space per table. Done to increase maintainability.
Sets the default character set to UTF-8. Done to allow for easy internationalization by default.
Sets the default storage engine to InnoDB. Done to allow for row-level-locking by default.
Assume that you could further improve the setup by adding a maximum of three (3) configuration parameters. Which would you add and why?
An improvement would in this context mean either a performance improvement, a reliability improvement or ease-of-use/ease-of-maintainability increase. You can assume that the machine running the MySQL instance will have 1000 MB of RAM.
To cache more data:
innodb_buffer_pool_size = 512M
If you write lots of data (to avoid switching log files too often):
innodb_log_file_size = 128M
There is no third setting I'd add in every case; all the others depend on the workload.
Allocating more memory than the default of 8M to InnoDB (using innodb_buffer_pool_size) is surely an enhancement. Regarding the value: on a dedicated database server like yours, you can set it up to 80% of your RAM, and the higher you set this value, the fewer interactions with the hard disk there will be. Just to give my two cents, I'd like to mention that you can get a performance boost by tweaking the value of innodb_flush_log_at_trx_commit, at the cost of sacrificing ACID compliance. According to the MySQL manual:
If the value of innodb_flush_log_at_trx_commit is 0, the log buffer is written out to the log file once per second and the flush to disk operation is performed on the log file, but nothing is done at a transaction commit.
So you might lose some data that was not written properly to the database due to a crash or other malfunction. Again, according to the MySQL manual:
However, InnoDB's crash recovery is not affected and thus crash recovery does work regardless of the value.
So, I would suggest:
innodb_flush_log_at_trx_commit = 0
Finally, if you have a high connection rate (i.e. if you need to configure MySQL to support a web application that accesses the database), you should consider increasing the maximum number of connections to something like 500. But since that is more or less trivial and well known, I'd like to emphasize the importance of back_log to ensure connectivity.
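As a my.cnf sketch (the back_log value is just an assumption; size it to your connection burst rate):
max_connections = 500
back_log = 100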
I hope this information will help you optimize your database server.
Increase the InnoDB buffer pool size; make it as big as you practically can:
innodb_buffer_pool_size=768M
You'll also want some key buffer space for temp tables:
key_buffer_size=32M
Others would depend on what you are doing with the database, but table_cache or query_cache_size are a couple of other candidates.