One of our WordPress plugins requires that we increase the MySQL buffer length. I cannot find anything on Stack Overflow that clearly explains how to do this. We are running a VPS with CentOS 7. Any idea how we can increase this value?
As of MySQL 5.6.2, the innodb_change_buffer_max_size configuration
option allows you to configure the maximum size of the change buffer
as a percentage of the total size of the buffer pool. By default,
innodb_change_buffer_max_size is set to 25. The maximum setting is 50.
You might consider increasing innodb_change_buffer_max_size on a MySQL
server with heavy insert, update, and delete activity, where change
buffer merging does not keep pace with new change buffer entries,
causing the change buffer to reach its maximum size limit.
You might consider decreasing innodb_change_buffer_max_size on a MySQL
server with static data used for reporting, or if the change buffer
consumes too much of the memory space that is shared with the buffer
pool, causing pages to age out of the buffer pool sooner than desired.
Test different settings with a representative workload to determine an
optimal configuration. The innodb_change_buffer_max_size setting is
dynamic, which allows you to modify the setting without restarting the
server.
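Since the setting is dynamic, you can check and change it at runtime without a restart. As a sketch (the value 30 is just an example; tune it for your workload):

```sql
-- Check the current value (default is 25, maximum is 50)
SHOW VARIABLES LIKE 'innodb_change_buffer_max_size';

-- Raise it to 30% of the buffer pool, effective immediately
SET GLOBAL innodb_change_buffer_max_size = 30;
```

To make the change survive restarts, also add `innodb_change_buffer_max_size = 30` under the `[mysqld]` section of your option file (typically /etc/my.cnf on CentOS 7).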
You should read this; it might help:
https://dev.mysql.com/doc/refman/5.7/en/innodb-change-buffer-maximum-size.html
You need to change the buffer size in the server configuration. Refer to the steps at https://dev.mysql.com/doc/refman/5.7/en/innodb-change-buffer-maximum-size.html and https://dev.mysql.com/doc/refman/5.7/en/innodb-parameters.html#sysvar_innodb_change_buffer_max_size
Related
I have two different MySQL servers with the same database (a copy), both running Ubuntu x64 with 4 GB RAM. Both are virtual machines hosted on the same VMware server.
The first is our old server with MySQL 5.6.33-0ubuntu0.14.04.1-log, and the new one has version 5.7.17-0ubuntu0.16.04.1 installed.
I'm comparing the performance of some SQL scripts and I noticed that the new server has longer fetch times with the exact same SQL. Can you help me determine possible causes?
Maybe the 5.7 engine analyzes the SQL in a different and less efficient way?
Maybe some MySQL configuration needs to be tuned differently? I only changed innodb_buffer_pool_size = 2G and innodb_buffer_pool_instances = 2 (same as the old server).
Ideas?
Thanks
I suspect your problem is that your buffer pool is allocated, but not yet full of data. As you run queries, it has to fetch data from disk, which is much slower than RAM. As you run those queries again and again, the data required will already be in the buffer pool, and MySQL will take advantage of that. Data that is already in the buffer pool can be read without touching the disk.
You can check how much is in your buffer pool. Here's an example from my test instance (I put "..." because the output is long, and I'm showing an excerpt).
mysql> SHOW ENGINE INNODB STATUS\G
...
----------------------
BUFFER POOL AND MEMORY
----------------------
...
Buffer pool size 65528
Free buffers 64173
Database pages 1339
...
These numbers are in "pages" of 16KB each. You can see I have 64*1024 pages = 1GB allocated, but nearly all of it is free, i.e. unoccupied by data. Only 2% of my buffer pool pages have data in them. It's likely that if I run queries now, it will have to read from the disk to load data. Unless perhaps I have very little data in my database on disk too, and it only fills 2% of my buffer pool even when it's fully loaded.
Anyway, assuming you have more data than the size of your buffer pool, it will gradually fill the buffer pool as you run queries. Then you'll see the ratio of "Database pages" to "Free buffers" change over time (I don't know why they say both pages and buffers, since they refer to the same thing). Subsequent queries should run faster.
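A quicker way to watch the pool fill up, without scrolling through the full monitor output, is the status counters, which report the same page counts (each page is 16 KB):

```sql
-- Total, free, and data-holding pages in the buffer pool
SHOW GLOBAL STATUS WHERE Variable_name IN
  ('Innodb_buffer_pool_pages_total',
   'Innodb_buffer_pool_pages_free',
   'Innodb_buffer_pool_pages_data');
```

You can also compare Innodb_buffer_pool_reads (reads that had to go to disk) against Innodb_buffer_pool_read_requests (all logical reads); as the pool warms up, the ratio of disk reads to logical reads should drop.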
Imagine we have a MySQL DB whose data size is 500 MB.
If I set innodb_buffer_pool_size to 500 MB (or more), is it correct to think that all the data will be cached in RAM, and my queries won't touch disk?
Is effective_cache_size in PostgreSQL the same as MySQL's buffer pool, and can it also help avoid reading from disk?
I believe you are on the right track with regard to MySQL InnoDB tables. But you must remember that when measuring the size of a database, there are two components: data length and index length.
MySQL database size.
You also have no control over which databases are loaded into memory. If you want to guarantee a particular DB is loaded, then you must make sure the buffer pool is large enough to hold all of them, with some room to spare just in case.
MySQL status variables can then be used to see how the buffer pool is functioning.
I also highly recommend you use the buffer pool load/save variables so that the buffer pool is saved on shutdown and reloaded on startup of the MySQL server. Those variables are available from version 5.6 and up, I believe.
Also, check this out in regards to sizing your buffer pool.
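To see how big the buffer pool would need to be to hold everything, you can sum both components per schema from information_schema (a sketch; sizes are reported in MB here):

```sql
-- Data + index size per database, in MB
SELECT table_schema,
       ROUND(SUM(data_length + index_length) / 1024 / 1024, 1) AS total_mb
FROM information_schema.tables
GROUP BY table_schema;
```

And the load/save behavior mentioned above is controlled by two options (available in 5.6 and up) you can put in my.cnf:

```ini
[mysqld]
innodb_buffer_pool_dump_at_shutdown = ON
innodb_buffer_pool_load_at_startup  = ON
```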
Is effective_cache_size a parameter that tells the planner how much the OS is actually caching?
http://www.cybertec.at/2013/11/effective_cache_size-better-set-it-right/
And for caching the tables, don't we also need to configure shared_buffers?
And with regard to MySQL, yes, the innodb_buffer_pool size will cache the data for InnoDB tables and prevent disk reads. Make sure it's configured adequately to hold all the data in memory.
I'm working on a system that includes exporting large amounts of data into CSV files. We are using InnoDB for our tables. InnoDB buffers previous queries/results in some manner.
Now, on a production environment that is a really good thing, but while testing the performance of an export in my dev environment it is not.
The buffer pool size seems to be around 128 MB.
I couldn't find much about this on Google, except that you can change some MySQL settings when the server boots up.
Does anyone know a workaround, or maybe there is an SQL statement that prevents results from being put into the buffer?
It's a non-problem (since 5.1.41)
It is impossible to prevent any InnoDB activity from going through the buffer_pool. It is too deeply ingrained in the design.
The buffer_pool caches data and index blocks, not queries/results. The Query cache plays with queries/results. But the QC should normally be disabled for production systems.
innodb_old_blocks_pct (default = 37, meaning % of buffer_pool) prevents wiping out the buffer pool from certain operations such as the reads needed for your 'export'.
See http://dev.mysql.com/doc/refman/5.6/en/innodb-parameters.html#sysvar_innodb_old_blocks_pct
and the links in that section.
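If the export's scans are evicting hot pages, both of the relevant variables are dynamic, so you can tighten the midpoint-insertion strategy at runtime. A sketch (the values are illustrative, not recommendations):

```sql
-- Shrink the "old" sublist that freshly scanned pages land in
-- (default is 37% of the buffer pool)
SET GLOBAL innodb_old_blocks_pct = 20;

-- Require a page to stay referenced for 1000 ms before it is
-- promoted to the "new" (hot) sublist
SET GLOBAL innodb_old_blocks_time = 1000;
```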
And what about setting the buffer pool to a very small value (e.g., 1 MB)?
When using MyISAM the configuration setting key_buffer_size defines the size of the global buffer where MySQL caches frequently used blocks of index data.
What is the corresponding setting for InnoDB?
innodb_buffer_pool_size is the setting that controls the size of the memory buffer that InnoDB uses to cache indexes and data. It's an important performance option.
See the manual page for the full explanation. The MySQL Performance Blog also has an article about how to choose a proper size for it.
As far as I know, the best setting you can adjust for InnoDB is innodb_buffer_pool_size.
The size in bytes of the memory buffer InnoDB uses to cache data and indexes of its tables. The default value is 8MB. The larger you set this value, the less disk I/O is needed to access data in tables. On a dedicated database server, you may set this to up to 80% of the machine physical memory size. However, do not set it too large because competition for physical memory might cause paging in the operating system.
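As a sketch, on a dedicated database server with, say, 8 GB of RAM, that guidance translates into something like this in my.cnf (the exact figure depends on what else runs on the machine):

```ini
[mysqld]
# Roughly 70-80% of RAM on a dedicated database server;
# leave headroom for the OS and per-connection buffers
innodb_buffer_pool_size = 6G
```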
The following is my default production MySQL configuration file (my.cnf) for a pure UTF-8 setup with InnoDB as the default storage engine.
[server]
bind-address=127.0.0.1
innodb_file_per_table
default-character-set=utf8
default-storage-engine=innodb
The setup does the following:
Binds to localhost:3306 (loopback) instead of the default *:3306 (all interfaces). Done to increase security.
Sets up one table space per table. Done to increase maintainability.
Sets the default character set to UTF-8. Done to allow for easy internationalization by default.
Sets the default storage engine to InnoDB. Done to allow for row-level-locking by default.
Assume that you could further improve the setup by adding a maximum of three (3) configuration parameters. Which would you add and why?
An improvement would in this context mean either a performance improvement, a reliability improvement or ease-of-use/ease-of-maintainability increase. You can assume that the machine running the MySQL instance will have 1000 MB of RAM.
To cache more data:
innodb_buffer_pool_size = 512M
If you write lots of data:
innodb_log_file_size = 128M
, to avoid too much log switching.
There is no third one I'd add in every case; all the others depend on the workload.
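Put together, the two additions would look like this in my.cnf (values sized for the 1000 MB machine in the question; note that on older MySQL versions, resizing the redo log requires a clean shutdown first):

```ini
[mysqld]
# Cache more data and indexes in RAM
innodb_buffer_pool_size = 512M
# Larger redo logs mean fewer log switches under heavy writes
innodb_log_file_size = 128M
```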
Allocating more memory than the default of 8 MB to InnoDB (using innodb_buffer_pool_size) is surely an enhancement. As for the value, on a dedicated database server like yours you can set it up to 80% of your RAM, and the higher you set this value, the fewer the interactions with the hard disk will be. Just to give my two cents, I'd like to mention that you can get some performance boost by tweaking the value of innodb_flush_log_at_trx_commit, at the cost of sacrificing ACID compliance. According to the MySQL manual:
If the value of innodb_flush_log_at_trx_commit is 0, the log buffer is written out to the log file once per second and the flush to disk operation is performed on the log file, but nothing is done at a transaction commit.
So you might lose some data that was not written properly to the database due to a crash or other malfunction. Again, according to the MySQL manual:
However, InnoDB's crash recovery is not affected and thus crash recovery does work regardless of the value.
So, I would suggest:
innodb_flush_log_at_trx_commit = 0
Finally, if you have a high connection rate (i.e. if you need to configure MySQL to support a web application that accesses the database), then you should consider increasing the maximum number of connections to something like 500. But since this is more or less trivial and well known, I'd like to emphasize the importance of back_log to ensure connectivity.
I hope this information will help you optimize your database server.
Increase the innodb buffer pool size, as big as you can practically make it:
innodb_buffer_pool_size=768M
You'll also want some key buffer space for temp tables:
key_buffer_size=32M
Others would depend on what you are doing with the database, but table_cache or query_cache_size would be a couple of other potential candidates.
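A sketch of how those suggestions fit together in my.cnf (sizes assume the 1000 MB machine from the question; the last two are the workload-dependent extras mentioned above, with placeholder values):

```ini
[mysqld]
# The big win: InnoDB data and index cache
innodb_buffer_pool_size = 768M
# Key buffer for MyISAM temp tables
key_buffer_size = 32M
# Workload-dependent; values here are only illustrative
table_cache = 256
query_cache_size = 16M
```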