MySQL disconnects when innodb_buffer_pool_size > 75% of RAM

Right now innodb_buffer_pool_size is set to 12GB (out of 16GB of memory), and when I try to increase this value (to 12.5GB, or even up to 13GB) to max out performance, MySQL suddenly disconnects itself from the client. I'm having a hard time figuring out what the issue is here.

MySQL uses memory for needs other than the InnoDB buffer pool. By increasing the buffer pool, you are causing MySQL to run out of memory. When MySQL runs out of memory, it is killed and restarted, which the client sees as a sudden disconnect.
You can see this info in the MySQL error log.
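If you're not sure where that log lives, MySQL itself can tell you (the path varies by distribution and configuration):
SHOW GLOBAL VARIABLES LIKE 'log_error';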
To increase performance, look for slow queries (slower than 1-2 seconds) instead, and analyze their EXPLAIN plans and indexes. Queries that scan many rows and don't use proper indexes cause severe performance issues. Fixing them will help a lot more than increasing the buffer pool.
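For example, one way to surface those queries is the slow query log plus EXPLAIN. A minimal sketch, assuming a hypothetical orders table with an unindexed customer_id column (the 1-second threshold is illustrative):
-- log every statement slower than 1 second
SET GLOBAL slow_query_log = ON;
SET GLOBAL long_query_time = 1;
-- inspect the plan of a logged query; a full table scan shows up as type: ALL
EXPLAIN SELECT * FROM orders WHERE customer_id = 42;
-- if customer_id has no index, adding one usually removes the scan
ALTER TABLE orders ADD INDEX idx_customer_id (customer_id);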

Related

How to clear the database cache in mysql 8 innodb

I am attempting to run the same query multiple times on the same MySQL 8 database and table.
I need to carry out experiments to determine whether tweaking the query and/or the table itself improves performance. However, after the first attempt the response time is much faster, I assume because the data is cached.
What options do I have to clear the cache so the data is fetched from scratch?
It appears the answers that have been proposed before all relate to MySQL 5 and not MySQL 8. Most of the commands they mention now seem to be deprecated.
Clear MySQL query cache without restarting server
The question you link to is about the query cache, which was removed in MySQL 8.0, so there's no need to clear it anymore.
Your wording suggests you are asking about the buffer pool, which is not the same as the query cache. The buffer pool caches data and index pages, whereas the query cache (when it existed) cached results of queries.
There is no command to clear the buffer pool without restarting the MySQL Server. Pages remain cached in the buffer pool until they are evicted by other pages.
The buffer pool is in RAM so its contents are cleared if you restart the MySQL Server process. So if you want to start from scratch, you would need to restart that process (you don't need to reboot the whole OS, just restart the MySQL service).
The caveat is that in MySQL 8.0, the contents of the buffer pool are not entirely cleared when you restart. A percentage of the content of the buffer pool is saved during shutdown, and reloaded automatically on startup. This feature is enabled by default, but you can optionally disable it.
Read more about this:
https://dev.mysql.com/doc/refman/8.0/en/innodb-preload-buffer-pool.html
https://dev.mysql.com/doc/refman/8.0/en/innodb-parameters.html#sysvar_innodb_buffer_pool_dump_at_shutdown
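So, for cold-cache experiments, a minimal sketch would be (innodb_buffer_pool_dump_at_shutdown is dynamic, while innodb_buffer_pool_load_at_startup can only be set in the config file; the systemctl command assumes a systemd-based host):
-- stop MySQL from saving the buffer pool contents at the next shutdown
SET GLOBAL innodb_buffer_pool_dump_at_shutdown = OFF;
-- and in my.cnf, under [mysqld]:
--   innodb_buffer_pool_load_at_startup = OFF
-- then restart only the MySQL service, e.g.:
--   sudo systemctl restart mysql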

Zabbix - The buffer pool utilization is too low

I'm using Zabbix as my Linux monitoring solution.
It shows the warning "MySQL: The buffer pool utilization is less than 50% in the last 5 minutes. This means that there is a lot of unused RAM allocated for the buffer pool, which you can easily reallocate at the moment."
Should I worry about this?
How can I overcome this issue?
You have configured MySQL with more RAM than it needs. Check your configuration (my.cnf, my.cnf.d, and so on) for innodb_buffer_pool_size and lower it.
How much lower? It depends on the effective usage, which you can see on your Zabbix graphs.
Don't forget to restart the mysql service!
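If you want to cross-check what Zabbix reports, utilization can be estimated from InnoDB's own counters; a rough sketch:
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages%';
-- utilization ≈ (Innodb_buffer_pool_pages_total - Innodb_buffer_pool_pages_free)
--               / Innodb_buffer_pool_pages_total
Note also that on MySQL 5.7.5 and later, innodb_buffer_pool_size is dynamic, so it can be resized with SET GLOBAL without a restart.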
If you are not swapping, and nothing else would benefit from using the RAM that this is wasting, then don't worry. (There's an old saying: "If it ain't broke, don't fix it.")

Ubuntu 14 - MySQL14 - Allocating more RAM

What are the recommended configurations for allocating RAM to MySQL?
My environment: Ubuntu 14 machine with 96G RAM, MySQL Ver 14.14 Distrib 5.7.
And how to set those configurations?
Thank you in advance!
Assuming you are using InnoDB as the engine, please see here: https://www.percona.com/blog/2013/09/20/innodb-performance-optimization-basics-updated/
MySQL InnoDB Settings
From MySQL 5.5 on, InnoDB is the default engine, so these parameters are even more important for performance than before. The most important ones are:
innodb_buffer_pool_size: InnoDB relies heavily on the buffer pool, so be sure to allocate enough memory to it. Typically a good value is 70%-80% of available memory. More precisely, if your RAM is bigger than your dataset, setting the buffer pool a bit larger than the dataset is appropriate; keep your database growth in mind and re-adjust the buffer pool size accordingly. Further, there are code improvements for InnoDB buffer scalability if you are using Percona Server 5.1 or 5.5; you can read more about it here.
innodb_buffer_pool_instances: Multiple InnoDB buffer pools were introduced in InnoDB 1.1 and MySQL 5.5. The default value was 1 in MySQL 5.5 and changed to 8 in MySQL 5.6. innodb_buffer_pool_instances must lie between 1 (minimum) and 64 (maximum). Multiple buffer pool instances are useful in highly concurrent workloads, as they may reduce contention on the global mutexes.
Dump/Restore Buffer Pool: This feature speeds up restarts by saving and restoring the contents of the buffer pool. It was first introduced in Percona Server 5.5 (you can read about it here), and Vadim benchmarked it in this post. Oracle MySQL introduced it in version 5.6; to automatically dump the buffer pool at shutdown and reload it at startup, set the innodb_buffer_pool_dump_at_shutdown and innodb_buffer_pool_load_at_startup parameters to ON.
innodb_log_file_size: Sufficiently large InnoDB transaction logs are crucial for good, stable write performance. Larger log files also mean that the recovery process will be slower in case of a crash, although this is not such a big issue since the great improvements in 5.5. The default value changed from 5 MB to 50 MB in MySQL 5.6, but it is still too small for many workloads. Also, in MySQL 5.6, if innodb_log_file_size is changed between restarts, MySQL will automatically resize the logs to match the new desired size during startup. The maximum combined log file size increased from 4 GB to almost 512 GB in MySQL 5.6. To get the optimal log file size, please check this blog post.
innodb_log_buffer_size: InnoDB writes changed data records into its log buffer, which is kept in memory; this saves disk I/O for large transactions, as the log of changes does not need to be written to disk before the transaction commits. 4 MB to 8 MB is a good start unless you write a lot of huge BLOBs.
innodb_flush_log_at_trx_commit: When innodb_flush_log_at_trx_commit is set to 1, the log buffer is flushed to the log file on disk on every transaction commit, which provides maximum data integrity but also has a performance impact. Setting it to 2 means the log buffer is flushed to the OS file cache on every transaction commit. A value of 2 improves performance if full ACID durability is not a concern and you can afford to lose the last second or two of transactions if the OS crashes.
innodb_thread_concurrency: With the improvements to the InnoDB engine, it is recommended to let the engine control the concurrency by keeping this at the default value (which is zero). If you see concurrency issues, you can tune this variable. A recommended value is 2 times the number of CPUs plus the number of disks. It is a dynamic variable, meaning it can be set without restarting the MySQL server.
innodb_flush_method: Direct I/O relieves I/O pressure, because it is not cached. Setting this to O_DIRECT avoids double buffering between the buffer pool and the filesystem cache. This is recommended provided you have a hardware RAID controller with a battery-backed write cache.
innodb_file_per_table: innodb_file_per_table is ON by default from MySQL 5.6. This is usually recommended, as it avoids having a huge shared tablespace and allows you to reclaim space when you drop or truncate a table. Separate tablespaces also benefit XtraBackup's partial backup scheme.
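Putting these together for the asker's 96 GB machine, a starting-point my.cnf sketch could look like the following; every value here is an illustrative assumption to be tuned against the actual dataset and workload, not a recommendation:
[mysqld]
innodb_buffer_pool_size        = 70G      # roughly 70%-80% of 96 GB RAM
innodb_buffer_pool_instances   = 8        # the MySQL 5.6 default; reduces mutex contention
innodb_log_file_size           = 1G       # larger logs smooth out heavy write bursts
innodb_log_buffer_size         = 8M       # upper end of the 4 MB - 8 MB starting range
innodb_flush_log_at_trx_commit = 1        # full durability; 2 trades ~1s of it for speed
innodb_flush_method            = O_DIRECT # assumes a RAID controller with battery-backed cache
innodb_file_per_table          = 1        # one tablespace file per table
innodb_buffer_pool_dump_at_shutdown = ON  # dump/restore the buffer pool across restarts
innodb_buffer_pool_load_at_startup  = ON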

How to solve the MySQL warning: "InnoDB: page_cleaner: 1000ms intended loop took XXX ms. The settings might not be optimal"?

I ran a MySQL import (mysql dummyctrad < dumpfile.sql) on the server and it's taking too long to complete. The dump file is about 5 GB. The server runs CentOS 6 with 16 GB of memory and 8-core processors, on MySQL 5.7 x64.
Are these messages/statuses normal: "Waiting for table flush", and the message "InnoDB: page_cleaner: 1000ms intended loop took 4013ms. The settings might not be optimal"?
MySQL log contents:
2016-12-13T10:51:39.909382Z 0 [Note] InnoDB: page_cleaner: 1000ms intended loop took 4013ms. The settings might not be optimal. (flushed=1438 and evicted=0, during the time.)
2016-12-13T10:53:01.170388Z 0 [Note] InnoDB: page_cleaner: 1000ms intended loop took 4055ms. The settings might not be optimal. (flushed=1412 and evicted=0, during the time.)
2016-12-13T11:07:11.728812Z 0 [Note] InnoDB: page_cleaner: 1000ms intended loop took 4008ms. The settings might not be optimal. (flushed=1414 and evicted=0, during the time.)
2016-12-13T11:39:54.257618Z 3274915 [Note] Aborted connection 3274915 to db: 'dummyctrad' user: 'root' host: 'localhost' (Got an error writing communication packets)
Processlist:
mysql> show processlist \G;
*************************** 1. row ***************************
Id: 3273081
User: root
Host: localhost
db: dummyctrad
Command: Field List
Time: 7580
State: Waiting for table flush
Info:
*************************** 2. row ***************************
Id: 3274915
User: root
Host: localhost
db: dummyctrad
Command: Query
Time: 2
State: update
Info: INSERT INTO `radacct` VALUES (351318325,'kxid ge:7186','abcxyz5976c','user100
*************************** 3. row ***************************
Id: 3291591
User: root
Host: localhost
db: NULL
Command: Query
Time: 0
State: starting
Info: show processlist
*************************** 4. row ***************************
Id: 3291657
User: remoteuser
Host: portal.example.com:32800
db: ctradius
Command: Sleep
Time: 2
State:
Info: NULL
4 rows in set (0.00 sec)
Update 1
Following a MySQL forum thread about innodb_lru_scan_depth: changing the innodb_lru_scan_depth value to 256 improved insert query execution time, and the warning message no longer appears in the log (the default was innodb_lru_scan_depth=1024):
SET GLOBAL innodb_lru_scan_depth=256;
InnoDB: page_cleaner: 1000ms intended loop took 4013ms. The settings might not be optimal. (flushed=1438 and evicted=0, during the time.)
The problem is typical of a MySQL instance where you have a high rate of changes to the database. By running your 5GB import, you're creating dirty pages rapidly. As dirty pages are created, the page cleaner thread is responsible for copying dirty pages from memory to disk.
In your case, I assume you don't do 5GB imports all the time. So this is an exceptionally high rate of data load, and it's temporary. You can probably disregard the warnings, because InnoDB will gradually catch up.
Here's a detailed explanation of the internals leading to this warning.
Once per second, the page cleaner scans the buffer pool for dirty pages to flush from the buffer pool to disk. The warning you saw shows that it has lots of dirty pages to flush, and it takes over 4 seconds to flush a batch of them to disk, when it should complete that work in under 1 second. In other words, it's biting off more than it can chew.
You adjusted this by reducing innodb_lru_scan_depth from 1024 to 256. This reduces how far into the buffer pool the page cleaner thread searches for dirty pages during its once-per-second cycle. You're asking it to take smaller bites.
Note that if you have many buffer pool instances, it'll cause flushing to do more work. It bites off innodb_lru_scan_depth amount of work for each buffer pool instance. So you might have inadvertently caused this bottleneck by increasing the number of buffer pools without decreasing the scan depth.
The documentation for innodb_lru_scan_depth says "A setting smaller than the default is generally suitable for most workloads." It sounds like they gave this option a value that's too high by default.
You can place a limit on the IOPS used by background flushing, with the innodb_io_capacity and innodb_io_capacity_max options. The first option is a soft limit on the I/O throughput InnoDB will request. But this limit is flexible; if flushing is falling behind the rate of new dirty page creation, InnoDB will dynamically increase flushing rate beyond this limit. The second option defines a stricter limit on how far InnoDB might increase the flushing rate.
If the rate of flushing can keep up with the average rate of creating new dirty pages, then you'll be okay. But if you consistently create dirty pages faster than they can be flushed, eventually your buffer pool will fill up with dirty pages, until the dirty pages exceed innodb_max_dirty_pages_pct of the buffer pool. At this point, the flushing rate will automatically increase, and may again cause the page_cleaner to send warnings.
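As an illustration, inspecting and raising those limits looks like this; the numbers are placeholders to be sized against what your storage can actually sustain:
-- current soft and hard limits on background flushing I/O
SHOW GLOBAL VARIABLES LIKE 'innodb_io_capacity%';
-- both are dynamic, so no restart is needed
SET GLOBAL innodb_io_capacity = 1000;
SET GLOBAL innodb_io_capacity_max = 2000;
-- watch whether the backlog of dirty pages is shrinking
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_dirty';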
Another solution would be to put MySQL on a server with faster disks. You need an I/O system that can handle the throughput demanded by your page flushing.
If you see this warning all the time under average traffic, you might be trying to do too many write queries on this MySQL server. It might be time to scale out, and split the writes over multiple MySQL instances, each with their own disk system.
Read more about the page cleaner:
Introducing page_cleaner thread in InnoDB (archived copy)
MySQL-5.7 improves DML oriented workloads
The bottleneck is writing data to disk, whatever kind of disk you have: SSD, ordinary spinning disk, NVMe, etc.
Note that this solution applies mostly to InnoDB.
I had the same problem and applied a few of the solutions below.
1st: checking what's wrong
atop -d will show you disk usage. If the disk is 'busy', try to stop all queries to the database (but don't stop the MySQL server service!).
To monitor how many queries you have, use mytop, innotop, or an equivalent tool.
If you have 0 queries but disk usage is STILL close to 100% for a few seconds or minutes, it means the MySQL server is trying to flush dirty pages / do some cleaning, as mentioned before (see Bill Karwin's great post above).
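A quick way to confirm that from inside MySQL (these are standard InnoDB status counters):
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_dirty';   -- pages still waiting to be flushed
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_flushed'; -- cumulative counter; rising while idle means cleanup is still running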
Then you can try to apply the following solutions:
2nd: hardware optimisation
If your array is not RAID 1+0, consider doubling your write speed with that kind of setup. Try to extend your disk controller's write capabilities, and try to use an SSD or a faster HDD. Applying this solution depends on your hardware and budget, and results may vary.
3rd: software tuning
If the hardware controller is working fine but you still want to increase write speed, you can set the following in the MySQL config file:
3.1.
innodb_flush_log_at_trx_commit = 2 -> if you're using InnoDB tables. In my experience it works best together with one file per table:
innodb_file_per_table = 1
3.2.
continuing with InnoDB:
innodb_flush_method = O_DIRECT
innodb_doublewrite = 0
innodb_support_xa = 0
innodb_checksums = 0
The lines above generally reduce the amount of data that needs to be written to disk, so performance improves.
3.3
general_log = 0
slow_query_log = 0
The lines above disable the general and slow query logs; of course those are yet more data that would otherwise be written to disk.
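Collected into one my.cnf fragment for reference. Note that innodb_doublewrite = 0 and innodb_checksums = 0 trade crash safety and corruption detection for speed, and innodb_support_xa only exists up to MySQL 5.7, so treat this as a sketch for import or benchmark runs rather than a production default:
[mysqld]
innodb_flush_log_at_trx_commit = 2  # flush to OS cache on commit; may lose ~1s of transactions on OS crash
innodb_file_per_table          = 1
innodb_flush_method            = O_DIRECT
innodb_doublewrite             = 0  # caution: disables torn-page protection
innodb_support_xa              = 0  # deprecated in 5.7.10, removed in MySQL 8.0
innodb_checksums               = 0  # caution: disables page checksum verification
general_log                    = 0
slow_query_log                 = 0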
3.4
check again what's happening, using e.g.
tail -f /var/log/mysql/error.log
4th: general notes
This was tested under MySQL 5.6 AND 5.7.22
OS: Debian 9
RAID: 1 + 0 SSD drives
Database: InnoDB tables
innodb_buffer_pool_size = 120G
innodb_buffer_pool_instances = 8
innodb_read_io_threads = 64
innodb_write_io_threads = 64
Total amount of RAM in server: 200GB
After doing that you may observe higher CPU usage; that's normal, because writing data is faster, so the CPU will work harder.
If you're changing these settings in my.cnf, of course don't forget to restart the MySQL server.
5th: supplement
Being intrigued, I also tried the tweak mentioned above:
SET GLOBAL innodb_lru_scan_depth=256;
Working with big tables, I saw no change in performance.
The corrections above did not get rid of the warnings, but the whole system works significantly faster.
Everything above is just experimentation, but I measured the results and it helped me a little, so hopefully it may be useful to others.
This can simply be indicative of poor filesystem performance in general - a symptom of an unrelated problem. In my case I spent an hour researching this, analyzing my system logs, and had nearly reached the point of tweaking the MySQL config, when I decided to check with my cloud hosting provider. It turned out there were "abusive I/O spikes from a neighbor," which my host quickly resolved after I brought it to their attention.
My recommendation is to know your baseline / expected filesystem performance, stop MySQL, and measure your filesystem performance to determine whether there are more fundamental problems unrelated to MySQL.
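For example, with MySQL stopped, a crude sequential-write baseline can be taken with dd (fio gives more realistic mixed-I/O numbers; the test path is a placeholder and should sit on the same filesystem as your data directory):
dd if=/dev/zero of=/tmp/ddtest bs=1M count=1024 oflag=direct && rm /tmp/ddtest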

MySQL application runs slow after general usage

I am currently using MySQL. I have noticed several times that my front-end application runs slow after some usage. When I checked the server status in MySQL Workbench, I noticed that InnoDB buffer usage was going to 100%, so I increased the innodb_buffer_pool_size parameter to 1G in the my.ini file of XAMPP. But InnoDB is not flushing the buffer, and the application still runs slow after some time. Are there any other parameters to change as well?
Consider using a size for innodb_buffer_pool_size of 70%-80% of available RAM. Depending on how big your dataset is, you may need to increase the size further.
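As an illustration, on a machine with 8 GB of RAM dedicated mostly to MySQL, that guideline might look like this (the 6G figure is purely hypothetical; since MySQL 5.7.5 the variable is also resizable at runtime):
# in my.ini, under [mysqld]:
#   innodb_buffer_pool_size = 6G
-- or resize online on MySQL 5.7.5 and later:
SET GLOBAL innodb_buffer_pool_size = 6442450944;  -- 6 GB in bytes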