
MariaDB 10.3.22 innodb_status_output keeps turning on automatically
Per MySQL docs, https://dev.mysql.com/doc/refman/8.0/en/innodb-troubleshooting.html,
"InnoDB temporarily enables standard InnoDB Monitor output under the following conditions:
1. A long semaphore wait
2. InnoDB cannot find free blocks in the buffer pool
3. Over 67% of the buffer pool is occupied by lock heaps or the adaptive hash index"
MariaDB docs don't mention "InnoDB temporarily enables standard InnoDB Monitor", https://mariadb.com/kb/en/xtradb-innodb-monitors/
Running the commands below does turn off the monitors, but they come back on, probably due to the conditions mentioned above:
SET GLOBAL innodb_status_output=OFF;
SET GLOBAL innodb_status_output_locks=OFF;
I'd like to prevent MariaDB from temporarily turning on the InnoDB Monitor. I understand we could fix our DB to prevent the conditions above, but we'd prefer that the InnoDB Monitor not be turned on automatically. Thanks for the help.

I didn't find an exact solution. I had to turn off all logging to stop the standard InnoDB Monitor output; I could not find a way to keep error logging on without MariaDB automatically turning the monitor back on. Note that the standard InnoDB Monitor outputs millions of lines per day for a moderately active DB. I'm still interested if anyone finds a solution. Thanks -K
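For reference, the workaround amounts to silencing the error log itself, since the monitor writes its output there. A minimal my.cnf sketch of that idea, under the assumption that you can accept losing the error log entirely (a real trade-off):
[mysqld]
# discards the error log, and with it the monitor output
# (assumption: all monitor output goes to the error log)
log_error = /dev/null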

Related

How to clear the database cache in mysql 8 innodb

I am attempting to run the same query multiple times on the same MySQL 8 database and table.
I need to carry out experiments to determine whether tweaking the query and/or the table itself improves performance. However, after the first attempt the response time is much faster, I assume because the data is cached.
What options do I have to clear the cache so the data is fetched from scratch?
It appears the answers that have been proposed before all relate to MySQL 5, not MySQL 8. Most of the commands seem to be deprecated now.
Clear MySQL query cache without restarting server
The question you link to is about the query cache, which was removed in MySQL 8.0, so there's no need to clear it anymore.
Your wording suggests you are asking about the buffer pool, which is not the same as the query cache. The buffer pool caches data and index pages, whereas the query cache (when it existed) cached results of queries.
There is no command to clear the buffer pool without restarting the MySQL Server. Pages remain cached in the buffer pool until they are evicted by other pages.
The buffer pool is in RAM so its contents are cleared if you restart the MySQL Server process. So if you want to start from scratch, you would need to restart that process (you don't need to reboot the whole OS, just restart the MySQL service).
The caveat is that in MySQL 8.0, the contents of the buffer pool are not entirely cleared when you restart. A percentage of the content of the buffer pool is saved during shutdown, and reloaded automatically on startup. This feature is enabled by default, but you can optionally disable it.
Read more about this:
https://dev.mysql.com/doc/refman/8.0/en/innodb-preload-buffer-pool.html
https://dev.mysql.com/doc/refman/8.0/en/innodb-parameters.html#sysvar_innodb_buffer_pool_dump_at_shutdown
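If you want a restart to truly clear the buffer pool, you can disable that dump/load pair. A short sketch; only the first variable is dynamic, the second must be set at startup:
SET GLOBAL innodb_buffer_pool_dump_at_shutdown = OFF;
-- innodb_buffer_pool_load_at_startup is not dynamic; put it in my.cnf instead:
-- innodb_buffer_pool_load_at_startup = OFF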

How to solve mysql warning: "InnoDB: page_cleaner: 1000ms intended loop took XXX ms. The settings might not be optimal "?

I ran a MySQL import, mysql dummyctrad < dumpfile.sql, on the server and it's taking too long to complete. The dump file is about 5G. The server is CentOS 6 with 16G of memory and an 8-core processor, running MySQL v5.7 x64.
Are these messages/statuses normal: "Waiting for table flush" and InnoDB: page_cleaner: 1000ms intended loop took 4013ms. The settings might not be optimal?
MySQL log contents:
2016-12-13T10:51:39.909382Z 0 [Note] InnoDB: page_cleaner: 1000ms intended loop took 4013ms. The settings might not be optimal. (flushed=1438 and evicted=0, during the time.)
2016-12-13T10:53:01.170388Z 0 [Note] InnoDB: page_cleaner: 1000ms intended loop took 4055ms. The settings might not be optimal. (flushed=1412 and evicted=0, during the time.)
2016-12-13T11:07:11.728812Z 0 [Note] InnoDB: page_cleaner: 1000ms intended loop took 4008ms. The settings might not be optimal. (flushed=1414 and evicted=0, during the time.)
2016-12-13T11:39:54.257618Z 3274915 [Note] Aborted connection 3274915 to db: 'dummyctrad' user: 'root' host: 'localhost' (Got an error writing communication packets)
Processlist:
mysql> show processlist \G;
*************************** 1. row ***************************
Id: 3273081
User: root
Host: localhost
db: dummyctrad
Command: Field List
Time: 7580
State: Waiting for table flush
Info:
*************************** 2. row ***************************
Id: 3274915
User: root
Host: localhost
db: dummyctrad
Command: Query
Time: 2
State: update
Info: INSERT INTO `radacct` VALUES (351318325,'kxid ge:7186','abcxyz5976c','user100
*************************** 3. row ***************************
Id: 3291591
User: root
Host: localhost
db: NULL
Command: Query
Time: 0
State: starting
Info: show processlist
*************************** 4. row ***************************
Id: 3291657
User: remoteuser
Host: portal.example.com:32800
db: ctradius
Command: Sleep
Time: 2
State:
Info: NULL
4 rows in set (0.00 sec)
Update-1
Per a MySQL forum thread about innodb_lru_scan_depth:
Changing the innodb_lru_scan_depth value to 256 improved insert query execution time, and there is no more warning message in the log; the default was innodb_lru_scan_depth=1024.
SET GLOBAL innodb_lru_scan_depth=256;
InnoDB: page_cleaner: 1000ms intended loop took 4013ms. The settings might not be optimal. (flushed=1438 and evicted=0, during the time.)
The problem is typical of a MySQL instance where you have a high rate of changes to the database. By running your 5GB import, you're creating dirty pages rapidly. As dirty pages are created, the page cleaner thread is responsible for copying dirty pages from memory to disk.
In your case, I assume you don't do 5GB imports all the time. So this is an exceptionally high rate of data load, and it's temporary. You can probably disregard the warnings, because InnoDB will gradually catch up.
Here's a detailed explanation of the internals leading to this warning.
Once per second, the page cleaner scans the buffer pool for dirty pages to flush from the buffer pool to disk. The warning you saw shows that it has lots of dirty pages to flush, and it takes over 4 seconds to flush a batch of them to disk, when it should complete that work in under 1 second. In other words, it's biting off more than it can chew.
You adjusted this by reducing innodb_lru_scan_depth from 1024 to 256. This reduces how far into the buffer pool the page cleaner thread searches for dirty pages during its once-per-second cycle. You're asking it to take smaller bites.
Note that if you have many buffer pool instances, it'll cause flushing to do more work. It bites off innodb_lru_scan_depth amount of work for each buffer pool instance. So you might have inadvertently caused this bottleneck by increasing the number of buffer pools without decreasing the scan depth.
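You can check how those two settings multiply on your own server; a quick inspection:
SHOW GLOBAL VARIABLES LIKE 'innodb_buffer_pool_instances';
SHOW GLOBAL VARIABLES LIKE 'innodb_lru_scan_depth';
-- per one-second cycle, the page cleaner scans up to (instances x scan depth) pages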
The documentation for innodb_lru_scan_depth says "A setting smaller than the default is generally suitable for most workloads." It sounds like they gave this option a value that's too high by default.
You can place a limit on the IOPS used by background flushing, with the innodb_io_capacity and innodb_io_capacity_max options. The first option is a soft limit on the I/O throughput InnoDB will request. But this limit is flexible; if flushing is falling behind the rate of new dirty page creation, InnoDB will dynamically increase flushing rate beyond this limit. The second option defines a stricter limit on how far InnoDB might increase the flushing rate.
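A sketch of setting those caps at runtime; both variables are dynamic, and the right numbers depend entirely on your storage (these values are only illustrative):
SET GLOBAL innodb_io_capacity = 1000;
SET GLOBAL innodb_io_capacity_max = 2000;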
If the rate of flushing can keep up with the average rate of creating new dirty pages, then you'll be okay. But if you consistently create dirty pages faster than they can be flushed, eventually your buffer pool will fill up with dirty pages, until the dirty pages exceed innodb_max_dirty_page_pct of the buffer pool. At this point, the flushing rate will automatically increase, and may again cause the page_cleaner to send warnings.
Another solution would be to put MySQL on a server with faster disks. You need an I/O system that can handle the throughput demanded by your page flushing.
If you see this warning all the time under average traffic, you might be trying to do too many write queries on this MySQL server. It might be time to scale out, and split the writes over multiple MySQL instances, each with their own disk system.
Read more about the page cleaner:
Introducing page_cleaner thread in InnoDB (archived copy)
MySQL-5.7 improves DML oriented workloads
The bottleneck is writing data to disk, whatever kind of disk you have: SSD, spinning disk, NVMe, etc.
Note that this solution applies mostly to InnoDB.
I had the same problem, and I applied a few solutions.
1st: check what's wrong
atop -d will show you disk usage. If the disk is 'busy', try to stop all queries to the database (but don't stop the MySQL server service!).
To monitor how many queries you have running, use mytop, innotop, or an equivalent.
If you have 0 queries, but disk usage is STILL close to 100% for more than a few seconds or minutes, it means that the MySQL server is trying to flush dirty pages or do some cleaning, as mentioned before (see Bill Karwin's great answer above).
Then you can try to apply the following solutions:
2nd: hardware optimization
If your array is not RAID 1+0, consider that kind of setup to roughly double your write speed. Try to extend your disk controller's write capabilities. Try to use an SSD or a faster HDD. Applying this solution depends on your hardware and budget and may vary.
3rd: software tuning
If the hardware controller is working fine but you want to increase write speed, you can set the following in the MySQL config file:
3.1.
innodb_flush_log_at_trx_commit = 2 -> if you're using InnoDB tables. In my experience it works best with one table per file:
innodb_file_per_table = 1
3.2.
continuing with InnoDB:
innodb_flush_method = O_DIRECT
innodb_doublewrite = 0
innodb_support_xa = 0
innodb_checksums = 0
The lines above generally reduce the amount of data that needs to be written to disk, so performance is better.
3.3
general_log = 0
slow_query_log = 0
The lines above disable log writing, which is of course yet another stream of data that would otherwise be written to disk.
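Both of those switches are also dynamic, so as a sketch you can flip them at runtime without a restart (the my.cnf lines above are still needed to survive one):
SET GLOBAL general_log = 0;
SET GLOBAL slow_query_log = 0;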
3.4
Check again what's happening, using e.g.
tail -f /var/log/mysql/error.log
4th: general notes
This was tested under MySQL 5.6 AND 5.7.22
OS: Debian 9
RAID: 1 + 0 SSD drives
Database: InnoDB tables
innodb_buffer_pool_size = 120G
innodb_buffer_pool_instances = 8
innodb_read_io_threads = 64
innodb_write_io_threads = 64
Total amount of RAM in server: 200GB
After doing this you may observe higher CPU usage; that's normal, because writing data is faster, so the CPU works harder.
If you make these changes in my.cnf, of course, don't forget to restart the MySQL server.
5th: supplement
Being intrigued, I also tried the tweak mentioned above:
SET GLOBAL innodb_lru_scan_depth=256;
Working with big tables, I saw no change in performance.
After the corrections above I didn't get rid of the warnings, but the whole system works significantly faster.
Everything above is just experimentation, but I have measured the results; it helped me a little, so hopefully it may be useful for others.
This can simply be indicative of poor filesystem performance in general, a symptom of an unrelated problem. In my case I spent an hour researching this, analyzing my system logs, and had nearly reached the point of tweaking the MySQL config, when I decided to check with my cloud-based hosting. It turns out there were "abusive I/O spikes from a neighbor," which my host quickly resolved after I brought it to their attention.
My recommendation is to know your baseline / expected filesystem performance, stop MySQL, and measure your filesystem performance to determine if there are more fundamental problems unrelated to MySQL.
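As a rough sketch of such a baseline check, with MySQL stopped, you could time a direct write to the data disk (the path and size here are illustrative; make sure the target sits on the actual disk, not a tmpfs):
dd if=/dev/zero of=/var/lib/mysql-io-test bs=1M count=1024 oflag=direct
rm /var/lib/mysql-io-test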

MySQL max_connections

I have a setup with Linux, Debian Jessie, and MySQL 5.7.13 installed.
I have set the following in my.cnf:
default_storage_engine = innodb
innodb_buffer_pool_size = 44G
When I start MySQL I manually set max_connections with SET GLOBAL max_connections = 1000;
Then I trigger my load test, which sends a lot of traffic, mostly slow/bad queries, to the DB server.
The result I expected was that I would get close to 1000 connections, but somehow MySQL limits it to 462 connections, and I cannot find the setting responsible for this limit. We are not even close to maxing out the CPU or memory.
If you have any idea or could point me in a direction where you think the error might be it would be really helpful.
What load test did you use? Are you sure that it can utilize a thousand connections?
You may be maxing out your server resources in the disk I/O area, especially if you're talking about a lot of slow/bad queries. Did you check the disk utilization on your server?
Even if your InnoDB pool size is large, your DB still needs to be read into the cache first, and if your entire DB is large that will not help you.
I recommend you perform such a test once more and track your disk performance during the load test using the iostat or iotop utility.
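For example, something like this while the load test runs (standard sysstat/iotop invocations):
iostat -dx 5
iotop -o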
Look here for more examples of server performance troubleshooting.
I found the issue. It was due to a limitation of the Apache server: there is a "hidden" setting inside /etc/apache2/mods-enabled/mpm_prefork.conf which overrides the setting inside /etc/apache2/apache2.conf.
Thank you!
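For anyone hitting the same limit: in prefork mode the cap is the MaxRequestWorkers directive (called MaxClients in older Apache versions), bounded by ServerLimit. An illustrative snippet of /etc/apache2/mods-enabled/mpm_prefork.conf; the values are examples, not recommendations:
<IfModule mpm_prefork_module>
    StartServers             5
    MinSpareServers          5
    MaxSpareServers         10
    ServerLimit           1000
    MaxRequestWorkers     1000
    MaxConnectionsPerChild   0
</IfModule>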

Adding Fulltext index crashes MySQL Service

I'm trying to add a Fulltext index to a table. When I run the query,
ALTER TABLE thisTable ADD FULLTEXT(thisText);
I get the message
SQL Error (2013): Lost connection to MySQL server during query
and the mysql service will indeed be stopped. If I restart the service and try to add the index again I get another error.
SQL Error (1813): Tablespace for table 'thisTable/#sql-ib21134' exists. Please DISCARD the tablespace before IMPORT.
The engine is InnoDB and I run MySQL 5.6.12, so FULLTEXT indexes should be supported. The column is a TEXT column.
I'd be very grateful if someone could point me in the right direction where the error is coming from.
The problem is related to the sort buffer size. It is a known bug in MySQL/MariaDB/Percona.
Even many months after I reported this bug, it has not been fixed (we are using the latest MariaDB).
The second error happens because the table (or the fulltext index table) was partially modified (or created) when the server crashed. Drop and re-create your table from scratch.
Now, why did the server crash? Hard to tell for sure, but it is likely that some buffer reached capacity. The usual suspect is innodb_buffer_pool_size. Try to increase it progressively.
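A sketch of doing that (innodb_buffer_pool_size is dynamic as of MySQL 5.7.5; on the 5.6.12 in this question, set it in my.cnf and restart instead; the value is illustrative):
SET GLOBAL innodb_buffer_pool_size = 2147483648; -- 2 GB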
On Ubuntu Server 20.04, despite the reasonable logic of increasing innodb_buffer_pool_size, I identified that the OOM killer killed MySQL when that buffer was huge.
The OOM killer is a function of the Linux kernel intended to kill rogue processes that request more memory than the OS can allocate, so that the system can survive.
Excerpt from syslog:
oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=/,mems_allowed=0,global_oom,task_memcg=/system.slice/mysql.service,task=mysqld,pid=98524,uid=114
Jan 10 06:55:40 vps-5520a8d5 kernel: [66118.230690] Out of memory:
Killed process 98524 (mysqld) total-vm:9221052kB, anon-rss:5461436kB,
file-rss:0kB, shmem-rss:0kB, UID:114 pgtables:11232kB oom_score_adj:0
So this means that, due to the huge innodb_buffer_pool_size setting, MySQL was trying to allocate chunks of memory that were too big, too aggressively.
Using that fact, logic dictates reducing the size or using the default value of innodb_buffer_pool_size. I completely removed it from /etc/mysql/mysql.conf.d/mysqld.conf, then restarted the server with
systemctl restart mysql
and now adding a fulltext index on my 61M table does not crash.
Bad luck, pal...
InnoDB tables do not support FULLTEXT indexes before MySQL 5.6.4 (the linked page covers 5.5; the 5.6.12 in the question does support them).
Source - http://dev.mysql.com/doc/refman/5.5/en/innodb-restrictions.html

How to solve error 122 from storage engine?

Got this from some MySQL queries; puzzled, since error 122 is usually an 'out of space' error, but there's plenty of space left on the server... any ideas?
The answer: for some reason MySQL had its tmp tables on the /tmp partition, which was limited to 100M and was filled to 100M by the eAccelerator cache, even though eAccelerator is limited to 16M of usage. Very weird, but I just moved the eAccelerator cache elsewhere and the problem was solved.
Error 122 often indicates a "Disk over quota" error. Is it possible disk quotas exist on the server?
Try to turn off the disk quota using the quotaoff command.
Using the -a flag will turn off all file system quotas.
quotaoff -a
Are you using InnoDB tables? If so, you might not have auto-extend turned on, and InnoDB can't expand the tablespace any more.
If these are MyISAM tables and it only happens on specific tables, I would suspect corruption. Do a REPAIR on the tables in question.
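A sketch, with a hypothetical table name standing in for yours:
CHECK TABLE mytable;
REPAIR TABLE mytable;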
I resolved this issue by increasing my disk size.
Try df -h to check whether there is enough disk space on your server.