We have installed MariaDB with the ColumnStore engine, and for the last few weeks we have been facing a memory choking issue: memory fills up and all our DML/DDL operations get stuck. Restarting the services fixes it temporarily.
Below are the stats (free, in GB):
              total   used   free  shared  buff/cache  available
Mem:             15      2      7       0           5         12
Swap:             4      0      4
[mysqld]
port = 3306
socket = /opt/evolv/mariadb/columnstore/mysql/lib/mysql/mysql.sock
datadir = /opt/evolv/mariadb/columnstore/mysql/db
skip-external-locking
key_buffer_size = 512M
max_allowed_packet = 1M
table_cache = 512
sort_buffer_size = 64M
read_buffer_size = 64M
read_rnd_buffer_size = 512M
myisam_sort_buffer_size = 64M
thread_cache_size = 8
query_cache_size = 0
# Try number of CPU's*2 for thread_concurrency
#thread_concurrency = 8
thread_stack = 512K
lower_case_table_names=1
group_concat_max_len=512
infinidb_use_import_for_batchinsert=1
# You can set .._buffer_pool_size up to 50 - 80 %
# of RAM but beware of setting memory usage too high
innodb_buffer_pool_size = 8192M
#innodb_additional_mem_pool_size = 20M
# Set .._log_file_size to 25 % of buffer pool size
#innodb_log_file_size = 100M
#innodb_log_buffer_size = 8M
#innodb_flush_log_at_trx_commit = 1
#innodb_lock_wait_timeout = 50
Here's an analysis of the VARIABLES and (suspicious) GLOBAL STATUS; nothing exciting:
Observations:
Version: 10.1.26-MariaDB
15 GB of RAM
Uptime = 03:04:25; Please rerun SHOW GLOBAL STATUS after several hours.
Are you sure this was a SHOW GLOBAL STATUS ?
You are not running on Windows.
Running 64-bit version
You appear to be running entirely (or mostly) InnoDB.
The More Important Issues:
Uptime = 03:04:25; Please rerun SHOW GLOBAL STATUS after several hours.
Are you sure this was a SHOW GLOBAL STATUS ?
key_buffer_size is excessively large (3G). If you don't need MyISAM for anything, set it to 50M.
Check infinidb_um_mem_limit to see if it makes sense for your application.
Suggest lowering innodb_buffer_pool_size to 2G until the "choking" is figured out.
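Before the details, a hedged way to confirm those items from the SQL prompt (assuming MariaDB 10.1 with ColumnStore; key_buffer_size is dynamic, while innodb_buffer_pool_size is not dynamic in 10.1 and needs a my.cnf edit plus restart):
SHOW GLOBAL STATUS LIKE 'Key_blocks_used';           -- near 0 here, so the 3G key_buffer sits idle
SHOW GLOBAL VARIABLES LIKE 'infinidb_um_mem_limit';  -- the ColumnStore UM memory cap mentioned above
SET GLOBAL key_buffer_size = 50 * 1024 * 1024;       -- shrink the MyISAM-only cache to 50M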
Details and other observations:
( (key_buffer_size - 1.2 * Key_blocks_used * 1024) / _ram ) = (3072M - 1.2 * 0 * 1024) / 15360M = 20.0% -- Percent of RAM wasted in key_buffer.
-- Decrease key_buffer_size.
( Key_blocks_used * 1024 / key_buffer_size ) = 0 * 1024 / 3072M = 0 -- Percent of key_buffer used. High-water-mark.
-- Lower key_buffer_size to avoid unnecessary memory usage.
( innodb_buffer_pool_size / _ram ) = 6144M / 15360M = 40.0% -- % of RAM used for InnoDB buffer_pool
( Innodb_buffer_pool_pages_free * 16384 / innodb_buffer_pool_size ) = 392,768 * 16384 / 6144M = 99.9% -- buffer pool free
( innodb_print_all_deadlocks ) = innodb_print_all_deadlocks = OFF -- Whether to log all Deadlocks.
-- If you are plagued with Deadlocks, turn this on. Caution: If you have lots of deadlocks, this may write a lot to disk.
( local_infile ) = local_infile = ON
-- local_infile = ON is a potential security issue
( expire_logs_days ) = 0 -- How soon to automatically purge binlog (after this many days)
-- Too large (or zero) = consumes disk space; too small = need to respond quickly to network/machine crash.
(Not relevant if log_bin = OFF)
( long_query_time ) = 5 -- Cutoff (Seconds) for defining a "slow" query.
-- Suggest 2
Abnormally large:
read_buffer_size = 32MB
Acl_database_grants = 780
Acl_proxy_users = 4
Acl_users = 281
Columnstore.xml
95% of all memory??
<MemoryCheckPercent>95</MemoryCheckPercent> <!-- Max real memory to limit growth of buffers to -->
<DataFileLog>OFF</DataFileLog>
I guess this is not relevant, since it is commented out??
<!-- enable if you want to limit how much memory may be used for hdfs read/write memory buffers.
<hdfsRdwrBufferMaxSize>8G</hdfsRdwrBufferMaxSize>
-->
Keep in mind that MySQL itself, apart from ColumnStore, is consuming a lot of memory:
<TotalUmMemory>25%</TotalUmMemory>
<TotalPmUmMemory>10%</TotalPmUmMemory>
Related
I have a 3-node Galera cluster with MariaDB 10.4.13. Each node has 32GB RAM and 2GB swap. After my MySQL tuning about a month ago, each node's memory is almost full, but I think that is OK. In the last few days, though, swap usage reached its maximum and does not go down. My my.cnf looks like this:
####Slow logging
slow_query_log_file=/var/lib/mysql/mysql-slow.log
long_query_time=2
slow_query_log=ON
log_queries_not_using_indexes=ON
############ INNODB OPTIONS
innodb_buffer_pool_size=24000M
innodb_flush_log_at_trx_commit=2
innodb_file_per_table=1
innodb_data_file_path=ibdata1:100M:autoextend
innodb_read_io_threads=4
innodb_write_io_threads=4
innodb_doublewrite=1
innodb_log_file_size=6144M
innodb_log_buffer_size=96M
innodb_buffer_pool_instances=24
innodb_log_files_in_group=2
innodb_thread_concurrency=0
#### innodb_file_format = barracuda
innodb_flush_method = O_DIRECT
#### innodb_locks_unsafe_for_binlog = 1
innodb_autoinc_lock_mode=2
######## avoid statistics update when doing e.g show tables
innodb_stats_on_metadata=0
default_storage_engine=innodb
innodb_strict_mode = 0
#### OTHER THINGS, BUFFERS ETC
#### key_buffer_size = 24M
tmp_table_size = 1024M
max_heap_table_size = 1024M
max_allowed_packet = 512M
#### sort_buffer_size = 256K
#### read_buffer_size = 256K
#### read_rnd_buffer_size = 512K
#### myisam_sort_buffer_size = 8M
skip_name_resolve
memlock=0
sysdate_is_now=1
max_connections=500
thread_cache_size=512
query_cache_type = 1
query_cache_size = 512M
query_cache_limit=512K
join_buffer_size = 1M
table_open_cache = 116925
open_files_limit = 233850
table_definition_cache = 58863
table_open_cache_instances = 8
lower_case_table_names=0
With this configuration I wanted MariaDB to use as much memory as possible, as long as it does not become critical.
I want to review this configuration, maybe disable the query_cache part, and also adjust the InnoDB values. Please give me some recommendations, and also let me know whether the swap size is good enough, or whether I should prevent MySQL from using swap at all.
Sorry, I don't see much that is exciting here:
Analysis of GLOBAL STATUS and VARIABLES:
Observations:
Version: 10.4.13-MariaDB-log
32 GB of RAM
Uptime = 1d 15:19:41
You are not running on Windows.
Running 64-bit version
You appear to be running entirely (or mostly) InnoDB.
The More Important Issues:
Lower these to the suggested values:
table_open_cache = 10000
tmp_table_size = 200M
max_heap_table_size = 200M
query_cache_size = 0 -- the high value you have can cause mysterious slowdowns
max_connections = 200
thread_cache_size = 20
The I/O settings look tuned for an HDD; do you have SSDs?
There are a lot of SHOW commands -- more than one per second. Perhaps some monitoring tool is excessively aggressive?
Why so many GRANTs?
Is this in a Galera cluster?
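A few hedged follow-up queries for the last three items:
SHOW GLOBAL STATUS LIKE 'Com_show%';           -- which SHOW commands the monitoring tool is hammering
SELECT COUNT(*) FROM mysql.db;                 -- the table behind the Acl_database_grants count
SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';  -- a non-zero value confirms Galera membership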
Details and other observations:
( Key_blocks_used * 1024 / key_buffer_size ) = 48 * 1024 / 128M = 0.04% -- Percent of key_buffer used. High-water-mark.
-- Lower key_buffer_size (now 134217728) to avoid unnecessary memory usage.
( table_open_cache ) = 116,660 -- Number of table descriptors to cache
-- Several hundred is usually good.
( Open_tables / table_open_cache ) = 4,439 / 116660 = 3.8% -- Cache usage (open tables + tmp tables)
-- Optionally lower table_open_cache (now 116660)
( innodb_buffer_pool_instances ) = 24 -- For large RAM, consider using 1-16 buffer pool instances, not allowing less than 1GB each. Also, not more than, say, twice the number of CPU cores.
-- Recommend no more than 16. (Beginning to go away in 10.5)
( innodb_lru_scan_depth * innodb_buffer_pool_instances ) = 1,024 * 24 = 24,576 -- A metric of CPU usage.
-- Lower either number.
( innodb_lru_scan_depth * innodb_page_cleaners ) = 1,024 * 4 = 4,096 -- Amount of work for page cleaners every second.
-- "InnoDB: page_cleaner: 1000ms intended loop took ..." may be fixable by lowering lru_scan_depth: Consider 1000 / innodb_page_cleaners (now 4). Also check for swapping.
( innodb_page_cleaners / innodb_buffer_pool_instances ) = 4 / 24 = 0.167 -- innodb_page_cleaners
-- Recommend setting innodb_page_cleaners (now 4) to innodb_buffer_pool_instances (now 24)
(Beginning to go away in 10.5)
( innodb_lru_scan_depth ) = 1,024
-- "InnoDB: page_cleaner: 1000ms intended loop took ..." may be fixed by lowering lru_scan_depth
( innodb_io_capacity ) = 200 -- When flushing, use this many IOPs.
-- Reads could be sluggish or spiky.
( Innodb_buffer_pool_pages_free / Innodb_buffer_pool_pages_total ) = 1,065,507 / 1538880 = 69.2% -- Pct of buffer_pool currently not in use
-- innodb_buffer_pool_size (now 25769803776) is bigger than necessary?
( innodb_io_capacity_max / innodb_io_capacity ) = 2,000 / 200 = 10 -- Capacity: max/plain
-- Recommend 2. Max should be about equal to the IOPs your I/O subsystem can handle. (If the drive type is unknown 2000/200 may be a reasonable pair.)
( Innodb_buffer_pool_bytes_data / innodb_buffer_pool_size ) = 7,641,841,664 / 24576M = 29.7% -- Percent of buffer pool taken up by data
-- A small percent may indicate that the buffer_pool is unnecessarily big.
( innodb_log_buffer_size ) = 96M -- Suggest 2MB-64MB, and at least as big as biggest blob set in transactions.
-- Adjust innodb_log_buffer_size (now 100663296).
( Uptime / 60 * innodb_log_file_size / Innodb_os_log_written ) = 141,581 / 60 * 6144M / 2470192128 = 6,154 -- Minutes between InnoDB log rotations Beginning with 5.6.8, this can be changed dynamically; be sure to also change my.cnf.
-- (The recommendation of 60 minutes between rotations is somewhat arbitrary.) Adjust innodb_log_file_size (now 6442450944). (Cannot change in AWS.)
( default_tmp_storage_engine ) = default_tmp_storage_engine =
( innodb_flush_neighbors ) = 1 -- A minor optimization when writing blocks to disk.
-- Use 0 for SSD drives; 1 for HDD.
( innodb_io_capacity ) = 200 -- I/O ops per second capable on disk . 100 for slow drives; 200 for spinning drives; 1000-2000 for SSDs; multiply by RAID factor.
( sync_binlog ) = 0 -- Use 1 for added security, at some cost of I/O =1 may lead to lots of "query end"; =0 may lead to "binlog at impossible position" and lose transactions in a crash, but is faster.
( innodb_print_all_deadlocks ) = innodb_print_all_deadlocks = OFF -- Whether to log all Deadlocks.
-- If you are plagued with Deadlocks, turn this on. Caution: If you have lots of deadlocks, this may write a lot to disk.
( min( tmp_table_size, max_heap_table_size ) ) = (min( 1024M, 1024M )) / 32768M = 3.1% -- Percent of RAM to allocate when needing MEMORY table (per table), or temp table inside a SELECT (per temp table per some SELECTs). Too high may lead to swapping.
-- Decrease tmp_table_size (now 1073741824) and max_heap_table_size (now 1073741824) to, say, 1% of ram.
( character_set_server ) = character_set_server = latin1
-- Charset problems may be helped by setting character_set_server (now latin1) to utf8mb4. That is the future default.
( local_infile ) = local_infile = ON
-- local_infile (now ON) = ON is a potential security issue
( query_cache_size ) = 512M -- Size of QC
-- Too small = not of much use. Too large = too much overhead. Recommend either 0 or no more than 50M.
( Qcache_hits / (Qcache_hits + Com_select) ) = 8,821 / (8821 + 5602645) = 0.16% -- Hit ratio -- SELECTs that used QC
-- Consider turning off the query cache.
( (query_cache_size - Qcache_free_memory) / Qcache_queries_in_cache / query_alloc_block_size ) = (512M - 48787272) / 224183 / 16384 = 0.133 -- query_alloc_block_size vs formula
-- Adjust query_alloc_block_size (now 16384)
( tmp_table_size ) = 1024M -- Limit on size of MEMORY temp tables used to support a SELECT
-- Decrease tmp_table_size (now 1073741824) to avoid running out of RAM. Perhaps no more than 64M.
( Com_admin_commands / Queries ) = 888,691 / 6680823 = 13.3% -- Percent of queries that are "admin" commands.
-- What's going on?
( Slow_queries / Questions ) = 438,188 / 6557866 = 6.7% -- Frequency (% of all queries)
-- Find slow queries; check indexes.
( log_queries_not_using_indexes ) = log_queries_not_using_indexes = ON -- Whether to include such in slowlog.
-- This clutters the slowlog; turn it off so you can see the real slow queries. And decrease long_query_time (now 2) to catch most interesting queries.
( Uptime_since_flush_status ) = 451 = 7m 31s -- How long (in seconds) since FLUSH STATUS (or server startup).
-- GLOBAL STATUS has not been gathered long enough to get reliable suggestions for many of the issues. Fix what you can, then come back in a several hours.
( Max_used_connections / max_connections ) = 25 / 500 = 5.0% -- Peak % of connections
-- Since several memory factors can expand based on max_connections (now 500), it is good not to have that setting too high.
( thread_cache_size / Max_used_connections ) = 500 / 25 = 2000.0%
-- There is no advantage in having the thread cache bigger than your likely number of connections. Wasting space is the disadvantage.
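Of the InnoDB items above, only some can be changed at runtime; a sketch for MariaDB 10.4 (values are illustrative, not tuned):
SET GLOBAL innodb_lru_scan_depth = 256;   -- cut per-instance scan work, per the notes above
SET GLOBAL innodb_io_capacity_max = 400;  -- roughly 2x innodb_io_capacity, as recommended
-- innodb_buffer_pool_instances and innodb_page_cleaners are startup-only:
-- lower them in my.cnf (e.g. to 8 and 8) and restart.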
Abnormally small:
Innodb_dblwr_pages_written / Innodb_dblwr_writes = 2.28
aria_checkpoint_log_activity = 1.05e+6
aria_pagecache_buffer_size = 128MB
innodb_buffer_pool_chunk_size = 128MB
innodb_max_undo_log_size = 10MB
innodb_online_alter_log_max_size = 128MB
innodb_sort_buffer_size = 1.05e+6
innodb_spin_wait_delay = 4
lock_wait_timeout = 86,400
performance_schema_max_mutex_classes = 0
query_cache_limit = 524,288
Abnormally large:
Acl_column_grants = 216
Acl_database_grants = 385
Acl_table_grants = 1,877
Innodb_buffer_pool_pages_free = 1.07e+6
Innodb_num_open_files = 9,073
Memory_used_initial = 8.16e+8
Open_table_definitions = 4,278
Open_tables = 4,439
Performance_schema_file_instances_lost = 1,732
Performance_schema_mutex_classes_lost = 190
Performance_schema_table_handles_lost = 570
Qcache_free_blocks = 9,122
Qcache_total_blocks = 457,808
Tc_log_page_size = 4,096
Uptime - Uptime_since_flush_status = 141,130
aria_sort_buffer_size = 256.0MB
auto_increment_offset = 3
gtid_domain_id = 12,000
innodb_open_files = 116,660
max_heap_table_size = 1024MB
max_relay_log_size = 1024MB
min(max_heap_table_size, tmp_table_size) = 1024MB
performance_schema_events_stages_history_size = 20
performance_schema_events_statements_history_size = 20
performance_schema_events_waits_history_size = 20
performance_schema_max_cond_classes = 90
table_definition_cache = 58,863
table_open_cache / max_connections = 233
tmp_memory_table_size = 1024MB
wsrep_cluster_size = 3
wsrep_gtid_domain_id = 12,000
wsrep_local_bf_aborts = 107
wsrep_slave_threads = 32
wsrep_thread_count = 33
Abnormal strings:
aria_recover_options = BACKUP,QUICK
disconnect_on_expired_password = OFF
gtid_ignore_duplicates = ON
gtid_strict_mode = ON
histogram_type = DOUBLE_PREC_HB
innodb_fast_shutdown = 1
myisam_stats_method = NULLS_UNEQUAL
old_alter_table = DEFAULT
opt_s__optimize_join_buffer_size = on
optimizer_trace = enabled=off
use_stat_tables = PREFERABLY_FOR_QUERIES
wsrep_cluster_status = Primary
wsrep_connected = ON
wsrep_debug = NONE
wsrep_gtid_mode = ON
wsrep_load_data_splitting = OFF
wsrep_provider = /usr/lib64/galera-4/libgalera_smm.so
wsrep_provider_name = Galera
wsrep_provider_options = base_dir = /var/lib/mysql/; base_host = FIRST_NODE_IP; base_port = 4567; cert.log_conflicts = no; cert.optimistic_pa = yes; debug = no; evs.auto_evict = 0; evs.causal_keepalive_period = PT1S; evs.debug_log_mask = 0x1; evs.delay_margin = PT1S; evs.delayed_keep_period = PT30S; evs.inactive_check_period = PT0.5S; evs.inactive_timeout = PT15S; evs.info_log_mask = 0; evs.install_timeout = PT7.5S; evs.join_retrans_period = PT1S; evs.keepalive_period = PT1S; evs.max_install_timeouts = 3; evs.send_window = 4; evs.stats_report_period = PT1M; evs.suspect_timeout = PT5S; evs.use_aggregate = true; evs.user_send_window = 2; evs.version = 1; evs.view_forget_timeout = P1D; gcache.dir = /var/lib/mysql/; gcache.keep_pages_size = 0; gcache.mem_size = 0; gcache.name = galera.cache; gcache.page_size = 128M; gcache.recover = yes; gcache.size = 1024M; gcomm.thread_prio = ; gcs.fc_debug = 0; gcs.fc_factor = 1.0; gcs.fc_limit = 16; gcs.fc_master_slave = no; gcs.max_packet_size = 64500; gcs.max_throttle = 0.25; gcs.recv_q_hard_limit = 9223372036854775807; gcs.recv_q_soft_limit = 0.25; gcs.sync_donor = no; gmcast.listen_addr = tcp://0.0.0.0:4567; gmcast.mcast_addr = ; gmcast.mcast_ttl = 1; gmcast.peer_timeout = PT3S; gmcast.segment = 0; gmcast.time_wait = PT5S; gmcast.version = 0; ist.recv_addr = FIRST_NODE_IP; pc.announce_timeout = PT3S; pc.checksum = false; pc.ignore_quorum = false; pc.ignore_sb = false; pc.linger = PT20S; pc.npvo = false; pc.recovery = true; pc.version = 0; pc.wait_prim = true; pc.wait_prim_timeout = PT30S; pc.weight = 1; protonet.backend = asio; protonet.version = 0; repl.causal_read_timeout = PT30S; repl.commit_order = 3; repl.key_format = FLAT8; repl.max_ws_size = 2147483647; repl.proto_max = 10; socket.checksum = 2; socket.recv_buf_size = auto; socket.send_buf_size = auto;
wsrep_provider_vendor = Codership Oy
wsrep_provider_version = 26.4.4(r4599)
wsrep_replicate_myisam = ON
wsrep_sst_auth = ********
wsrep_sst_method = mariabackup
wsrep_start_position = 353e0616-cb37-11ea-b614-be241cab877e:39442474
None of these is necessarily too big, but there may be things going on that conspire to make them too big, especially when combined:
innodb_buffer_pool_size=24000M -- quick fix: lower this
(otherwise it should be a good size)
tmp_table_size = 1024M -- lower to 1% of RAM
max_heap_table_size = 1024M -- ditto
max_allowed_packet = 512M -- possibly too big
max_connections=500 -- lower to Max_used_connections or 100
query_cache_type = 1 -- 0 -- QC is not allowed on Galera
query_cache_size = 512M -- 0 -- ditto
table_open_cache = 116925 -- see how 2000 works
table_definition_cache = 58863 -- ditto
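Most of those are dynamic in 10.4, so they can be tried before editing my.cnf (a sketch; keep my.cnf in sync so a restart does not undo them, and note that the buffer pool shrinks online in innodb_buffer_pool_chunk_size steps):
SET GLOBAL tmp_table_size = 320 * 1024 * 1024;       -- ~1% of 32GB RAM
SET GLOBAL max_heap_table_size = 320 * 1024 * 1024;
SET GLOBAL max_connections = 100;
SET GLOBAL query_cache_size = 0;
SET GLOBAL query_cache_type = OFF;                   -- QC is not allowed on Galera
SET GLOBAL table_open_cache = 2000;
SET GLOBAL table_definition_cache = 2000;
SET GLOBAL innodb_buffer_pool_size = 16 * 1024 * 1024 * 1024;  -- illustrative trial value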
For further analysis, provide GLOBAL STATUS and VARIABLES as discussed here: http://mysql.rjweb.org/doc.php/mysql_analysis#tuning
MySQL version = 5.7.31
We started noticing high CPU utilization on our DB server after 2.5 hours of heavy workload (roughly 800 selects per second). The DB was performing quite well, then all of a sudden InnoDB disk writes increased significantly, followed by InnoDB disk reads. Select throughput drops to zero at that point, making the application useless.
After about 15 minutes the DB starts working normally again.
Configuration as follows:
innodb_flush_method=O_DIRECT
innodb_log_file_size=1G
innodb_numa_interleave
innodb_buffer_pool_size=75G
key_buffer_size = 12G
max_allowed_packet = 16M
thread_stack = 192K
thread_cache_size = 8
tmp_table_size = 1024M
max_heap_table_size = 1024M
max_connections = 600
max_connect_errors = 10000
query_cache_limit = 1M
query_cache_size = 50M
htop: https://ibb.co/gwGSkc1 - (Before the issue)
iostat: https://ibb.co/YyJWkb9 - (Before the issue)
df -h : https://ibb.co/x25vg52
RAM 94G
CORE COUNT 32
SSD : /var/lib/mysql is mounted on a SSD Volume (Solution is hosted on open stack)
GLOBAL STATUS : https://pastebin.com/yC4FUYiE
GLOBAL Variables : https://pastebin.com/PfsYTRbm
PROCESS LIST : https://pastebin.com/TyA5KBDb
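For the next stall, a minimal snapshot to take while it is in progress (a sketch; these are standard 5.7 status counters):
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_dirty';  -- a large dirty-page backlog suggests a flush storm
SHOW GLOBAL STATUS LIKE 'Innodb_data_writes';
SHOW ENGINE INNODB STATUS;                                 -- the LOG section shows checkpoint age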
Rate Per Second = RPS
Suggestions to consider for your my.cnf [mysqld] section
innodb_lru_scan_depth=100 # from 1024 to conserve 90% of CPU cycles used for function
innodb_io_capacity=1500 # from 200 to use more of your available SSD IOPS
read_rnd_buffer_size=128K # from 256K to reduce handler_read_rnd_next RPS of 474,918
key_buffer_size=16M # from 12G less than 1% used and your max is 94G available.
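All four of those are dynamic in 5.7, so they can be applied without a restart (a sketch; persist them in my.cnf afterwards):
SET GLOBAL innodb_lru_scan_depth = 100;
SET GLOBAL innodb_io_capacity = 1500;
SET GLOBAL read_rnd_buffer_size = 128 * 1024;   -- affects new sessions only
SET GLOBAL key_buffer_size = 16 * 1024 * 1024;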
There are many more opportunities to improve your configuration.
Not much exciting in the settings:
Analysis of GLOBAL STATUS and VARIABLES:
Observations:
Version: 5.7.31
94 GB of RAM
Uptime = 17:36:15; some GLOBAL STATUS values may not be meaningful yet.
You are not running on Windows.
Running 64-bit version
You appear to be running entirely (or mostly) InnoDB.
The More Important Issues:
MyISAM is not used, so key_buffer_size = 12G is a waste of RAM. Change to 50M.
If you have SSD drives, increase innodb_io_capacity from 200 to 1000.
Several metrics point to inefficient queries. They may need better indexes or rewriting. See http://mysql.rjweb.org/doc.php/mysql_analysis#slow_queries_and_slowlog
Details and other observations:
( key_buffer_size ) = 12,288M / 96256M = 12.8% -- % of RAM used for key_buffer (for MyISAM indexes)
-- 20% is ok if you are not using InnoDB.
( (key_buffer_size - 1.2 * Key_blocks_used * 1024) ) = ((12288M - 1.2 * 9 * 1024)) / 96256M = 12.8% -- Percent of RAM wasted in key_buffer.
-- Decrease key_buffer_size (now 12884901888).
( Key_blocks_used * 1024 / key_buffer_size ) = 9 * 1024 / 12288M = 0.00% -- Percent of key_buffer used. High-water-mark.
-- Lower key_buffer_size (now 12884901888) to avoid unnecessary memory usage.
( (key_buffer_size / 0.20 + innodb_buffer_pool_size / 0.70) ) = ((12288M / 0.20 + 76800M / 0.70)) / 96256M = 177.8% -- Most of available ram should be made available for caching.
-- http://mysql.rjweb.org/doc.php/memory
( innodb_lru_scan_depth * innodb_page_cleaners ) = 1,024 * 4 = 4,096 -- Amount of work for page cleaners every second.
-- "InnoDB: page_cleaner: 1000ms intended loop took ..." may be fixable by lowering lru_scan_depth: Consider 1000 / innodb_page_cleaners (now 4). Also check for swapping.
( innodb_page_cleaners / innodb_buffer_pool_instances ) = 4 / 8 = 0.5 -- innodb_page_cleaners
-- Recommend setting innodb_page_cleaners (now 4) to innodb_buffer_pool_instances (now 8)
(Beginning to go away in 10.5)
( innodb_lru_scan_depth ) = 1,024
-- "InnoDB: page_cleaner: 1000ms intended loop took ..." may be fixed by lowering lru_scan_depth
( Innodb_buffer_pool_pages_free / Innodb_buffer_pool_pages_total ) = 1,579,794 / 4914600 = 32.1% -- Pct of buffer_pool currently not in use
-- innodb_buffer_pool_size (now 80530636800) is bigger than necessary?
( innodb_io_capacity_max / innodb_io_capacity ) = 2,000 / 200 = 10 -- Capacity: max/plain
-- Recommend 2. Max should be about equal to the IOPs your I/O subsystem can handle. (If the drive type is unknown 2000/200 may be a reasonable pair.)
( Innodb_os_log_written / (Uptime / 3600) / innodb_log_files_in_group / innodb_log_file_size ) = 138,870,272 / (63375 / 3600) / 2 / 1024M = 0.00367 -- Ratio
-- (see minutes)
( Uptime / 60 * innodb_log_file_size / Innodb_os_log_written ) = 63,375 / 60 * 1024M / 138870272 = 8,166 -- Minutes between InnoDB log rotations Beginning with 5.6.8, this can be changed dynamically; be sure to also change my.cnf.
-- (The recommendation of 60 minutes between rotations is somewhat arbitrary.) Adjust innodb_log_file_size (now 1073741824). (Cannot change in AWS.)
( innodb_flush_method ) = innodb_flush_method = O_DSYNC -- How InnoDB should ask the OS to write blocks. Suggest O_DIRECT or O_ALL_DIRECT (Percona) to avoid double buffering. (At least for Unix.) See chrischandler for caveat about O_ALL_DIRECT
( innodb_flush_neighbors ) = 1 -- A minor optimization when writing blocks to disk.
-- Use 0 for SSD drives; 1 for HDD.
( innodb_io_capacity ) = 200 -- I/O ops per second capable on disk . 100 for slow drives; 200 for spinning drives; 1000-2000 for SSDs; multiply by RAID factor.
( innodb_print_all_deadlocks ) = innodb_print_all_deadlocks = OFF -- Whether to log all Deadlocks.
-- If you are plagued with Deadlocks, turn this on. Caution: If you have lots of deadlocks, this may write a lot to disk.
( min( tmp_table_size, max_heap_table_size ) ) = (min( 1024M, 1024M )) / 96256M = 1.1% -- Percent of RAM to allocate when needing MEMORY table (per table), or temp table inside a SELECT (per temp table per some SELECTs). Too high may lead to swapping.
-- Decrease tmp_table_size (now 1073741824) and max_heap_table_size (now 1073741824) to, say, 1% of ram.
( character_set_server ) = character_set_server = latin1
-- Charset problems may be helped by setting character_set_server (now latin1) to utf8mb4. That is the future default.
( local_infile ) = local_infile = ON
-- local_infile (now ON) = ON is a potential security issue
( Created_tmp_disk_tables / Created_tmp_tables ) = 59,659 / 68013 = 87.7% -- Percent of temp tables that spilled to disk
-- Check slowlog
( tmp_table_size ) = 1024M -- Limit on size of MEMORY temp tables used to support a SELECT
-- Decrease tmp_table_size (now 1073741824) to avoid running out of RAM. Perhaps no more than 64M.
( (Com_insert + Com_update + Com_delete + Com_replace) / Com_commit ) = (53844 + 35751 + 1 + 0) / 35789 = 2.5 -- Statements per Commit (assuming all InnoDB)
-- Low: Might help to group queries together in transactions; High: long transactions strain various things.
( Select_range_check ) = 70,106 / 63375 = 1.1 /sec -- no good index
-- Find slow queries; check indexes.
( Select_scan ) = 2,393,389 / 63375 = 38 /sec -- full table scans
-- Add indexes / optimize queries (unless they are tiny tables)
( Select_scan / Com_select ) = 2,393,389 / 10449190 = 22.9% -- % of selects doing full table scan. (May be fooled by Stored Routines.)
-- Add indexes / optimize queries
( Sort_merge_passes ) = 18,868 / 63375 = 0.3 /sec -- Hefty sorts
-- Increase sort_buffer_size (now 262144) and/or optimize complex queries.
( slow_query_log ) = slow_query_log = OFF -- Whether to log slow queries. (5.1.12)
( long_query_time ) = 10 -- Cutoff (Seconds) for defining a "slow" query.
-- Suggest 2
( log_slow_slave_statements ) = log_slow_slave_statements = OFF -- (5.6.11, 5.7.1) By default, replicated statements won't show up in the slowlog; this causes them to show.
-- It can be helpful in the slowlog to see writes that could be interfering with Replica reads.
( Aborted_connects / Connections ) = 1,057 / 2070 = 51.1% -- Perhaps a hacker is trying to break in? (Attempts to connect)
( max_connect_errors ) = 10,000 -- A small protection against hackers.
-- Perhaps no more than 200.
You have the Query Cache half-off. You should set both query_cache_type = OFF and query_cache_size = 0 . There is (according to a rumor) a 'bug' in the QC code that leaves some code on unless you turn off both of those settings.
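A hedged sketch covering that, plus the slow-query items above (disabling the QC at runtime is allowed; re-enabling it later would not be, so put both settings in my.cnf too):
SET GLOBAL query_cache_type = OFF;
SET GLOBAL query_cache_size = 0;
SET GLOBAL slow_query_log = ON;   -- currently OFF per the details above
SET GLOBAL long_query_time = 2;   -- down from 10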
Abnormally small:
Innodb_os_log_fsyncs = 0
innodb_buffer_pool_chunk_size = 128MB
innodb_online_alter_log_max_size = 128MB
innodb_sort_buffer_size = 1.05e+6
Abnormally large:
(Com_select + Qcache_hits) / (Com_insert + Com_update + Com_delete + Com_replace) = 116
Com_create_procedure = 0.11 /HR
Com_drop_procedure = 0.11 /HR
Com_show_charsets = 0.68 /HR
Com_show_plugins = 0.11 /HR
Created_tmp_files = 0.6 /sec
Innodb_buffer_pool_bytes_data = 838452 /sec
Innodb_buffer_pool_pages_data = 3.24e+6
Innodb_buffer_pool_pages_free = 1.58e+6
Innodb_buffer_pool_pages_total = 4.91e+6
Key_blocks_unused = 1.02e+7
Ssl_default_timeout = 7,200
Ssl_session_cache_misses = 10
Ssl_verify_depth = 1.84e+19
Ssl_verify_mode = 5
max_heap_table_size = 1024MB
min(max_heap_table_size, tmp_table_size) = 1024MB
Abnormal strings:
ft_boolean_syntax = + -><()~*:&
innodb_fast_shutdown = 1
innodb_numa_interleave = ON
optimizer_trace = enabled=off,one_line=off
optimizer_trace_features = greedy_search=on, range_optimizer=on, dynamic_range=on, repeated_subselect=on
slave_rows_search_algorithms = TABLE_SCAN,INDEX_SCAN
I'm looking for some help tuning our my.cnf file to handle many concurrent users (about 20 orders/minute). This particular site is in WordPress and running WooCommerce.
After a lot of reading online, I've come up with the settings below. The server is Debian 8 with 12 CPUs and 48GB RAM.
query_cache_type = 1
query_cache_limit = 256K
query_cache_min_res_unit = 2k
query_cache_size = 80M
key_buffer = 2G
max_allowed_packet = 512M
thread_cache_size = 256K
tmp_table_size = 4G
max_heap_table_size = 4G
table_cache = 1024
table_definition_cache = 1024
myisam_recover = FORCE,BACKUP
max_connections = 300
wait_timeout = 120
connect_timeout = 120
interactive_timeout = 120
innodb_flush_method = O_DIRECT
innodb_flush_log_at_trx_commit = 2
innodb_file_per_table = 1
innodb_buffer_pool_size = 2G
innodb_io_capacity = 1000
innodb_read_io_threads = 32
innodb_thread_concurrency = 0
innodb_write_io_threads = 32
It seems to be running pretty well for now. Any additional thoughts? Thanks for your input!
query_cache_type = 1
query_cache_limit = 256K
query_cache_min_res_unit = 2k
query_cache_size = 80M
See https://dom.as/tech/query-cache-tuner/
key_buffer = 2G
Key buffer is only for MyISAM. You shouldn't use MyISAM.
max_allowed_packet = 512M
thread_cache_size = 256K
tmp_table_size = 4G
max_heap_table_size = 4G
4G is probably way too high for the tmp table and heap table sizes. Keep in mind multiple threads can create temp tables concurrently.
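A quick check on whether those sizes are ever needed (a sketch): if most temp tables already fit in memory, huge limits buy nothing.
SHOW GLOBAL STATUS LIKE 'Created_tmp%';  -- compare disk-based vs in-memory temp tables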
table_cache = 1024
table_definition_cache = 1024
Probably overkill.
myisam_recover = FORCE,BACKUP
Also used only for MyISAM.
max_connections = 300
What does SHOW GLOBAL STATUS LIKE 'Max_used_connections' say? Is it close to max_connections?
wait_timeout = 120
connect_timeout = 120
interactive_timeout = 120
innodb_flush_method = O_DIRECT
innodb_flush_log_at_trx_commit = 2
innodb_file_per_table = 1
innodb_buffer_pool_size = 2G
Fine but with 48G of RAM, you can probably increase the buffer pool size.
Run show engine innodb status and look for these lines:
Buffer pool size 131072
Free buffers 0
Database pages 128000
Is your buffer pool always pegged full? If so, increase its size.
What's the total size of your database? You don't need the buffer pool to be larger than the total size of your data+indexes, but large enough to hold the frequently-accessed pages would be good.
select round(sum(data_length+index_length)/1024/1024, 2) as mb
from information_schema.tables where engine='InnoDB';
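To compare that figure with the configured pool, something like this (a sketch):
select @@innodb_buffer_pool_size/1024/1024 as buffer_pool_mb;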
innodb_io_capacity = 1000
innodb_read_io_threads = 32
innodb_thread_concurrency = 0
innodb_write_io_threads = 32
The io capacity might be greater than the ability of your disks to keep up. You didn't describe what your disk system is.
The io threads setting is way overkill for a WordPress site with the traffic you describe. Run show engine innodb status and look for this line:
Pending normal aio reads: [0, 0, 0, 0] , aio writes: [0, 0, 0, 0] ,
If you always see 0's on that line, you don't need more than the default 4 & 4 io threads.
I have a server with 2 CPU cores and 1GB of RAM. The server runs only one WordPress site. My server stack is LEMP. I ran MySQLTuner two weeks after setting up the WordPress site.
Here are the results:
[!!] Maximum reached memory usage: 884.8M (89.15% of installed RAM)
[!!] Maximum possible memory usage: 1.4G (139.86% of installed RAM)
[!!] Overall possible memory usage with other process exceeded memory
[!!] Slow queries: 15% (629K/4M)
[OK] Highest usage of available connections: 9% (19/200)
[OK] Aborted connections: 0.75% (4103/548857)
[!!] name resolution is active : a reverse name resolution is made for each new connection and can reduce performance
Here is my my.cnf configuration
[mysql]
# CLIENT #
port = 3306
socket = /var/lib/mysql/mysql.sock
[mysqld]
# GENERAL #
user = mysql
default-storage-engine = InnoDB
socket = /var/lib/mysql/mysql.sock
pid-file = /var/lib/mysql/mysql.pid
# MyISAM #
key-buffer-size = 32M
myisam-recover = FORCE,BACKUP
# SAFETY #
max-allowed-packet = 16M
max-connect-errors = 1000000
# DATA STORAGE #
datadir = /var/lib/mysql/
# BINARY LOGGING #
log-bin = /var/lib/mysql/mysql-bin
expire-logs-days = 14
sync-binlog = 1
# CACHES AND LIMITS #
tmp-table-size = 32M
max-heap-table-size = 32M
query-cache-type = 0
query-cache-size = 0
max-connections = 200
thread-cache-size = 20
open-files-limit = 65535
table-definition-cache = 1024
table-open-cache = 2048
# INNODB #
innodb-flush-method = O_DIRECT
innodb-log-files-in-group = 2
innodb-log-file-size = 64M
innodb-flush-log-at-trx-commit = 1
innodb-file-per-table = 1
innodb-buffer-pool-size = 624M
# LOGGING #
log-error = /var/lib/mysql/mysql-error.log
log-queries-not-using-indexes = 1
slow-query-log = 1
slow-query-log-file = /var/lib/mysql/mysql-slow.log
How can I optimize the configuration to fix those issues?
There is one terribly bad setting:
innodb-buffer-pool-size = 624M
in a tiny 1GB server that probably includes both WP and MySQL? Change that to 200M. And watch for swapping. If there is any swapping, lower it more. Swapping leads to a huge amount of I/O; it is better to shrink the settings instead. Here's a head start:
tmp-table-size = 32M -> 8M
max-heap-table-size = 32M -> 8M
query-cache-type = 0 -- good
query-cache-size = 0 -- good
max-connections = 200 -> 50
thread-cache-size = 20
open-files-limit = 65535
table-definition-cache = 1024 -> 200
table-open-cache = 2048 -> 300
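Where the server version allows it, most of these can be tried at runtime first (a sketch; innodb_buffer_pool_size is resizable online only in MySQL 5.7.5+ / MariaDB 10.2+, otherwise edit my.cnf and restart):
SET GLOBAL tmp_table_size = 8 * 1024 * 1024;
SET GLOBAL max_heap_table_size = 8 * 1024 * 1024;
SET GLOBAL max_connections = 50;
SET GLOBAL table_definition_cache = 200;
SET GLOBAL table_open_cache = 300;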
You have the slow log turned on? Let's see the worst query, as indicated by mysqldumpslow -s t or pt-query-digest.
Here's another tip. This vital table currently has lousy indexes; these will help:
CREATE TABLE wp_postmeta (
post_id …,
meta_key …,
meta_value …,
PRIMARY KEY(post_id, meta_key),
INDEX(meta_key)
) ENGINE=InnoDB;
IS WORDPRESS LISTENING?
Here's why:
AUTO_INCREMENT was a waste
This is a much better PK
Use 191 if necessary (5.6.3 thru 5.7.6)
InnoDB for clustered PK
More details: http://mysql.rjweb.org/doc.php/index_cookbook_mysql#speeding_up_wp_postmeta
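For an existing site, a hedged conversion sketch: wp_postmeta normally carries an auto_increment meta_id, and WordPress can legitimately store duplicate (post_id, meta_key) rows, so check for duplicates first and keep meta_id (adding only the secondary index) if any exist.
select post_id, meta_key, count(*) as c
from wp_postmeta
group by post_id, meta_key
having c > 1
limit 10;
-- Only if the query above returns nothing:
ALTER TABLE wp_postmeta
DROP PRIMARY KEY,
DROP COLUMN meta_id,
ADD PRIMARY KEY (post_id, meta_key(191)),  -- 191-byte prefix for utf8mb4 on 5.6.3 - 5.7.6
ADD INDEX (meta_key(191));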
Mysqld memory consumption rises forever and never seems to be freed. It starts out at about 6GB but gradually rises to around 10GB over a few weeks, and of that 10GB only 4GB is used by the buffer pool and 50MB by the dictionary, according to SHOW ENGINE INNODB STATUS.
This is with MySQL 5.6.16 on a server with 12GB of memory. A few of the tables are partitioned and there are roughly 8000 .ibd files. Also, one table is created each day.
I have tried FLUSH TABLES with no success. The tables are closed, but the memory does not get freed at all. In fact, more memory gets consumed.
Why is the memory being consumed?
And are there any known issues with memory not being freed when using partitioned tables?
my.cnf
query_cache_size = 512M
query_cache_limit = 16M
max_allowed_packet = 16M
table_open_cache = 1024
sort_buffer_size = 2M
net_buffer_length = 8K
read_buffer_size = 256K
read_rnd_buffer_size = 2M
myisam_sort_buffer_size = 1M
max_connections = 1024
thread_cache = 1024
tmp_table_size = 16M
max_heap_table_size = 16M
wait_timeout = 20
join_buffer_size = 256K
thread_cache_size = 50
table_definition_cache = 400
key_buffer_size = 256M
First, you should know that once MySQL uses memory, it never frees it. It keeps it around for the next query.
Here's my query for figuring out the maximum amount of memory my InnoDB DB will use:
SELECT ( @@key_buffer_size
+ @@query_cache_size
+ @@innodb_buffer_pool_size
+ @@innodb_log_buffer_size
+ @@max_allowed_packet
+ @@max_connections * ( @@read_buffer_size
+ @@read_rnd_buffer_size
+ @@sort_buffer_size
+ @@join_buffer_size
+ @@binlog_cache_size
+ 2*@@net_buffer_length
+ @@thread_stack
+ @@tmp_table_size )
) / (1024 * 1024 * 1024) AS MAX_MEMORY_GB;
To see a breakdown of how much memory is being used for different things:
SELECT @@key_buffer_size/(1024*1024) AS key_buffer_size_IN_MB,
@@query_cache_size/(1024*1024) AS query_cache_size_IN_MB,
@@innodb_buffer_pool_size/(1024*1024) AS innodb_buffer_pool_size_IN_MB,
@@innodb_log_buffer_size/(1024*1024) AS innodb_log_buffer_size_IN_MB,
@@max_connections*@@read_buffer_size/(1024*1024) AS read_buffer_size_IN_MB,
@@max_connections*@@read_rnd_buffer_size/(1024*1024) AS read_rnd_buffer_size_IN_MB,
@@max_connections*@@sort_buffer_size/(1024*1024) AS sort_buffer_size_IN_MB,
@@max_connections*@@join_buffer_size/(1024*1024) AS join_buffer_size_IN_MB,
@@max_connections*@@binlog_cache_size/(1024*1024) AS binlog_cache_size_IN_MB,
@@max_connections*@@thread_stack/(1024*1024) AS thread_stack_IN_MB,
@@max_connections*@@tmp_table_size/(1024*1024) AS tmp_table_size_IN_MB,
@@max_connections*@@net_buffer_length*2/(1024*1024) AS net_buffer_size_IN_MB;
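On MySQL 5.7+ (not the 5.6.16 in this question), actual allocations can be inspected directly instead of estimated, provided performance_schema memory instrumentation is enabled:
SELECT event_name, current_alloc
FROM sys.memory_global_by_current_bytes
LIMIT 10;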