InnoDB insert rate slows down - MySQL

I continuously batch-insert into a MySQL InnoDB table, and the inserts-per-second rate slows down over time.
Some of the behaviour I see:
- If I shut down the data-inserter (Java) application, MySQL keeps doing I/O for a while.
- If I run some inserts and then shut down the MySQL server, the shutdown takes a very long time. If I start and stop MySQL without any inserts, both operations are fast.
- Insert speed does not depend (much) on the amount of data already in the table. If I restart the MySQL server, the inserts-per-second rate is similar to what it was before the restart.
I read a comment on a forum saying not to insert continuously, but to leave a gap between two insertions. Is that meaningful? Why does MySQL slow down?
The result of the query SHOW VARIABLES LIKE 'inno%' is below:
innodb_adaptive_flushing = ON
innodb_adaptive_hash_index = ON
innodb_additional_mem_pool_size = 20971520
innodb_autoextend_increment = 8
innodb_autoinc_lock_mode = 1
innodb_buffer_pool_instances = 1
innodb_buffer_pool_size = 268435456
innodb_change_buffering = all
innodb_checksums = ON
innodb_commit_concurrency = 0
innodb_concurrency_tickets = 500
innodb_data_file_path = ibdata1:50M:autoextend
innodb_data_home_dir =
innodb_doublewrite = ON
innodb_fast_shutdown = 1
innodb_file_format = Barracuda
innodb_file_format_check = ON
innodb_file_format_max = Antelope
innodb_file_per_table = ON
innodb_flush_log_at_trx_commit = 2
innodb_flush_method = O_DSYNC
innodb_force_recovery = 0
innodb_io_capacity = 200
innodb_lock_wait_timeout = 50
innodb_locks_unsafe_for_binlog = OFF
innodb_log_buffer_size = 8388608
innodb_log_file_size = 268435456
innodb_log_files_in_group = 2
innodb_log_group_home_dir = ./
innodb_max_dirty_pages_pct = 75
innodb_max_purge_lag = 0
innodb_mirrored_log_groups = 1
innodb_old_blocks_pct = 37
innodb_old_blocks_time = 0
innodb_open_files = 300
innodb_purge_batch_size = 20
innodb_purge_threads = 0
innodb_read_ahead_threshold = 56
innodb_read_io_threads = 4
innodb_replication_delay = 0
innodb_rollback_on_timeout = OFF
innodb_spin_wait_delay = 6
innodb_stats_on_metadata = ON
innodb_stats_sample_pages = 8
innodb_strict_mode = ON
innodb_support_xa = ON
innodb_sync_spin_loops = 30
innodb_table_locks = ON
innodb_thread_concurrency = 0
innodb_thread_sleep_delay = 10000
innodb_use_native_aio = OFF
innodb_use_sys_malloc = ON
innodb_version = 1.1.1
innodb_write_io_threads = 4
Thanks

InnoDB works in autocommit mode by default, which means every insert requires writing to disk twice. Using extended inserts (a.k.a. multi-row inserts) and enclosing several consecutive inserts in a single transaction increases performance.
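A minimal sketch of both techniques, using a hypothetical samples table (the table name and columns are made up for illustration):
CREATE TABLE IF NOT EXISTS samples (
  id INT AUTO_INCREMENT PRIMARY KEY,
  sensor_id INT NOT NULL,
  reading DOUBLE NOT NULL
) ENGINE=InnoDB;
START TRANSACTION;
-- one extended (multi-row) insert instead of three single-row statements
INSERT INTO samples (sensor_id, reading) VALUES (1, 20.1), (1, 20.3), (2, 19.8);
INSERT INTO samples (sensor_id, reading) VALUES (2, 19.9), (3, 21.0);
COMMIT;  -- one commit (and one log flush) for the whole batch instead of one per statement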

The reason for the slowdown is that insert operations are first stored in a cache (as dirty pages) and only periodically written to the hard drive. While the dirty-page memory has room, inserts are fast; once the cache is full, insert speed is bottlenecked by disk writes (I/O).
You can use the SQL below and look at "Modified db pages" to see the dirty-page count:
show engine innodb status
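If you prefer a counter you can poll, the same information is exposed as standard InnoDB status variables (not specific to this setup):
show global status like 'Innodb_buffer_pool_pages_dirty';
show global status like 'Innodb_buffer_pool_pages_total';
Watching the dirty-page count climb toward a large fraction of the total while insert speed drops confirms that flushing is the bottleneck.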

Related

MySQL CPU usage is too high

My server configuration is this:
1- (Intel(R) Xeon(R) CPU E5-2673 v3 @ 2.40GHz) x2
2- (DDR4 8GB RAM) x16, 128GB total RAM
And this is the MySQL configuration:
[mysqld]
# log-error=/var/lib/mysql/mysqld.err
# pid-file=/var/run/mysqld/mysqld.pid
innodb_undo_log_truncate = off
# general
table_open_cache = 200000
table_open_cache_instances = 64
back_log = 3500
max_connections = 100000
# files
innodb_file_per_table = ON # New
innodb_log_file_size = 16G # c
innodb_log_files_in_group = 2
innodb_open_files = 4000
# buffers
innodb_buffer_pool_size = 64G # c
innodb_buffer_pool_instances = 24
innodb_log_buffer_size = 64M
key_buffer_size = 64M
# tune
innodb_doublewrite = 1
innodb_thread_concurrency = 0
innodb_flush_log_at_trx_commit = 0
innodb_flush_method = O_DIRECT_NO_FSYNC
innodb_max_dirty_pages_pct = 90
innodb_max_dirty_pages_pct_lwm = 10
# innodb_lru_scan_depth = 2048
innodb_page_cleaners = 4
join_buffer_size = 512K
sort_buffer_size = 512K
innodb_use_native_aio = 1
#innodb_spin_wait_delay = 96
innodb_adaptive_flushing = 1
innodb_flush_neighbors = 0
innodb_read_io_threads = 16
innodb_write_io_threads = 16
innodb_io_capacity = 1500
innodb_io_capacity_max = 2500
innodb_purge_threads = 4
innodb_adaptive_hash_index = 0
max_prepared_stmt_count = 1000000
innodb_monitor_enable = '%'
performance_schema = ON
max_allowed_packet = 268435456
thread_handling = pool-of-threads
My website runs heavy joins and sorts, and there are always at least 200 visitors online.
Responses are too slow, and CPU usage is over 200%:
mysql 0 CPU:220.53% Memory:22.08% /usr/sbin/mysqld
I think my MySQL configuration is wrong.
What should I do?

Lack of swap memory on MariaDB 10.4 Galera cluster

I have a 3-node Galera cluster running MariaDB 10.4.13. Each node has 32GB RAM and 2GB swap. After my MySQL tuning about a month ago, memory on each node is almost full, but I think that is OK. In the last few days, however, swap usage reached its maximum and does not go down. My my.cnf looks like this:
####Slow logging
slow_query_log_file=/var/lib/mysql/mysql-slow.log
long_query_time=2
slow_query_log=ON
log_queries_not_using_indexes=ON
############ INNODB OPTIONS
innodb_buffer_pool_size=24000M
innodb_flush_log_at_trx_commit=2
innodb_file_per_table=1
innodb_data_file_path=ibdata1:100M:autoextend
innodb_read_io_threads=4
innodb_write_io_threads=4
innodb_doublewrite=1
innodb_log_file_size=6144M
innodb_log_buffer_size=96M
innodb_buffer_pool_instances=24
innodb_log_files_in_group=2
innodb_thread_concurrency=0
#### innodb_file_format = barracuda
innodb_flush_method = O_DIRECT
#### innodb_locks_unsafe_for_binlog = 1
innodb_autoinc_lock_mode=2
######## avoid statistics update when doing e.g show tables
innodb_stats_on_metadata=0
default_storage_engine=innodb
innodb_strict_mode = 0
#### OTHER THINGS, BUFFERS ETC
#### key_buffer_size = 24M
tmp_table_size = 1024M
max_heap_table_size = 1024M
max_allowed_packet = 512M
#### sort_buffer_size = 256K
#### read_buffer_size = 256K
#### read_rnd_buffer_size = 512K
#### myisam_sort_buffer_size = 8M
skip_name_resolve
memlock=0
sysdate_is_now=1
max_connections=500
thread_cache_size=512
query_cache_type = 1
query_cache_size = 512M
query_cache_limit=512K
join_buffer_size = 1M
table_open_cache = 116925
open_files_limit = 233850
table_definition_cache = 58863
table_open_cache_instances = 8
lower_case_table_names=0
With this configuration I wanted MariaDB to use as much memory as possible, as long as it does not become critical.
I would like to review this configuration, maybe disable the query_cache part, and adjust the InnoDB values. Please give me some recommendations, and also let me know whether the swap size is adequate, or whether I should prevent MySQL from using swap at all.
Sorry, I don't see much that is exciting here:
Analysis of GLOBAL STATUS and VARIABLES:
Observations:
Version: 10.4.13-MariaDB-log
32 GB of RAM
Uptime = 1d 15:19:41
You are not running on Windows.
Running 64-bit version
You appear to be running entirely (or mostly) InnoDB.
The More Important Issues:
Lower these to the suggested values:
table_open_cache = 10000
tmp_table_size = 200M
max_heap_table_size = 200M
query_cache_size = 0 -- the high value you have can cause mysterious slowdowns
max_connections = 200
thread_cache_size = 20
The I/O settings are pretty much suited to an HDD; do you have an SSD?
There are a lot of SHOW commands -- more than one per second. Perhaps some monitoring tool is excessively aggressive? (See the quick check after this list.)
Why so many GRANTs?
Is this in a Galera cluster?
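A quick check for which SHOW commands dominate (these are standard per-command status counters, listed here as a diagnostic sketch):
SHOW GLOBAL STATUS LIKE 'Com_show%';
Dividing the largest counters by Uptime gives a per-second rate to compare against the observation above.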
Details and other observations:
( Key_blocks_used * 1024 / key_buffer_size ) = 48 * 1024 / 128M = 0.04% -- Percent of key_buffer used. High-water-mark.
-- Lower key_buffer_size (now 134217728) to avoid unnecessary memory usage.
( table_open_cache ) = 116,660 -- Number of table descriptors to cache
-- Several hundred is usually good.
( Open_tables / table_open_cache ) = 4,439 / 116660 = 3.8% -- Cache usage (open tables + tmp tables)
-- Optionally lower table_open_cache (now 116660)
( innodb_buffer_pool_instances ) = 24 -- For large RAM, consider using 1-16 buffer pool instances, not allowing less than 1GB each. Also, not more than, say, twice the number of CPU cores.
-- Recommend no more than 16. (Beginning to go away in 10.5)
( innodb_lru_scan_depth * innodb_buffer_pool_instances ) = 1,024 * 24 = 24,576 -- A metric of CPU usage.
-- Lower either number.
( innodb_lru_scan_depth * innodb_page_cleaners ) = 1,024 * 4 = 4,096 -- Amount of work for page cleaners every second.
-- "InnoDB: page_cleaner: 1000ms intended loop took ..." may be fixable by lowering lru_scan_depth: Consider 1000 / innodb_page_cleaners (now 4). Also check for swapping.
( innodb_page_cleaners / innodb_buffer_pool_instances ) = 4 / 24 = 0.167 -- innodb_page_cleaners
-- Recommend setting innodb_page_cleaners (now 4) to innodb_buffer_pool_instances (now 24)
(Beginning to go away in 10.5)
( innodb_lru_scan_depth ) = 1,024
-- "InnoDB: page_cleaner: 1000ms intended loop took ..." may be fixed by lowering lru_scan_depth
( innodb_io_capacity ) = 200 -- When flushing, use this many IOPs.
-- Reads could be sluggish or spiky.
( Innodb_buffer_pool_pages_free / Innodb_buffer_pool_pages_total ) = 1,065,507 / 1538880 = 69.2% -- Pct of buffer_pool currently not in use
-- innodb_buffer_pool_size (now 25769803776) is bigger than necessary?
( innodb_io_capacity_max / innodb_io_capacity ) = 2,000 / 200 = 10 -- Capacity: max/plain
-- Recommend 2. Max should be about equal to the IOPs your I/O subsystem can handle. (If the drive type is unknown 2000/200 may be a reasonable pair.)
( Innodb_buffer_pool_bytes_data / innodb_buffer_pool_size ) = 7,641,841,664 / 24576M = 29.7% -- Percent of buffer pool taken up by data
-- A small percent may indicate that the buffer_pool is unnecessarily big.
( innodb_log_buffer_size ) = 96M -- Suggest 2MB-64MB, and at least as big as biggest blob set in transactions.
-- Adjust innodb_log_buffer_size (now 100663296).
( Uptime / 60 * innodb_log_file_size / Innodb_os_log_written ) = 141,581 / 60 * 6144M / 2470192128 = 6,154 -- Minutes between InnoDB log rotations Beginning with 5.6.8, this can be changed dynamically; be sure to also change my.cnf.
-- (The recommendation of 60 minutes between rotations is somewhat arbitrary.) Adjust innodb_log_file_size (now 6442450944). (Cannot change in AWS.)
( default_tmp_storage_engine ) = default_tmp_storage_engine =
( innodb_flush_neighbors ) = 1 -- A minor optimization when writing blocks to disk.
-- Use 0 for SSD drives; 1 for HDD.
( innodb_io_capacity ) = 200 -- I/O ops per second capable on disk . 100 for slow drives; 200 for spinning drives; 1000-2000 for SSDs; multiply by RAID factor.
( sync_binlog ) = 0 -- Use 1 for added security, at some cost of I/O =1 may lead to lots of "query end"; =0 may lead to "binlog at impossible position" and lose transactions in a crash, but is faster.
( innodb_print_all_deadlocks ) = innodb_print_all_deadlocks = OFF -- Whether to log all Deadlocks.
-- If you are plagued with Deadlocks, turn this on. Caution: If you have lots of deadlocks, this may write a lot to disk.
( min( tmp_table_size, max_heap_table_size ) ) = (min( 1024M, 1024M )) / 32768M = 3.1% -- Percent of RAM to allocate when needing MEMORY table (per table), or temp table inside a SELECT (per temp table per some SELECTs). Too high may lead to swapping.
-- Decrease tmp_table_size (now 1073741824) and max_heap_table_size (now 1073741824) to, say, 1% of ram.
( character_set_server ) = character_set_server = latin1
-- Charset problems may be helped by setting character_set_server (now latin1) to utf8mb4. That is the future default.
( local_infile ) = local_infile = ON
-- local_infile (now ON) = ON is a potential security issue
( query_cache_size ) = 512M -- Size of QC
-- Too small = not of much use. Too large = too much overhead. Recommend either 0 or no more than 50M.
( Qcache_hits / (Qcache_hits + Com_select) ) = 8,821 / (8821 + 5602645) = 0.16% -- Hit ratio -- SELECTs that used QC
-- Consider turning off the query cache.
( (query_cache_size - Qcache_free_memory) / Qcache_queries_in_cache / query_alloc_block_size ) = (512M - 48787272) / 224183 / 16384 = 0.133 -- query_alloc_block_size vs formula
-- Adjust query_alloc_block_size (now 16384)
( tmp_table_size ) = 1024M -- Limit on size of MEMORY temp tables used to support a SELECT
-- Decrease tmp_table_size (now 1073741824) to avoid running out of RAM. Perhaps no more than 64M.
( Com_admin_commands / Queries ) = 888,691 / 6680823 = 13.3% -- Percent of queries that are "admin" commands.
-- What's going on?
( Slow_queries / Questions ) = 438,188 / 6557866 = 6.7% -- Frequency (% of all queries)
-- Find slow queries; check indexes.
( log_queries_not_using_indexes ) = log_queries_not_using_indexes = ON -- Whether to include such in slowlog.
-- This clutters the slowlog; turn it off so you can see the real slow queries. And decrease long_query_time (now 2) to catch most interesting queries.
( Uptime_since_flush_status ) = 451 = 7m 31s -- How long (in seconds) since FLUSH STATUS (or server startup).
-- GLOBAL STATUS has not been gathered long enough to get reliable suggestions for many of the issues. Fix what you can, then come back in several hours.
( Max_used_connections / max_connections ) = 25 / 500 = 5.0% -- Peak % of connections
-- Since several memory factors can expand based on max_connections (now 500), it is good not to have that setting too high.
( thread_cache_size / Max_used_connections ) = 500 / 25 = 2000.0%
-- There is no advantage in having the thread cache bigger than your likely number of connections. Wasting space is the disadvantage.
Abnormally small:
Innodb_dblwr_pages_written / Innodb_dblwr_writes = 2.28
aria_checkpoint_log_activity = 1.05e+6
aria_pagecache_buffer_size = 128MB
innodb_buffer_pool_chunk_size = 128MB
innodb_max_undo_log_size = 10MB
innodb_online_alter_log_max_size = 128MB
innodb_sort_buffer_size = 1.05e+6
innodb_spin_wait_delay = 4
lock_wait_timeout = 86,400
performance_schema_max_mutex_classes = 0
query_cache_limit = 524,288
Abnormally large:
Acl_column_grants = 216
Acl_database_grants = 385
Acl_table_grants = 1,877
Innodb_buffer_pool_pages_free = 1.07e+6
Innodb_num_open_files = 9,073
Memory_used_initial = 8.16e+8
Open_table_definitions = 4,278
Open_tables = 4,439
Performance_schema_file_instances_lost = 1,732
Performance_schema_mutex_classes_lost = 190
Performance_schema_table_handles_lost = 570
Qcache_free_blocks = 9,122
Qcache_total_blocks = 457,808
Tc_log_page_size = 4,096
Uptime - Uptime_since_flush_status = 141,130
aria_sort_buffer_size = 256.0MB
auto_increment_offset = 3
gtid_domain_id = 12,000
innodb_open_files = 116,660
max_heap_table_size = 1024MB
max_relay_log_size = 1024MB
min(max_heap_table_size, tmp_table_size) = 1024MB
performance_schema_events_stages_history_size = 20
performance_schema_events_statements_history_size = 20
performance_schema_events_waits_history_size = 20
performance_schema_max_cond_classes = 90
table_definition_cache = 58,863
table_open_cache / max_connections = 233
tmp_memory_table_size = 1024MB
wsrep_cluster_size = 3
wsrep_gtid_domain_id = 12,000
wsrep_local_bf_aborts = 107
wsrep_slave_threads = 32
wsrep_thread_count = 33
Abnormal strings:
aria_recover_options = BACKUP,QUICK
disconnect_on_expired_password = OFF
gtid_ignore_duplicates = ON
gtid_strict_mode = ON
histogram_type = DOUBLE_PREC_HB
innodb_fast_shutdown = 1
myisam_stats_method = NULLS_UNEQUAL
old_alter_table = DEFAULT
opt_s__optimize_join_buffer_size = on
optimizer_trace = enabled=off
use_stat_tables = PREFERABLY_FOR_QUERIES
wsrep_cluster_status = Primary
wsrep_connected = ON
wsrep_debug = NONE
wsrep_gtid_mode = ON
wsrep_load_data_splitting = OFF
wsrep_provider = /usr/lib64/galera-4/libgalera_smm.so
wsrep_provider_name = Galera
wsrep_provider_options = base_dir = /var/lib/mysql/; base_host = FIRST_NODE_IP; base_port = 4567; cert.log_conflicts = no; cert.optimistic_pa = yes; debug = no; evs.auto_evict = 0; evs.causal_keepalive_period = PT1S; evs.debug_log_mask = 0x1; evs.delay_margin = PT1S; evs.delayed_keep_period = PT30S; evs.inactive_check_period = PT0.5S; evs.inactive_timeout = PT15S; evs.info_log_mask = 0; evs.install_timeout = PT7.5S; evs.join_retrans_period = PT1S; evs.keepalive_period = PT1S; evs.max_install_timeouts = 3; evs.send_window = 4; evs.stats_report_period = PT1M; evs.suspect_timeout = PT5S; evs.use_aggregate = true; evs.user_send_window = 2; evs.version = 1; evs.view_forget_timeout = P1D; gcache.dir = /var/lib/mysql/; gcache.keep_pages_size = 0; gcache.mem_size = 0; gcache.name = galera.cache; gcache.page_size = 128M; gcache.recover = yes; gcache.size = 1024M; gcomm.thread_prio = ; gcs.fc_debug = 0; gcs.fc_factor = 1.0; gcs.fc_limit = 16; gcs.fc_master_slave = no; gcs.max_packet_size = 64500; gcs.max_throttle = 0.25; gcs.recv_q_hard_limit = 9223372036854775807; gcs.recv_q_soft_limit = 0.25; gcs.sync_donor = no; gmcast.listen_addr = tcp://0.0.0.0:4567; gmcast.mcast_addr = ; gmcast.mcast_ttl = 1; gmcast.peer_timeout = PT3S; gmcast.segment = 0; gmcast.time_wait = PT5S; gmcast.version = 0; ist.recv_addr = FIRST_NODE_IP; pc.announce_timeout = PT3S; pc.checksum = false; pc.ignore_quorum = false; pc.ignore_sb = false; pc.linger = PT20S; pc.npvo = false; pc.recovery = true; pc.version = 0; pc.wait_prim = true; pc.wait_prim_timeout = PT30S; pc.weight = 1; protonet.backend = asio; protonet.version = 0; repl.causal_read_timeout = PT30S; repl.commit_order = 3; repl.key_format = FLAT8; repl.max_ws_size = 2147483647; repl.proto_max = 10; socket.checksum = 2; socket.recv_buf_size = auto; socket.send_buf_size = auto;
wsrep_provider_vendor = Codership Oy
wsrep_provider_version = 26.4.4(r4599)
wsrep_replicate_myisam = ON
wsrep_sst_auth = ********
wsrep_sst_method = mariabackup
wsrep_start_position = 353e0616-cb37-11ea-b614-be241cab877e:39442474
None of these is necessarily too big, but there may be things going on that conspire to make them too big, especially when combined:
innodb_buffer_pool_size=24000M -- quick fix: lower this
(otherwise it should be a good size)
tmp_table_size = 1024M -- lower to 1% of RAM
max_heap_table_size = 1024M -- ditto
max_allowed_packet = 512M -- possibly too big
max_connections=500 -- lower to Max_used_connections or 100
query_cache_type = 1 -- 0 -- QC is not allowed on Galera
query_cache_size = 512M -- 0 -- ditto
table_open_cache = 116925 -- see how 2000 works
table_definition_cache = 58863 -- ditto
For further analysis, provide GLOBAL STATUS and VARIABLES as discussed here: http://mysql.rjweb.org/doc.php/mysql_analysis#tuning
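If you want to try the quick fixes without waiting for a restart, most of them are dynamic variables in MariaDB 10.4. A sketch using the values suggested above (mirror the same settings in my.cnf so they survive the next restart):
SET GLOBAL query_cache_size = 0;            -- the QC is not allowed on Galera
SET GLOBAL query_cache_type = 0;
SET GLOBAL tmp_table_size = 200*1024*1024;  -- suggested 200M
SET GLOBAL max_heap_table_size = 200*1024*1024;
SET GLOBAL table_open_cache = 2000;         -- "see how 2000 works"
SET GLOBAL table_definition_cache = 2000;
SET GLOBAL max_connections = 200;
SET GLOBAL thread_cache_size = 20;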

MySQL 5.7 max_connections reset after reboot

We have a set of PXC clusters, each with slaves that we use for reads. The slaves run Percona Server 5.7 with "max_connections" and "max_user_connections" set to 4000 and 4050 respectively. But every time we reboot the slaves, these values are automatically reset to the out-of-the-box defaults, causing a lot of performance issues. Is this a bug, or are we missing something in our config?
Below is our config file (SSD):
#
# Default values.
[mysqld_safe]
flush_caches
numa_interleave
#
#
[mysqld]
back_log = 65535
binlog_format = ROW
character_set_server = utf8
collation_server = utf8_general_ci
#core_file
datadir = /var/lib/mysql
default_storage_engine = InnoDB
enforce-gtid-consistency = 1
expand_fast_index_creation = 1
expire_logs_days = 2
gtid_mode = ON
innodb_autoinc_lock_mode = 2
innodb_buffer_pool_instances = 64
innodb_buffer_pool_populate = 1
innodb_buffer_pool_size = 67G #77G
innodb_data_file_path = ibdata1:64M;ibdata2:64M:autoextend
innodb_file_format = Barracuda
innodb_file_per_table
#innodb_flush_log_at_trx_commit = 2
innodb_flush_log_at_trx_commit = 0
innodb_flush_method = O_DIRECT
innodb_io_capacity = 20000
innodb_large_prefix
innodb_locks_unsafe_for_binlog = 1
#innodb_log_file_size = 64M
innodb_log_file_size = 1G
innodb_print_all_deadlocks = 1
innodb_read_io_threads = 64
innodb_stats_on_metadata = FALSE
innodb_support_xa = FALSE
innodb_write_io_threads = 64
log-bin = mysqld-bin
#log-queries-not-using-indexes
log-slave-updates
long_query_time = 1
master_info_repository = TABLE
max_allowed_packet = 64M
max_connect_errors = 4294967295
max_connections = 4000
max_user_connections = 4050
min_examined_row_limit = 1000
port = 3306
read-only = 1
relay_log_info_repository = TABLE
relay-log-recovery = TRUE
skip-name-resolve
slave_parallel_workers = 8
slow_query_log = 1
slow_query_log_timestamp_always = 1
table_open_cache = 4096
thread_cache = 1024
tmpdir = /srv/tmp
transaction_isolation = REPEATABLE-READ
updatable_views_with_limit = 0
user = mysql
wait_timeout = 60
userstat
#innodb_buffer_pool_load_at_startup=1
#innodb_buffer_pool_dump_at_shutdown=1
#skip_slave_start
#
##for grafana dashboard monitoring
#query_response_time_stats = on
userstat = 1
server-id = 1019244
Go to /etc/my.cnf and, under the [mysqld] section, set
max_connections = 1000
then restart mysql.
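If the value still resets after a restart, first confirm what the running server actually sees:
SHOW GLOBAL VARIABLES LIKE 'max_connections';
Also worth knowing: MySQL 5.7 silently lowers max_connections when the operating system's open-files limit is too small for the requested value (the error log then contains a "Changed limits: max_connections: ..." warning), which can make a correctly configured value appear to reset on every reboot.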

Tuning MySQL my.cnf for Many Users

I'm looking for some help tuning our my.cnf file to handle many concurrent users (about 20 orders/minute). This particular site runs WordPress with WooCommerce.
After a lot of reading online, I've come up with the settings below. The server is Debian 8 with 12 CPUs and 48GB RAM.
query_cache_type = 1
query_cache_limit = 256K
query_cache_min_res_unit = 2k
query_cache_size = 80M
key_buffer = 2G
max_allowed_packet = 512M
thread_cache_size = 256K
tmp_table_size = 4G
max_heap_table_size = 4G
table_cache = 1024
table_definition_cache = 1024
myisam_recover = FORCE,BACKUP
max_connections = 300
wait_timeout = 120
connect_timeout = 120
interactive_timeout = 120
innodb_flush_method = O_DIRECT
innodb_flush_log_at_trx_commit = 2
innodb_file_per_table = 1
innodb_buffer_pool_size = 2G
innodb_io_capacity = 1000
innodb_read_io_threads = 32
innodb_thread_concurrency = 0
innodb_write_io_threads = 32
It seems to be running pretty well for now. Any additional thoughts? Thanks for your input!
query_cache_type = 1
query_cache_limit = 256K
query_cache_min_res_unit = 2k
query_cache_size = 80M
See https://dom.as/tech/query-cache-tuner/
key_buffer = 2G
Key buffer is only for MyISAM. You shouldn't use MyISAM.
max_allowed_packet = 512M
thread_cache_size = 256K
tmp_table_size = 4G
max_heap_table_size = 4G
4G is probably way too high for the tmp table and heap table sizes. Keep in mind multiple threads can create temp tables concurrently.
table_cache = 1024
table_definition_cache = 1024
Probably overkill.
myisam_recover = FORCE,BACKUP
Also used only for MyISAM.
max_connections = 300
What does show global status like 'max_used_connections' say? Is it close to max_connections?
wait_timeout = 120
connect_timeout = 120
interactive_timeout = 120
innodb_flush_method = O_DIRECT
innodb_flush_log_at_trx_commit = 2
innodb_file_per_table = 1
innodb_buffer_pool_size = 2G
Fine but with 48G of RAM, you can probably increase the buffer pool size.
Run show engine innodb status and look for these lines:
Buffer pool size 131072
Free buffers 0
Database pages 128000
Is your buffer pool always pegged full? If so, increase its size.
What's the total size of your database? You don't need the buffer pool to be larger than the total size of your data+indexes, but large enough to hold the frequently-accessed pages would be good.
select round(sum(data_length+index_length)/1024/1024, 2) as mb
from information_schema.tables where engine='InnoDB';
innodb_io_capacity = 1000
innodb_read_io_threads = 32
innodb_thread_concurrency = 0
innodb_write_io_threads = 32
The io capacity might be greater than what your disks can actually deliver; you didn't describe your disk system.
The io threads setting is way overkill for a WordPress site with the traffic you describe. Run show engine innodb status and look for this line:
Pending normal aio reads: [0, 0, 0, 0] , aio writes: [0, 0, 0, 0] ,
If you always see 0's on that line, you don't need more than the default 4 & 4 io threads.
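As a cross-check on both points above (buffer-pool fullness and pending I/O), the same numbers are exposed as standard status counters, which are easier to poll than the status text:
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages%';
SHOW GLOBAL STATUS LIKE 'Innodb_data_pending%';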

MySQL TokuDB engine using too much CPU

I have converted the tables of a database from InnoDB to TokuDB, and I noticed that with TokuDB, reads use far more CPU. Why is this?
To be more specific, the server with the TokuDB tables is a slave of a server running InnoDB that is part of a PXC cluster. The slave runs regular Percona Server, not PXC. It seems to be using far too much CPU and I do not know why.
Below is my my.cnf config:
[client]
port = 3306
socket = /var/run/mysqld/mysqld.sock
[mysqld_safe]
thp-setting=never
socket = /var/run/mysqld/mysqld.sock
nice = 0
flush_caches
numa_interleave
core-file-size = unlimited
open_files_limit = 1024
[mysqld]
back_log = 65535
bind-address = 0.0.0.0
binlog_format = ROW
character_set_server = utf8
collation_server = utf8_general_ci
core_file
basedir = /usr
datadir = /var/lib/mysql
#default_storage_engine = InnoDB
enforce-gtid-consistency = 1
expand_fast_index_creation = 1
expire_logs_days = 7
gtid_mode = ON
innodb_autoinc_lock_mode = 2
innodb_buffer_pool_instances = 1
innodb_buffer_pool_populate = 1
innodb_buffer_pool_size = 512M
innodb_data_file_path = ibdata1:64M;ibdata2:64M:autoextend
innodb_file_format = Barracuda
innodb_file_per_table
innodb_force_recovery = 1
innodb_flush_log_at_trx_commit = 2
innodb_flush_method = O_DIRECT
innodb_io_capacity = 1600
innodb_large_prefix
innodb_locks_unsafe_for_binlog = 1
innodb_log_file_size = 64M
innodb_print_all_deadlocks = 1
innodb_read_io_threads = 64
innodb_stats_on_metadata = FALSE
innodb_support_xa = FALSE
innodb_write_io_threads = 64
lc-messages-dir = /usr/share/mysql
log-bin = mysqld-bin
log-queries-not-using-indexes
log-slave-updates
long_query_time = 1
master_info_repository = TABLE
max_allowed_packet = 64M
max_connect_errors = 4294967295
max_connections = 2500
max_user_connections = 2550
min_examined_row_limit = 1000
open_files_limit = 1024
port = 3306
relay_log_info_repository = TABLE
relay-log-recovery = TRUE
relay-log-recovery = 1
skip-external-locking
skip-name-resolve
slave_parallel_workers = 8
slow_query_log = 1
slow_query_log_timestamp_always = 1
socket = /var/run/mysqld/mysqld.sock
table_open_cache = 4096
thread_cache = 1024
tmpdir = /srv/tmp
transaction_isolation = REPEATABLE-READ
updatable_views_with_limit = 0
user = mysql
wait_timeout = 60
server-id = 2
# TokuDB fine tuning
default_storage_engine = TokuDB
tokudb_analyze_time = 5
#tokudb_cache_size = 6G
tokudb_directio = 1
tokudb_commit_sync = 0
tokudb_fsync_log_period = 1000
tokudb_load_save_space =1
tokudb_alter_print_error=0
tokudb_block_size = 4MB
tokudb_bulk_fetch = 1
tokudb_disable_slow_alter = 1
tokudb_last_lock_timeout = empty
tokudb_row_format = tokudb_quicklz
#tokudb_data_dir = /var/lib/tokudb
[mysqldump]
quick
quote-names
max_allowed_packet = 16M
[mysql]
#no-auto-rehash # faster start of mysql but no tab completion
[isamchk]
key_buffer = 16M
!includedir /etc/mysql/conf.d/
The following replication messages were reported by our monitoring system xymon when tokudb_cache_size was initially set to 80% of total RAM.
2016-02-25 16:42:04 9604 [Warning] Neither --relay-log nor --relay-log-index were used; so replication may break when this MySQL server acts as a slave and has his hostname changed!! Please use '--relay-log=db-kdb-slave-6-relay-bin' to avoid this problem.
2016-02-25 16:42:05 9604 [Warning] Recovery from master pos 552554502 and file mysqld-bin.001163. Previous relay log pos and relay log file had been set to 552554714, ./db-kdb-slave-6-relay-bin.002933 respectively.
2016-02-25 16:42:05 9604 [Warning] Storing MySQL user name or password information in the master info repository is not secure and is therefore not recommended. Please consider using the USER and PASSWORD connection options for START SLAVE; see the 'START SLAVE Syntax' in the MySQL Manual for more information.
------More info about the Master server running InnoDB and part of PXC-----------
## Results from top
top - 10:05:12 up 14 days, 7:56, 2 users, load average: 2.16, 2.31, 2.39
Tasks: 413 total, 1 running, 412 sleeping, 0 stopped, 0 zombie
%Cpu(s): 8.9 us, 0.6 sy, 0.0 ni, 89.9 id, 0.3 wa, 0.0 hi, 0.2 si, 0.0 st
KiB Mem: 65704012 total, 63553216 used, 2150796 free, 169832 buffers
KiB Swap: 975868 total, 809892 used, 165976 free. 16304268 cached Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
2485 mysql 20 0 60.146g 0.045t 2.612g S 314.9 73.3 27762:43 mysqld
## disk info
george@db-erp-3:~$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 32G 8.0K 32G 1% /dev
tmpfs 6.3G 1.2M 6.3G 1% /run
/dev/sda2 274G 2.1G 258G 1% /
none 4.0K 0 4.0K 0% /sys/fs/cgroup
none 5.0M 0 5.0M 0% /run/lock
none 32G 0 32G 0% /run/shm
none 100M 0 100M 0% /run/user
/dev/nvme0n1p1 1.1T 542G 503G 52% /srv
na1:/vol/yphome 4.5T 3.7T 875G 82% /net/account
## Memory info
george@db-erp-3:~$ free -g
total used free shared buffers cached
Mem: 62 60 2 0 0 15
-/+ buffers/cache: 44 17
Swap: 0 0 0
george@db-erp-3:~$
## Database info
+--------------------+----------------------+
| Data Base Name | Data Base Size in MB |
+--------------------+----------------------+
| information_schema | 0.00976563 |
| dberp | 347143.32031250 |
| mysql | 2.11562061 |
| performance_schema | 0.00000000 |
+--------------------+----------------------+
4 rows in set (0.13 sec)
+--------------------+----------------------+------------------+
| Data Base Name | Data Base Size in MB | Free Space in MB |
+--------------------+----------------------+------------------+
| information_schema | 0.00976563 | 0.00000000 |
| dberp | 347143.32031250 | 6270.00000000 |
| mysql | 2.11562061 | 4.00199127 |
| performance_schema | 0.00000000 | 0.00000000 |
+--------------------+----------------------+------------------+
4 rows in set (0.03 sec)
Your CPU will be higher for reads because TokuDB data needs to be decompressed before it can be used. Also, if this slave is processing any activity from the master, then it is also doing compression for the insert/update/delete activity.
A couple of ideas (see the check sketched after this list):
1. Reduce the value of tokudb_block_size. While 4MB is great for compression, it means your point queries have to decompress far more data than necessary. Try 256KB and see how CPU and performance change. You might have to rebuild your slave to accomplish this easily (I'm now over a year away from working at TokuDB).
2. Look at your tokudb_cache_size. It defaults to 50% of RAM, but if nothing else is on this server you should raise it to somewhere between 75% and 80%. This will mean fewer reads and less decompression, since more data will be in your cache.
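Before editing my.cnf, it can help to confirm what the slave is currently running with (standard SHOW syntax; both values are reported in bytes):
SHOW GLOBAL VARIABLES LIKE 'tokudb_cache_size';
SHOW GLOBAL VARIABLES LIKE 'tokudb_block_size';
Note that tokudb_cache_size can only be set at server startup, so the new value (for example 48G on this 64GB box, assuming the server is dedicated to MySQL) has to go into my.cnf before the restart.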