I'm looking for some help tuning our my.cnf file to handle many concurrent users (about 20 orders/minute). This particular site runs WordPress with WooCommerce.
After a lot of reading online, I've come up with the settings below. The server is Debian 8 with 12 CPUs and 48GB RAM.
query_cache_type = 1
query_cache_limit = 256K
query_cache_min_res_unit = 2k
query_cache_size = 80M
key_buffer = 2G
max_allowed_packet = 512M
thread_cache_size = 256K
tmp_table_size = 4G
max_heap_table_size = 4G
table_cache = 1024
table_definition_cache = 1024
myisam_recover = FORCE,BACKUP
max_connections = 300
wait_timeout = 120
connect_timeout = 120
interactive_timeout = 120
innodb_flush_method = O_DIRECT
innodb_flush_log_at_trx_commit = 2
innodb_file_per_table = 1
innodb_buffer_pool_size = 2G
innodb_io_capacity = 1000
innodb_read_io_threads = 32
innodb_thread_concurrency = 0
innodb_write_io_threads = 32
It seems to be running pretty well for now. Any additional thoughts? Thanks for your input!
query_cache_type = 1
query_cache_limit = 256K
query_cache_min_res_unit = 2k
query_cache_size = 80M
See https://dom.as/tech/query-cache-tuner/
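As a quick sanity check on whether the query cache is helping at all, you can compare its hit counters (standard status variables):
SHOW GLOBAL STATUS LIKE 'Qcache%';
-- Compare Qcache_hits to Qcache_inserts and Com_select; lots of
-- Qcache_lowmem_prunes suggests the cache is mostly overhead on a
-- write-heavy WooCommerce workload.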
key_buffer = 2G
Key buffer is only for MyISAM. You shouldn't use MyISAM.
max_allowed_packet = 512M
thread_cache_size = 256K
tmp_table_size = 4G
max_heap_table_size = 4G
4G is probably way too high for the tmp table and heap table sizes. Keep in mind multiple threads can create temp tables concurrently.
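If you want data before shrinking them, the standard counters show how often in-memory temp tables spill to disk:
SHOW GLOBAL STATUS LIKE 'Created_tmp%';
-- A high Created_tmp_disk_tables relative to Created_tmp_tables means
-- queries are spilling; note temp tables also go to disk for other
-- reasons (e.g. TEXT/BLOB columns), so raising tmp_table_size doesn't
-- always help.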
table_cache = 1024
table_definition_cache = 1024
Probably overkill.
myisam_recover = FORCE,BACKUP
Also used only for MyISAM.
max_connections = 300
What does show global status like 'max_used_connections' say? Is it close to max_connections?
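For reference, the exact statement (a standard status variable, safe to run anytime):
SHOW GLOBAL STATUS LIKE 'Max_used_connections';
-- If this high-water mark sits well below max_connections,
-- 300 is more headroom than you need.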
wait_timeout = 120
connect_timeout = 120
interactive_timeout = 120
innodb_flush_method = O_DIRECT
innodb_flush_log_at_trx_commit = 2
innodb_file_per_table = 1
innodb_buffer_pool_size = 2G
Fine but with 48G of RAM, you can probably increase the buffer pool size.
Run show engine innodb status and look for these lines:
Buffer pool size 131072
Free buffers 0
Database pages 128000
Is your buffer pool always pegged full? If so, increase its size.
What's the total size of your database? The buffer pool doesn't need to be larger than your total data + indexes, but it should be large enough to hold the frequently-accessed pages:
select round(sum(data_length+index_length)/1024/1024, 2) as mb
from information_schema.tables where engine='InnoDB';
innodb_io_capacity = 1000
innodb_read_io_threads = 32
innodb_thread_concurrency = 0
innodb_write_io_threads = 32
The io capacity might be greater than the ability of your disks to keep up. You didn't describe what your disk system is.
The io threads settings are way overkill for a WordPress site with the traffic you describe. Run show engine innodb status and look for this line:
Pending normal aio reads: [0, 0, 0, 0] , aio writes: [0, 0, 0, 0] ,
If you always see 0's on that line, you don't need more than the default 4 & 4 io threads.
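If so, a minimal sketch of the change (these settings are not dynamic, so a restart is required):
innodb_read_io_threads = 4
innodb_write_io_threads = 4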
Related
Please, what is the best InnoDB configuration? MySQL is eating CPU on my Linux server.
This is my configuration, but I'm not sure it's the best setup:
innodb_buffer_pool_size =24G
innodb_log_file_size = 1G
innodb_buffer_pool_instances = 4
join_buffer_size = 1G
max_heap_table_size = 1G
thread_cache_size = 32
#max_allowed_packet = 1600M
max_allowed_packet = 100M
tmp_table_size = 1G
innodb_buffer_pool_chunk_size=3G
We are running multiple scripts that calculate data. The scripts are written in PHP (Laravel) and triggered by a cron job every minute. What I have noticed is that when the cron spawns more than 300 of those processes, MySQL starts refusing connections ("Connection refused"). The issue doesn't happen with fewer than 300 processes. I have already increased max_connections to 1000 and back_log to 500 in my.cnf, but the issue persists (and yes, the MySQL service was restarted). I have seen someone say that back_log is also capped at the OS level, but I can't find any article on how to adjust that. Any thoughts?
Here are some config values:
max_connections = 1000
back_log = 500
connect_timeout = 5
wait_timeout = 600
max_allowed_packet = 64M
thread_cache_size = 256
sort_buffer_size = 128M
bulk_insert_buffer_size = 128M
tmp_table_size = 128M
max_heap_table_size = 1G
myisam_recover_options = BACKUP
key_buffer_size = 128M
#open-files-limit = 2000
table_open_cache = 400
myisam_sort_buffer_size = 512M
concurrent_insert = 2
read_buffer_size = 8M
read_rnd_buffer_size = 4M
innodb_buffer_pool_size = 80G
innodb_log_buffer_size = 256M
innodb_file_per_table = 1
innodb_open_files = 400
innodb_io_capacity = 400
innodb_flush_method = O_DIRECT
I think it could be a performance issue. Did you monitor the MySQL server's CPU usage?
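On the OS-level part of your question: on Linux, the backlog MySQL requests via back_log is capped by the kernel's net.core.somaxconn, so raising back_log alone may not take effect. A sketch with illustrative values:
# /etc/sysctl.conf -- illustrative values, tune to your connection burst
net.core.somaxconn = 1024
net.ipv4.tcp_max_syn_backlog = 1024
Apply with sysctl -p and restart MySQL so it re-listens with the larger backlog. Separately, note that 1000 connections each allowed a 128M sort_buffer_size can exhaust RAM during a burst long before the backlog matters.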
We have installed MariaDB with the ColumnStore engine, and for the last few weeks we have been facing a memory choking issue: memory gets exhausted and all our DML/DDL operations get stuck. Restarting the services fixes it until the next time.
Below are the stats:
total used free shared buff/cache available
Mem: 15 2 7 0 5 12
Swap: 4 0 4
[mysqld]
port = 3306
socket = /opt/evolv/mariadb/columnstore/mysql/lib/mysql/mysql.sock
datadir = /opt/evolv/mariadb/columnstore/mysql/db
skip-external-locking
key_buffer_size = 512M
max_allowed_packet = 1M
table_cache = 512
sort_buffer_size = 64M
read_buffer_size = 64M
read_rnd_buffer_size = 512M
myisam_sort_buffer_size = 64M
thread_cache_size = 8
query_cache_size = 0
# Try number of CPU's*2 for thread_concurrency
#thread_concurrency = 8
thread_stack = 512K
lower_case_table_names=1
group_concat_max_len=512
infinidb_use_import_for_batchinsert=1
# You can set .._buffer_pool_size up to 50 - 80 %
# of RAM but beware of setting memory usage too high
innodb_buffer_pool_size = 8192M
#innodb_additional_mem_pool_size = 20M
# Set .._log_file_size to 25 % of buffer pool size
#innodb_log_file_size = 100M
#innodb_log_buffer_size = 8M
#innodb_flush_log_at_trx_commit = 1
#innodb_lock_wait_timeout = 50
Here's an analysis of the VARIABLES and (suspicious) GLOBAL STATUS; nothing exciting:
Observations:
Version: 10.1.26-MariaDB
15 GB of RAM
Uptime = 03:04:25; Please rerun SHOW GLOBAL STATUS after several hours.
Are you sure this was a SHOW GLOBAL STATUS ?
You are not running on Windows.
Running 64-bit version
You appear to be running entirely (or mostly) InnoDB.
The More Important Issues:
Uptime = 03:04:25; Please rerun SHOW GLOBAL STATUS after several hours.
Are you sure this was a SHOW GLOBAL STATUS ?
key_buffer_size is excessively large (3G). If you don't need MyISAM for anything, set it to 50M.
Check infinidb_um_mem_limit to see if it makes sense for your application.
Suggest lowering innodb_buffer_pool_size to 2G until the "choking" is figured out.
Details and other observations:
( (key_buffer_size - 1.2 * Key_blocks_used * 1024) / _ram ) = (3072M - 1.2 * 0 * 1024) / 15360M = 20.0% -- Percent of RAM wasted in key_buffer.
-- Decrease key_buffer_size.
( Key_blocks_used * 1024 / key_buffer_size ) = 0 * 1024 / 3072M = 0 -- Percent of key_buffer used. High-water-mark.
-- Lower key_buffer_size to avoid unnecessary memory usage (see the sketch after this list).
( innodb_buffer_pool_size / _ram ) = 6144M / 15360M = 40.0% -- % of RAM used for InnoDB buffer_pool
( Innodb_buffer_pool_pages_free * 16384 / innodb_buffer_pool_size ) = 392,768 * 16384 / 6144M = 99.9% -- buffer pool free
( innodb_print_all_deadlocks ) = innodb_print_all_deadlocks = OFF -- Whether to log all Deadlocks.
-- If you are plagued with Deadlocks, turn this on. Caution: If you have lots of deadlocks, this may write a lot to disk.
( local_infile ) = local_infile = ON
-- local_infile = ON is a potential security issue
( expire_logs_days ) = 0 -- How soon to automatically purge binlog (after this many days)
-- Too large (or zero) = consumes disk space; too small = need to respond quickly to network/machine crash.
(Not relevant if log_bin = OFF)
( long_query_time ) = 5 -- Cutoff (Seconds) for defining a "slow" query.
-- Suggest 2
Abnormally large:
read_buffer_size = 32MB
Acl_database_grants = 780
Acl_proxy_users = 4
Acl_users = 281
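As flagged above, key_buffer_size is a dynamic variable, so you can shrink it to the suggested 50M without a restart (a sketch):
SET GLOBAL key_buffer_size = 50 * 1024 * 1024;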
Columnstore.xml
95% of all memory??
<MemoryCheckPercent>95</MemoryCheckPercent> <!-- Max real memory to limit growth of buffers to -->
<DataFileLog>OFF</DataFileLog>
I guess this is not relevant, since it is commented out??
<!-- enable if you want to limit how much memory may be used for hdfs read/write memory buffers.
<hdfsRdwrBufferMaxSize>8G</hdfsRdwrBufferMaxSize>
-->
Keep in mind that MySQL itself, apart from ColumnStore, is consuming a lot of memory:
<TotalUmMemory>25%</TotalUmMemory>
<TotalPmUmMemory>10%</TotalPmUmMemory>
I have a problem with my production MySQL server. Memory usage keeps growing and I don't know why. The trouble started when we moved to a new server.
My mysql version: 5.5.44-0+deb8u1-log - (Debian).
My my.cnf file:
[mysqld_safe]
pid-file = /var/run/mysqld/mysqld.pid
socket = /var/run/mysqld/mysqld.sock
nice = 0
[mysqld]
skip-external-locking
key_buffer_size = 5M
max_allowed_packet = 16M
thread_stack = 192K
thread_cache_size = 8
tmp_table_size = 384M
max_heap_table_size = 384M
table_open_cache = 7000
open_files_limit = 14000
interactive_timeout=3600
wait_timeout=3600
myisam-recover_options = BACKUP
max_connections = 150
query_cache_limit = 8M
query_cache_size = 127M
slow_query_log_file = /var/log/mysql/mysql-slow.log
slow_query_log = 1
long_query_time = 2
expire_logs_days = 10
max_binlog_size = 100M
innodb_file_per_table
innodb_buffer_pool_instances = 9
innodb_buffer_pool_size = 10000M
innodb_log_file_size = 2000M
Is something wrong in my configuration?
EDIT
For production we have a dedicated server:
4 x Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz
20 GB RAM
40 GB on HDD; the database is currently 15 GB
The database currently contains 650 tables, all on the InnoDB engine.
Screenshot of htop showing the server's processes (image not included).
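For a rough sense of whether this config can outgrow 20 GB, here is a commonly used back-of-envelope query (an upper bound, not what MySQL actually allocates; real usage depends on workload):
select ( @@innodb_buffer_pool_size + @@query_cache_size
  + @@max_connections * ( @@sort_buffer_size + @@read_buffer_size
  + @@read_rnd_buffer_size + @@join_buffer_size + @@thread_stack
  + @@tmp_table_size ) ) / 1024 / 1024 / 1024 as max_memory_gb;
With a 10000M buffer pool, a 127M query cache, and 150 connections each allowed a 384M in-memory temp table, the theoretical ceiling is far above 20 GB, so growth under load is expected rather than necessarily a leak.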
I have a web server with Apache and MySQL running on AWS EC2 t2.small with Windows 2012 Server. AWS EC2 t2.small characteristics:
RAM 2 GB (used 65%)
1 CPU 2.50 GHz (used 1%)
Right now the MySQL process (mysqld.exe) uses 400 MB of RAM (too much for me).
MySQL current settings are (my.ini):
key_buffer = 16M
max_allowed_packet = 16M
sort_buffer_size = 512K
net_buffer_length = 8K
read_buffer_size = 256K
read_rnd_buffer_size = 512K
myisam_sort_buffer_size = 8M
tmp-table-size = 32M
max-heap-table-size = 32M
max-connections = 500
thread-cache-size = 50
open-files-limit = 65535
table-definition-cache = 1024
table-open-cache = 2048
query_cache_type = 1
query_cache_limit = 256K
query_cache_min_res_unit = 2k
query_cache_size = 80M
innodb-log-files-in-group = 2
innodb-log-file-size = 64M
innodb-flush-log-at-trx-commit = 1
innodb-file-per-table = 1
innodb_buffer_pool_size = 128M
innodb_additional_mem_pool_size = 2M
innodb_log_file_size = 5M
innodb_log_buffer_size = 8M
innodb_flush_log_at_trx_commit = 1
innodb_lock_wait_timeout = 50
The database consists of 20 InnoDB tables, each with 5-10 columns. The server has low traffic.
How can I optimize my settings to be suitable with EC2 t2.small (2GB RAM)?
You have innodb_log_file_size (and innodb_flush_log_at_trx_commit) twice in your config, once with dashes and once with underscores. MySQL treats the two spellings as the same option, and the later occurrence wins, so check which value is actually in effect with:
show variables like 'innodb_log_file_size';
You could try halving innodb_buffer_pool_size and query_cache_size. Also try query_cache_size = 0 and see whether performance is still OK.
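A minimal sketch of the de-duplicated InnoDB lines for a 2 GB instance (illustrative values only; keep a single spelling of each setting, and note that on older MySQL versions changing innodb_log_file_size requires a clean shutdown and removal of the old ib_logfile* files first):
innodb_buffer_pool_size = 64M
innodb_log_file_size = 64M
innodb_flush_log_at_trx_commit = 1
query_cache_size = 40M
Then re-run the SHOW VARIABLES check above to confirm which values are in effect.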