Continuous rise of mysqld memory consumption - mysql

Mysqld memory consumption rises continuously and never seems to be freed. It starts out at about 6GB but gradually climbs to around 10GB over a few weeks, yet SHOW ENGINE INNODB STATUS reports only about 4GB used by the buffer pool and 50MB by the dictionary.
This is MySQL 5.6.16 on a server with 12GB of memory. A few of the tables are partitioned and there are roughly 8000 .ibd files. Also, one new table is created each day.
I have tried to FLUSH TABLES with no success. The tables are closed but the memory does not get freed at all. In fact, more memory gets consumed.
Why is the memory being consumed?
And are there any known issues with memory not being freed when using partitioned tables?
my.cnf
query_cache_size = 512M
query_cache_limit = 16M
max_allowed_packet = 16M
table_open_cache = 1024
sort_buffer_size = 2M
net_buffer_length = 8K
read_buffer_size = 256K
read_rnd_buffer_size = 2M
myisam_sort_buffer_size = 1M
max_connections = 1024
thread_cache = 1024
tmp_table_size = 16M
max_heap_table_size = 16M
wait_timeout = 20
join_buffer_size = 256K
thread_cache_size = 50
table_definition_cache = 400
key_buffer_size = 256M

First, you should know that once MySQL uses memory, it never frees it. It keeps it around for the next query.
Here's my query for figuring out the maximum amount of memory my InnoDB DB will use:
SELECT ( @@key_buffer_size
       + @@query_cache_size
       + @@innodb_buffer_pool_size
       + @@innodb_log_buffer_size
       + @@max_allowed_packet
       + @@max_connections * ( @@read_buffer_size
                             + @@read_rnd_buffer_size
                             + @@sort_buffer_size
                             + @@join_buffer_size
                             + @@binlog_cache_size
                             + 2 * @@net_buffer_length
                             + @@thread_stack
                             + @@tmp_table_size )
       ) / (1024 * 1024 * 1024) AS MAX_MEMORY_GB;
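As a sanity check, here is that formula worked through with the values from the my.cnf above (a rough sketch: MySQL 5.6 defaults are assumed for variables not listed, and innodb_buffer_pool_size is guessed at 4G from the buffer-pool usage reported in the question):

```python
# Worst-case memory estimate for the asker's my.cnf.
M, K, G = 1 << 20, 1 << 10, 1 << 30

global_buffers = (
    256 * M      # key_buffer_size
    + 512 * M    # query_cache_size
    + 4 * G      # innodb_buffer_pool_size (assumed from the question)
    + 8 * M      # innodb_log_buffer_size (5.6 default)
    + 16 * M     # max_allowed_packet
)

per_connection = (
    256 * K      # read_buffer_size
    + 2 * M      # read_rnd_buffer_size
    + 2 * M      # sort_buffer_size
    + 256 * K    # join_buffer_size
    + 32 * K     # binlog_cache_size (default)
    + 2 * 8 * K  # net_buffer_length, counted twice
    + 192 * K    # thread_stack (default)
    + 16 * M     # tmp_table_size
)

max_memory_gb = (global_buffers + 1024 * per_connection) / G
print(round(max_memory_gb, 1))  # 25.5 -- far above the 12GB of RAM
```

With max_connections = 1024 the per-connection terms dominate, which is why memory keeps creeping up as more connections get used over the weeks.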
To see a breakdown of how much memory is being used for different things:
SELECT @@key_buffer_size/(1024*1024) AS key_buffer_size_IN_MB,
       @@query_cache_size/(1024*1024) AS query_cache_size_IN_MB,
       @@innodb_buffer_pool_size/(1024*1024) AS innodb_buffer_pool_size_IN_MB,
       @@innodb_log_buffer_size/(1024*1024) AS innodb_log_buffer_size_IN_MB,
       @@max_connections*@@read_buffer_size/(1024*1024) AS read_buffer_size_IN_MB,
       @@max_connections*@@read_rnd_buffer_size/(1024*1024) AS read_rnd_buffer_size_IN_MB,
       @@max_connections*@@sort_buffer_size/(1024*1024) AS sort_buffer_size_IN_MB,
       @@max_connections*@@join_buffer_size/(1024*1024) AS join_buffer_size_IN_MB,
       @@max_connections*@@binlog_cache_size/(1024*1024) AS binlog_cache_size_IN_MB,
       @@max_connections*@@thread_stack/(1024*1024) AS thread_stack_IN_MB,
       @@max_connections*@@tmp_table_size/(1024*1024) AS tmp_table_size_IN_MB,
       @@max_connections*@@net_buffer_length*2/(1024*1024) AS net_buffer_size_IN_MB;

Related

MySQL InnoDB Disk Writes increase suddenly after 2.5 hours

MySQL version = 5.7.31
We started noticing high CPU utilization on our DB server after 2.5 hours of heavy workload (roughly 800 selects per second). The DB was performing quite well, then all of a sudden InnoDB Disk Writes increased significantly, followed by InnoDB Disk Reads. The select count drops to zero at this point, making the application useless.
After about 15 minutes the DB starts working normally again.
Configuration as follows:
innodb_flush_method=O_DIRECT
innodb_log_file_size=1G
innodb_numa_interleave
innodb_buffer_pool_size=75G
key_buffer_size = 12G
max_allowed_packet = 16M
thread_stack = 192K
thread_cache_size = 8
tmp_table_size = 1024M
max_heap_table_size = 1024M
max_connections = 600
max_connect_errors = 10000
query_cache_limit = 1M
query_cache_size = 50M
htop: https://ibb.co/gwGSkc1 - (Before the issue)
iostat: https://ibb.co/YyJWkb9 - (Before the issue)
df -h : https://ibb.co/x25vg52
RAM 94G
CORE COUNT 32
SSD : /var/lib/mysql is mounted on a SSD Volume (Solution is hosted on open stack)
GLOBAL STATUS : https://pastebin.com/yC4FUYiE
GLOBAL Variables : https://pastebin.com/PfsYTRbm
PROCESS LIST : https://pastebin.com/TyA5KBDb
Rate Per Second = RPS
Suggestions to consider for your my.cnf [mysqld] section
innodb_lru_scan_depth=100 # from 1024 to conserve 90% of CPU cycles used for function
innodb_io_capacity=1500 # from 200 to use more of your available SSD IOPS
read_rnd_buffer_size=128K # from 256K to reduce handler_read_rnd_next RPS of 474,918
key_buffer_size=16M # from 12G; less than 1% of it is used, out of your 94G of available RAM
There are many more opportunities to improve your configuration.
Not much exciting in the settings:
Analysis of GLOBAL STATUS and VARIABLES:
Observations:
Version: 5.7.31
94 GB of RAM
Uptime = 17:36:15; some GLOBAL STATUS values may not be meaningful yet.
You are not running on Windows.
Running 64-bit version
You appear to be running entirely (or mostly) InnoDB.
The More Important Issues:
MyISAM is not used, so key_buffer_size = 12G is a waste of RAM. Change to 50M.
If you have SSD drives, increase innodb_io_capacity from 200 to 1000.
Several metrics point to inefficient queries. They may need better indexes or rewriting. See http://mysql.rjweb.org/doc.php/mysql_analysis#slow_queries_and_slowlog
Details and other observations:
( key_buffer_size ) = 12,288M / 96256M = 12.8% -- % of RAM used for key_buffer (for MyISAM indexes)
-- 20% is ok if you are not using InnoDB.
( (key_buffer_size - 1.2 * Key_blocks_used * 1024) ) = ((12288M - 1.2 * 9 * 1024)) / 96256M = 12.8% -- Percent of RAM wasted in key_buffer.
-- Decrease key_buffer_size (now 12884901888).
( Key_blocks_used * 1024 / key_buffer_size ) = 9 * 1024 / 12288M = 0.00% -- Percent of key_buffer used. High-water-mark.
-- Lower key_buffer_size (now 12884901888) to avoid unnecessary memory usage.
( (key_buffer_size / 0.20 + innodb_buffer_pool_size / 0.70) ) = ((12288M / 0.20 + 76800M / 0.70)) / 96256M = 177.8% -- Most of available ram should be made available for caching.
-- http://mysql.rjweb.org/doc.php/memory
( innodb_lru_scan_depth * innodb_page_cleaners ) = 1,024 * 4 = 4,096 -- Amount of work for page cleaners every second.
-- "InnoDB: page_cleaner: 1000ms intended loop took ..." may be fixable by lowering lru_scan_depth: Consider 1000 / innodb_page_cleaners (now 4). Also check for swapping.
( innodb_page_cleaners / innodb_buffer_pool_instances ) = 4 / 8 = 0.5 -- innodb_page_cleaners
-- Recommend setting innodb_page_cleaners (now 4) to innodb_buffer_pool_instances (now 8)
(Beginning to go away in 10.5)
( innodb_lru_scan_depth ) = 1,024
-- "InnoDB: page_cleaner: 1000ms intended loop took ..." may be fixed by lowering lru_scan_depth
( Innodb_buffer_pool_pages_free / Innodb_buffer_pool_pages_total ) = 1,579,794 / 4914600 = 32.1% -- Pct of buffer_pool currently not in use
-- innodb_buffer_pool_size (now 80530636800) is bigger than necessary?
( innodb_io_capacity_max / innodb_io_capacity ) = 2,000 / 200 = 10 -- Capacity: max/plain
-- Recommend 2. Max should be about equal to the IOPs your I/O subsystem can handle. (If the drive type is unknown 2000/200 may be a reasonable pair.)
( Innodb_os_log_written / (Uptime / 3600) / innodb_log_files_in_group / innodb_log_file_size ) = 138,870,272 / (63375 / 3600) / 2 / 1024M = 0.00367 -- Ratio
-- (see minutes)
( Uptime / 60 * innodb_log_file_size / Innodb_os_log_written ) = 63,375 / 60 * 1024M / 138870272 = 8,166 -- Minutes between InnoDB log rotations Beginning with 5.6.8, this can be changed dynamically; be sure to also change my.cnf.
-- (The recommendation of 60 minutes between rotations is somewhat arbitrary.) Adjust innodb_log_file_size (now 1073741824). (Cannot change in AWS.)
( innodb_flush_method ) = innodb_flush_method = O_DSYNC -- How InnoDB should ask the OS to write blocks. Suggest O_DIRECT or O_ALL_DIRECT (Percona) to avoid double buffering. (At least for Unix.) See chrischandler for caveat about O_ALL_DIRECT
( innodb_flush_neighbors ) = 1 -- A minor optimization when writing blocks to disk.
-- Use 0 for SSD drives; 1 for HDD.
( innodb_io_capacity ) = 200 -- I/O ops per second capable on disk . 100 for slow drives; 200 for spinning drives; 1000-2000 for SSDs; multiply by RAID factor.
( innodb_print_all_deadlocks ) = innodb_print_all_deadlocks = OFF -- Whether to log all Deadlocks.
-- If you are plagued with Deadlocks, turn this on. Caution: If you have lots of deadlocks, this may write a lot to disk.
( min( tmp_table_size, max_heap_table_size ) ) = (min( 1024M, 1024M )) / 96256M = 1.1% -- Percent of RAM to allocate when needing MEMORY table (per table), or temp table inside a SELECT (per temp table per some SELECTs). Too high may lead to swapping.
-- Decrease tmp_table_size (now 1073741824) and max_heap_table_size (now 1073741824) to, say, 1% of ram.
( character_set_server ) = character_set_server = latin1
-- Charset problems may be helped by setting character_set_server (now latin1) to utf8mb4. That is the future default.
( local_infile ) = local_infile = ON
-- local_infile (now ON) = ON is a potential security issue
( Created_tmp_disk_tables / Created_tmp_tables ) = 59,659 / 68013 = 87.7% -- Percent of temp tables that spilled to disk
-- Check slowlog
( tmp_table_size ) = 1024M -- Limit on size of MEMORY temp tables used to support a SELECT
-- Decrease tmp_table_size (now 1073741824) to avoid running out of RAM. Perhaps no more than 64M.
( (Com_insert + Com_update + Com_delete + Com_replace) / Com_commit ) = (53844 + 35751 + 1 + 0) / 35789 = 2.5 -- Statements per Commit (assuming all InnoDB)
-- Low: Might help to group queries together in transactions; High: long transactions strain various things.
( Select_range_check ) = 70,106 / 63375 = 1.1 /sec -- no good index
-- Find slow queries; check indexes.
( Select_scan ) = 2,393,389 / 63375 = 38 /sec -- full table scans
-- Add indexes / optimize queries (unless they are tiny tables)
( Select_scan / Com_select ) = 2,393,389 / 10449190 = 22.9% -- % of selects doing full table scan. (May be fooled by Stored Routines.)
-- Add indexes / optimize queries
( Sort_merge_passes ) = 18,868 / 63375 = 0.3 /sec -- Hefty sorts
-- Increase sort_buffer_size (now 262144) and/or optimize complex queries.
( slow_query_log ) = slow_query_log = OFF -- Whether to log slow queries. (5.1.12)
( long_query_time ) = 10 -- Cutoff (Seconds) for defining a "slow" query.
-- Suggest 2
( log_slow_slave_statements ) = log_slow_slave_statements = OFF -- (5.6.11, 5.7.1) By default, replicated statements won't show up in the slowlog; this causes them to show.
-- It can be helpful in the slowlog to see writes that could be interfering with Replica reads.
( Aborted_connects / Connections ) = 1,057 / 2070 = 51.1% -- Perhaps a hacker is trying to break in? (Attempts to connect)
( max_connect_errors ) = 10,000 -- A small protection against hackers.
-- Perhaps no more than 200.
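To put the tmp_table_size warnings above in perspective, the theoretical worst case with this configuration is easy to compute (a sketch; in practice only a fraction of connections build large temp tables at once, and the full size is not pre-allocated):

```python
G = 1 << 30

max_connections = 600
tmp_table_size = 1 * G  # 1024M in the my.cnf above

# Floor for the worst case: one maximum-size in-memory temp table per
# connection (a single query can actually create several).
worst_case_gb = max_connections * tmp_table_size / G
print(worst_case_gb)  # 600.0 -- versus 94GB of RAM on the server
```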
You have the Query Cache half-off. You should set both query_cache_type = OFF and query_cache_size = 0 . There is (according to a rumor) a 'bug' in the QC code that leaves some code on unless you turn off both of those settings.
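Per that advice, fully disabling the query cache takes both settings in the [mysqld] section of my.cnf (a minimal sketch; a restart releases the allocation cleanly):

```ini
[mysqld]
query_cache_type = OFF   # stop checking/inserting on every statement
query_cache_size = 0     # release the 50M cache allocation
```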
Abnormally small:
Innodb_os_log_fsyncs = 0
innodb_buffer_pool_chunk_size = 128MB
innodb_online_alter_log_max_size = 128MB
innodb_sort_buffer_size = 1.05e+6
Abnormally large:
(Com_select + Qcache_hits) / (Com_insert + Com_update + Com_delete + Com_replace) = 116
Com_create_procedure = 0.11 /HR
Com_drop_procedure = 0.11 /HR
Com_show_charsets = 0.68 /HR
Com_show_plugins = 0.11 /HR
Created_tmp_files = 0.6 /sec
Innodb_buffer_pool_bytes_data = 838452 /sec
Innodb_buffer_pool_pages_data = 3.24e+6
Innodb_buffer_pool_pages_free = 1.58e+6
Innodb_buffer_pool_pages_total = 4.91e+6
Key_blocks_unused = 1.02e+7
Ssl_default_timeout = 7,200
Ssl_session_cache_misses = 10
Ssl_verify_depth = 1.84e+19
Ssl_verify_mode = 5
max_heap_table_size = 1024MB
min(max_heap_table_size, tmp_table_size) = 1024MB
Abnormal strings:
ft_boolean_syntax = + -><()~*:&
innodb_fast_shutdown = 1
innodb_numa_interleave = ON
optimizer_trace = enabled=off,one_line=off
optimizer_trace_features = greedy_search=on, range_optimizer=on, dynamic_range=on, repeated_subselect=on
slave_rows_search_algorithms = TABLE_SCAN,INDEX_SCAN

MariaDB / Columnstore engine memory getting choked

We have installed MariaDB with the Columnstore engine, and for the last few weeks we have been facing a memory choking issue: memory gets exhausted and all our DML/DDL operations get stuck. Restarting the services fixes it temporarily.
Below are the stats:
              total   used   free   shared   buff/cache   available
Mem:             15      2      7        0            5          12
Swap:             4      0      4
[mysqld]
port = 3306
socket = /opt/evolv/mariadb/columnstore/mysql/lib/mysql/mysql.sock
datadir = /opt/evolv/mariadb/columnstore/mysql/db
skip-external-locking
key_buffer_size = 512M
max_allowed_packet = 1M
table_cache = 512
sort_buffer_size = 64M
read_buffer_size = 64M
read_rnd_buffer_size = 512M
myisam_sort_buffer_size = 64M
thread_cache_size = 8
query_cache_size = 0
# Try number of CPU's*2 for thread_concurrency
#thread_concurrency = 8
thread_stack = 512K
lower_case_table_names=1
group_concat_max_len=512
infinidb_use_import_for_batchinsert=1
# You can set .._buffer_pool_size up to 50 - 80 %
# of RAM but beware of setting memory usage too high
innodb_buffer_pool_size = 8192M
#innodb_additional_mem_pool_size = 20M
# Set .._log_file_size to 25 % of buffer pool size
#innodb_log_file_size = 100M
#innodb_log_buffer_size = 8M
#innodb_flush_log_at_trx_commit = 1
#innodb_lock_wait_timeout = 50
Here's an analysis of the VARIABLES and (suspicious) GLOBAL STATUS; nothing exciting:
Observations:
Version: 10.1.26-MariaDB
15 GB of RAM
Uptime = 03:04:25; Please rerun SHOW GLOBAL STATUS after several hours.
Are you sure this was a SHOW GLOBAL STATUS ?
You are not running on Windows.
Running 64-bit version
You appear to be running entirely (or mostly) InnoDB.
The More Important Issues:
Uptime = 03:04:25; Please rerun SHOW GLOBAL STATUS after several hours.
Are you sure this was a SHOW GLOBAL STATUS ?
key_buffer_size is excessively large (3G). If you don't need MyISAM for anything, set it to 50M.
Check infinidb_um_mem_limit to see if it makes sense for your application.
Suggest lowering innodb_buffer_pool_size to 2G until the "choking" is figured out.
Details and other observations:
( (key_buffer_size - 1.2 * Key_blocks_used * 1024) / _ram ) = (3072M - 1.2 * 0 * 1024) / 15360M = 20.0% -- Percent of RAM wasted in key_buffer.
-- Decrease key_buffer_size.
( Key_blocks_used * 1024 / key_buffer_size ) = 0 * 1024 / 3072M = 0 -- Percent of key_buffer used. High-water-mark.
-- Lower key_buffer_size to avoid unnecessary memory usage.
( innodb_buffer_pool_size / _ram ) = 6144M / 15360M = 40.0% -- % of RAM used for InnoDB buffer_pool
( Innodb_buffer_pool_pages_free * 16384 / innodb_buffer_pool_size ) = 392,768 * 16384 / 6144M = 99.9% -- buffer pool free
( innodb_print_all_deadlocks ) = innodb_print_all_deadlocks = OFF -- Whether to log all Deadlocks.
-- If you are plagued with Deadlocks, turn this on. Caution: If you have lots of deadlocks, this may write a lot to disk.
( local_infile ) = local_infile = ON
-- local_infile = ON is a potential security issue
( expire_logs_days ) = 0 -- How soon to automatically purge binlog (after this many days)
-- Too large (or zero) = consumes disk space; too small = need to respond quickly to network/machine crash.
(Not relevant if log_bin = OFF)
( long_query_time ) = 5 -- Cutoff (Seconds) for defining a "slow" query.
-- Suggest 2
Abnormally large:
read_buffer_size = 32MB
Acl_database_grants = 780
Acl_proxy_users = 4
Acl_users = 281
Columnstore.xml
95% of all memory??
<MemoryCheckPercent>95</MemoryCheckPercent> <!-- Max real memory to limit growth of buffers to -->
<DataFileLog>OFF</DataFileLog>
I guess this is not relevant, since it is commented out??
<!-- enable if you want to limit how much memory may be used for hdfs read/write memory buffers.
<hdfsRdwrBufferMaxSize>8G</hdfsRdwrBufferMaxSize>
-->
Keep in mind that MySQL, other than Columnstore, is consuming a lot of memory:
<TotalUmMemory>25%</TotalUmMemory>
<TotalPmUmMemory>10%</TotalPmUmMemory>
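Also worth scaling out: the per-query read buffers in the posted my.cnf are large enough to matter here (a rough sketch added for scale; MySQL allocates these per query that needs them, not permanently per connection):

```python
M, G = 1 << 20, 1 << 30

# Per-query buffers from the posted my.cnf
sort_buffer_size = 64 * M
read_buffer_size = 64 * M
read_rnd_buffer_size = 512 * M  # unusually large

per_query_mb = (sort_buffer_size + read_buffer_size + read_rnd_buffer_size) / M
print(per_query_mb)               # 640.0 MB potentially per busy connection

# Ten connections running such queries at once on a 15GB box:
print(10 * per_query_mb * M / G)  # 6.25 (GB)
```

With Columnstore also claiming up to 95% of real memory via MemoryCheckPercent, a handful of concurrent queries could plausibly produce the choking described.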

Tuning MySQL my.cnf for Many Users

I'm looking for some help tuning our my.cnf file to handle many concurrent users (about 20 orders/minute). This particular site runs WordPress with WooCommerce.
After a lot of reading online, I've come up with the settings below. The server is Debian 8 with 12 CPUs and 48GB RAM.
query_cache_type = 1
query_cache_limit = 256K
query_cache_min_res_unit = 2k
query_cache_size = 80M
key_buffer = 2G
max_allowed_packet = 512M
thread_cache_size = 256K
tmp_table_size = 4G
max_heap_table_size = 4G
table_cache = 1024
table_definition_cache = 1024
myisam_recover = FORCE,BACKUP
max_connections = 300
wait_timeout = 120
connect_timeout = 120
interactive_timeout = 120
innodb_flush_method = O_DIRECT
innodb_flush_log_at_trx_commit = 2
innodb_file_per_table = 1
innodb_buffer_pool_size = 2G
innodb_io_capacity = 1000
innodb_read_io_threads = 32
innodb_thread_concurrency = 0
innodb_write_io_threads = 32
It seems to be running pretty well for now. Any additional thoughts? Thanks for your input!
query_cache_type = 1
query_cache_limit = 256K
query_cache_min_res_unit = 2k
query_cache_size = 80M
See https://dom.as/tech/query-cache-tuner/
key_buffer = 2G
Key buffer is only for MyISAM. You shouldn't use MyISAM.
max_allowed_packet = 512M
thread_cache_size = 256K
tmp_table_size = 4G
max_heap_table_size = 4G
4G is probably way too high for the tmp table and heap table sizes. Keep in mind multiple threads can create temp tables concurrently.
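The worst-case arithmetic behind that warning, as a sketch (real usage is lower because the full tmp_table_size is not pre-allocated):

```python
M, G = 1 << 20, 1 << 30
max_connections = 300

# Worst case if every connection filled one maximum-size in-memory temp table
worst_at_4g = max_connections * 4 * G / G
worst_at_64m = max_connections * 64 * M / G

print(worst_at_4g)   # 1200.0 GB -- 25x the 48GB of RAM
print(worst_at_64m)  # 18.75 GB with a more conservative 64M cap
```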
table_cache = 1024
table_definition_cache = 1024
Probably overkill.
myisam_recover = FORCE,BACKUP
Also used only for MyISAM.
max_connections = 300
What does show global status like 'max_used_connections' say? Is it close to max_connections?
wait_timeout = 120
connect_timeout = 120
interactive_timeout = 120
innodb_flush_method = O_DIRECT
innodb_flush_log_at_trx_commit = 2
innodb_file_per_table = 1
innodb_buffer_pool_size = 2G
Fine but with 48G of RAM, you can probably increase the buffer pool size.
Run show engine innodb status and look for these lines:
Buffer pool size 131072
Free buffers 0
Database pages 128000
Is your buffer pool always pegged full? If so, increase its size.
What's the total size of your database? You don't need the buffer pool to be larger than the total size of your data+indexes, but large enough to hold the frequently-accessed pages would be good.
select round(sum(data_length+index_length)/1024/1024, 2) as mb
from information_schema.tables where engine='InnoDB';
innodb_io_capacity = 1000
innodb_read_io_threads = 32
innodb_thread_concurrency = 0
innodb_write_io_threads = 32
The io capacity might be greater than the ability of your disks to keep up. You didn't describe what your disk system is.
The io threads is way overkill for a Wordpress site with the traffic you describe. Run show engine innodb status and look for this line:
Pending normal aio reads: [0, 0, 0, 0] , aio writes: [0, 0, 0, 0] ,
If you always see 0's on that line, you don't need more than the default 4 & 4 io threads.

MySQL 5.5 InnoDB INSERT/UPDATE very slow

I use MySQL 5.5 and CentOS 7. Some tables in my database are InnoDB, and I optimized my my.cnf file based on articles I read on the internet. I use TRANSACTION and COMMIT, but INSERT and UPDATE are very slow, as you can see.
I can't use MyISAM because these tables constantly receive inserts, updates and reads at the same time.
QUERY PHOTO
My.cnf file
# The following options will be passed to all MySQL clients
[client]
port = 3306
socket = /var/lib/mysql/mysql.sock
[mysqld]
#innodb_force_recovery=6
user = mysql
default-storage-engine = InnoDB
socket = /var/lib/mysql/mysql.sock
pid-file = /var/lib/mysql/mysql.pid
datadir=/var/lib/mysql/
log-error=/var/lib/mysql/server.mysql.err
symbolic-links=0
tmpdir=/var/tmp
skip-external-locking
table_cache = 2000
key_buffer_size=20G
join_buffer_size = 4M
sort_buffer_size = 2M
read_buffer_size = 2M
read_rnd_buffer_size = 1M
myisam_sort_buffer_size=2M
thread_cache_size = 512
query_cache_limit = 1G
query_cache_size = 40M
query_cache_type=1
thread_stack = 256K
tmp_table_size = 128M
max_heap_table_size = 128M
open_files_limit=65535
#thread_concurrency = 10
max_connect_errors=1
connect_timeout=60
interactive_timeout = 60
lock_wait_timeout=60
wait_timeout = 30
max_connections = 1000
slow_query_log=1
long_query_time=1
slow-query-log-file=/var/log/mysql-slow.log
#log-queries-not-using-indexes
innodb_buffer_pool_size=32G
innodb_additional_mem_pool_size=64M
innodb_data_file_path=ibdata1:100M:autoextend
innodb_log_buffer_size=128M
innodb-log-files-in-group = 2
innodb_change_buffering=all
innodb_flush_method=O_DIRECT
innodb_flush_log_at_trx_commit=2
#innodb_thread_concurrency=10
innodb_read_io_threads = 64
innodb_write_io_threads = 64
innodb_file_per_table=1
innodb_lock_wait_timeout = 60
innodb_table_locks=0
innodb_open_files=65535
innodb_io_capacity=2000
#innodb_doublewrite=0
#innodb_support_xa=0
[mysqldump]
max_allowed_packet=2G
quick
[mysql]
no-auto-rehash
[isamchk]
key_buffer = 128M
sort_buffer_size = 4M
read_buffer = 4M
write_buffer = 4M
[myisamchk]
tmpdir=/tmp
key_buffer_size=128M
sort_buffer_size=4M
read_buffer=4M
write_buffer=4M
[mysqlhotcopy]
interactive-timeout
And my server properties are
CPU
Intel(R) Xeon(R) CPU E5-1620 v2 @ 3.70GHz
Cores : 8
Cache : 10240KB
RAM
64 GB
Disks
3 x 160 GB SSD
query_cache_limit = 1G -- should be less than the size.
query_cache_size = 40M -- this is reasonable.
key_buffer_size=20G -- too big; change to 6G. Note: only used for MyISAM indexes.
-- if you are not using MyISAM, then drop to 10M
slow_query_log=1
What does pt-query-digest point out as the 'worst' 3 queries? Show them to us, together with SHOW CREATE TABLE and (for SELECTs) EXPLAIN SELECT ....
It's not necessary to keep tuning an already-optimized my.cnf file if your queries are slow. You may instead need to optimize the queries themselves with proper indexes and joins.

Optimize MySQL setting for AWS EC2 t2.small

I have a web server with Apache and MySQL running on AWS EC2 t2.small with Windows 2012 Server. AWS EC2 t2.small characteristics:
RAM 2 GB (used 65%)
1 CPU 2.50 GHz (used 1%)
Currently the MySQL process (mysqld.exe) uses 400 MB of RAM (too much for me).
MySQL current settings are (my.ini):
key_buffer = 16M
max_allowed_packet = 16M
sort_buffer_size = 512K
net_buffer_length = 8K
read_buffer_size = 256K
read_rnd_buffer_size = 512K
myisam_sort_buffer_size = 8M
tmp-table-size = 32M
max-heap-table-size = 32M
max-connections = 500
thread-cache-size = 50
open-files-limit = 65535
table-definition-cache = 1024
table-open-cache = 2048
query_cache_type = 1
query_cache_limit = 256K
query_cache_min_res_unit = 2k
query_cache_size = 80M
innodb-log-files-in-group = 2
innodb-log-file-size = 64M
innodb-flush-log-at-trx-commit = 1
innodb-file-per-table = 1
innodb_buffer_pool_size = 128M
innodb_additional_mem_pool_size = 2M
innodb_log_file_size = 5M
innodb_log_buffer_size = 8M
innodb_flush_log_at_trx_commit = 1
innodb_lock_wait_timeout = 50
The database consists of 20 InnoDB tables with 5-10 columns each. The server has low traffic.
How can I optimize my settings to be suitable with EC2 t2.small (2GB RAM)?
You have innodb_log_file_size twice in your config (64M with dashes, 5M with underscores). MySQL treats dashes and underscores in option names interchangeably, so the later occurrence wins; check which value actually took effect with:
show variables like 'innodb_log_file_size';
You could try halving innodb_buffer_pool_size and query_cache_size. Also check whether performance is still OK with query_cache_size=0.
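A minimal my.ini sketch of that experiment (halved values as a starting point to measure against, not a recommendation):

```ini
innodb_buffer_pool_size = 64M   # half of the current 128M
query_cache_size = 40M          # half of the current 80M; also try 0
```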