What is the recommended configuration for a dedicated server with 128 GB of RAM?
my.cnf
[mysqld]
datadir="/home/mysql"
default-storage-engine=MyISAM
innodb_file_per_table=1
max_allowed_packet=268435456
innodb_buffer_pool_size = 94G
innodb_buffer_pool_instances = 12
innodb_open_files=20000
innodb_io_capacity=10000
innodb_io_capacity_max=25000
innodb_read_io_threads=8
innodb_write_io_threads=8
innodb_flush_log_at_trx_commit=2
innodb_max_dirty_pages_pct = 90
open_files_limit=100000
interactive_timeout=60
wait_timeout=60
max_connections=20000
max_connect_errors=20000
tmp_table_size=1G
max_heap_table_size=1G
# MyISAM
key_buffer_size = 1G
join_buffer_size = 10M
sort_buffer_size=256K
read_buffer_size=64K
read_rnd_buffer_size=256K
slow-query-log
table_open_cache = 5000
query_cache_type = 1
query_cache_limit = 10M
query_cache_min_res_unit = 1M
query_cache_size = 256M
thread_cache_size = 4
skip_name_resolve=ON
MySQLTuner output:
>> MySQLTuner 1.4.0 - Major Hayden <major@mhtx.net>
>> Bug reports, feature requests, and downloads at http://mysqltuner.com/
>> Run with '--help' for additional options and output filtering
[!!] Currently running unsupported MySQL version 10.0.17-MariaDB-log
[OK] Operating on 64-bit architecture
-------- Storage Engine Statistics -------------------------------------------
[--] Status: +ARCHIVE +Aria +BLACKHOLE +CSV +FEDERATED +InnoDB +MRG_MyISAM
[--] Data in MyISAM tables: 603M (Tables: 470)
[--] Data in InnoDB tables: 93G (Tables: 563)
[--] Data in PERFORMANCE_SCHEMA tables: 0B (Tables: 52)
[--] Data in MEMORY tables: 60K (Tables: 3)
[!!] Total fragmented tables: 216
-------- Performance Metrics -------------------------------------------------
[--] Up for: 16h 58m 38s (22M q [366.226 qps], 920K conn, TX: 8B, RX: 1B)
[--] Reads / Writes: 35% / 65%
[--] Total buffers: 96.1G global + 10.8M per thread (20000 max threads)
[!!] Maximum possible memory usage: 307.9G (244% of installed RAM)
[OK] Slow queries: 0% (0/22M)
[OK] Highest usage of available connections: 0% (106/20000)
[OK] Key buffer size / total MyISAM indexes: 1.0G/589.3M
[OK] Key buffer hit rate: 99.7% (28M cached / 74K reads)
[OK] Query cache efficiency: 22.8% (1M cached / 6M selects)
[!!] Query cache prunes per day: 1568144
[OK] Sorts requiring temporary tables: 0% (0 temp sorts / 46K sorts)
[OK] Temporary tables created on disk: 5% (3K on disk / 66K total)
[OK] Thread cache hit rate: 98% (14K created / 920K connections)
[OK] Table cache hit rate: 104% (1K open / 1K opened)
[OK] Open file limit used: 1% (1K/100K)
[OK] Table locks acquired immediately: 99% (14M immediate / 14M locks)
[OK] InnoDB buffer pool / data size: 94.0G/93.2G
[OK] InnoDB log waits: 0
I have big databases. My CPU usage:
http://i.stack.imgur.com/4azch.png
http://i.stack.imgur.com/mm7oZ.png
I have about 5k users.
This query:
SELECT * FROM users ORDER BY RAND();
is very slow; the users table has 900k+ rows.
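As an aside on that query: ORDER BY RAND() has to read and sort all 900k rows on every execution, so it will stay slow no matter how the server is tuned. A commonly used workaround, sketched here under the assumption (not shown in the question) that users has a densely populated AUTO_INCREMENT column named id, is to jump to a random id instead of sorting:
-- picks one roughly random row using the index on id instead of a full sort
SELECT u.*
FROM users AS u
JOIN (SELECT FLOOR(RAND() * (SELECT MAX(id) FROM users)) AS rnd) AS r
  ON u.id >= r.rnd
ORDER BY u.id
LIMIT 1;
Gaps in the id sequence skew the distribution slightly; run it several times if several random rows are needed.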
The "Maximum possible memory usage: 307.9G (244% of installed RAM)" value is bogus. It is computed from a combination of worst-case situations, and it does not even include all the cases!
This, however, is ludicrous:
max_connections = 20000
[OK] Highest usage of available connections: 0% (106/20000)
Rarely is even 2000 a sensible number. Lower it. If you do hit max_connections, then there are problems elsewhere. Anyway, notice that Max_used_connections is only 106. (106 is kinda high, but not necessarily a problem.)
tmp_table_size=1G
max_heap_table_size=1G
If you were running 20K SELECTs, each of which needed 3 tmp tables, that could add up to 60TB of RAM being needed! So, lower these two settings as extra protection against blowing out RAM. Swapping is terrible for MySQL.
[!!] Total fragmented tables: 216
Ignore that; tables are often fragmented; taking action is not worth it.
[OK] Slow queries: 0% (0/22M)
Lower long_query_time to, say, 2 (seconds).
[!!] Query cache prunes per day: 1568144
Sounds like turning the Query cache ON (1) is hurting.
[OK] Key buffer size / total MyISAM indexes: 1.0G/589.3M
Seriously consider changing to InnoDB.
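A sketch of how that conversion could be done, one table at a time (the information_schema query is standard; users is simply the table named in the question, and FULLTEXT indexes or application code that assumes MyISAM behaviour should be checked first):
-- list the tables that are still MyISAM
SELECT TABLE_SCHEMA, TABLE_NAME
FROM information_schema.TABLES
WHERE ENGINE = 'MyISAM'
  AND TABLE_SCHEMA NOT IN ('mysql', 'information_schema', 'performance_schema');
-- convert one table (rebuilds the table; do it during a quiet period)
ALTER TABLE users ENGINE=InnoDB;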
Something like this?
[mysqld]
datadir = "/home/mysql"
default-storage-engine = MyISAM
innodb_file_per_table = 1
max_allowed_packet = 268435456
innodb_buffer_pool_size = 94G
innodb_buffer_pool_instances = 12
innodb_open_files = 20000
innodb_io_capacity = 10000
innodb_io_capacity_max = 25000
innodb_read_io_threads = 8
innodb_write_io_threads = 8
innodb_flush_log_at_trx_commit = 2
innodb_max_dirty_pages_pct = 90
open_files_limit = 100000
interactive_timeout = 60
wait_timeout = 60
max_connections = 20000
max_connect_errors = 20000
tmp_table_size = 60M
max_heap_table_size = 60M
# MyISAM
key_buffer_size = 1G
join_buffer_size = 10M
sort_buffer_size = 256K
read_buffer_size = 64K
read_rnd_buffer_size = 256K
slow-query-log
table_open_cache = 5000
query_cache_type = 1
query_cache_limit = 10M
query_cache_min_res_unit = 1M
query_cache_size = 512M
log_queries_not_using_indexes = 0
long_query_time = 2
thread_cache_size = 4
skip_name_resolve = ON
I use a VPS with 56 CPU cores and 64 GB of memory with a MySQL database, but the CPU usage is always high for my apps (around 10,000 users).
This is my mysqld.cnf. What's wrong with my settings?
innodb_buffer_pool_size=50G
innodb_change_buffering=all
innodb_log_file_size = 3125M
innodb_log_buffer_size = 3125M
innodb_file_per_table = ON
innodb_log_files_in_group =4
innodb_flush_method = O_DSYNC
innodb_lock_wait_timeout = 50
innodb_buffer_pool_instances = 50
innodb_flush_log_at_trx_commit = 0
innodb_thread_concurrency=112
innodb_stats_on_metadata = OFF
innodb_thread_sleep_delay=1000
innodb_purge_threads=8
innodb_read_io_threads = 32
innodb_write_io_threads = 32
innodb_io_capacity = 5000
innodb_io_capacity_max=15000
key_buffer_size = 4G
max_allowed_packet = 1G
thread_stack = 5M
sort_buffer_size = 50M
read_buffer_size = 50M
read_rnd_buffer_size = 20M
myisam_sort_buffer_size = 20M
join_buffer_size = 1G
myisam-recover-options = BACKUP
max_connections = 1000
max_user_connections = 500
thread_cache_size = 1000
query_cache_limit = 0
query_cache_size = 0
long_query_time = 10
expire_logs_days = 5
max_binlog_size = 200M
innodb_log_buffer_size - I would keep that under 1% of RAM. (innodb_log_file_size looks OK.)
innodb_log_files_in_group -- there is some evidence that more than 2 degrades performance.
innodb_flush_method -- this depends on MySQL version and disk filesystem type. (Your choice is rarely picked by other DBAs.)
Did you also set innodb_adaptive_max_sleep_delay? I see that MariaDB abandoned innodb_thread_sleep_delay in 10.5 -- either it was obviated by some improvement, or it was deemed not useful. I don't know MySQL's stand on the two.
innodb_io_capacity_max=15000 -- Do you have a super-duper SSD?
key_buffer_size = 4G: Unless you are using MyISAM (you should not be), set this to only 50M.
thread_stack -- Leave at the default value
long_query_time = 1 to make better use of the slowlog. After a day, analyze it with pt-query-digest.
But the real way to deal with high CPU (or Load Average) is to find the 'worst' queries (according to the slowlog) and work on improving them. (Random tuning of settings can only get you into trouble.)
For further analysis of the settings, please provide SHOW GLOBAL STATUS; and SHOW VARIABLES; after running at least a day. ( http://mysql.rjweb.org/doc.php/mysql_analysis#tuning )
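A sketch of gathering that data (slow_query_log and long_query_time are dynamic, so this can be done without a restart; mirror the values in my.cnf so they survive the next restart):
SET GLOBAL slow_query_log = ON;
SET GLOBAL long_query_time = 1;
-- after at least a day of normal traffic, collect the output of:
SHOW GLOBAL STATUS;
SHOW GLOBAL VARIABLES;
Then feed the slow log file to pt-query-digest as described above.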
Intro:
I have a Debian 8 VPS with SSD and 512MB RAM (1024MB burst) and use it only for MySQL. I have turned off all unnecessary services and given every available resource to the db and system. I have only 5 workstations. The ping and network are stable.
The main problem is that if I do 10 consecutive queries like this:
SELECT ...,
       (SELECT IFNULL(SUM(Qtty), 0) FROM operations WHERE blablabla)
FROM (goods LEFT JOIN store ON goods.ID = store.GdID)
WHERE Deleted <> -1
GROUP BY goods.ID;
Randomly, 5-6 of them execute in about 2s, but the rest take 8s or more! I see no reason for this behavior. When I activate the slow_query_log, it contains just this same query, nothing else.
My InnoDB data is 98MB. Is there any reason to give more RAM to innodb_buffer_pool_size, or is 128M enough?
innodb_log_file_size is supposed to be 25%-50% (of the buffer pool); which size would be better for me?
The tmp dir is tmpfs, so if tmp tables are placed there, is there any reason to give tmp_table_size more RAM, or is 1M, for example, enough?
table_open_cache: I have about 200 tables; is there any reason to make it 20000, for example, as mysqltuner advises me?
Can I tune anything else?
my.cnf
skip-external-locking
skip-name-resolve
performance_schema = OFF
thread_stack = 192K
thread_cache_size = 10 #this is managed globally and is equal to max_connections
max_connections = 10
max_connect_errors = 10
connect_timeout = 20
wait_timeout = 20
interactive_timeout = 20
sql_mode = TRADITIONAL
default_storage_engine = InnoDB
innodb_buffer_pool_instances = 1
innodb_buffer_pool_size = 250M
innodb_log_file_size = 100M
innodb_strict_mode = ON
innodb_flush_method = O_DIRECT
innodb_flush_neighbors = 0
innodb_read_io_threads = 4
innodb_write_io_threads = 4
innodb_io_capacity = 2000
innodb_io_capacity_max = 3000
key_buffer = 128K
delay_key_write = ON
tmp_table_size = 10M # 32M = 53% temp
max_heap_table_size = 10M
query_cache_type = 0
query_cache_size = 0
table_open_cache = 5000
table_definition_cache = 3000
Suggestions to consider for your my.cnf [mysqld] section
max_connections=20 # from 10 to affect thread_cache_size
. because 8 threads are used by MySQL internally + 5 concurrent client connections = 13, which leaves you a few extra, please
tmp_table_size=16M # from 10M to reduce 41% temp tables created on disk
max_heap_table_size=16M # from 10M should always be = tmp_table_size
innodb_read_io_threads=64 # from 4 to let the ponies run
innodb_write_io_threads=64 # from 4 to push on through
For additional assistance, please see my profile; click on Network Profile for contact info.
Again, thanks for the detailed answer, and sorry for the delay. Unfortunately, I travel often without internet, and contact by Skype is almost impossible.
I noticed something that may help: often, when running even very small queries, the server responds with "ERROR 2006 (HY000): MySQL server has gone away". If I repeat the operation, everything is fine. Is the problem on the server rather than in the db?
The db is about 100MB, so I increased max_allowed_packet to 512MB and the InnoDB io_threads to 64. I tried increasing net_read_timeout, net_write_timeout, wait_timeout and interactive_timeout to 28800, and there is an improvement.
These are my timeouts now; does it make sense to change anything?
connect_timeout 120
delayed_insert_timeout 300
innodb_flush_log_at_timeout 1
innodb_lock_wait_timeout 50
innodb_rollback_on_timeout OFF
interactive_timeout 28800
lock_wait_timeout 31536000
net_read_timeout 30
net_write_timeout 60
slave_net_timeout 3600
thread_pool_idle_timeout 60
wait_timeout 28800
We have a dedicated database server (MariaDB 5.5.31) hosting the database used by our Java application (which accesses the db via Hibernate from another server). While the database itself is using about 12 GB of RAM, the operating system (CentOS 6.4) is caching around 25 GB in addition.
Can you give me any advice on how we can influence this behaviour?
While most of our 100 tables have fewer than 1000 rows, we have some tables with over 10 million. These tables experience lots of reads and writes.
These are our db settings:
[server]
[mysqld]
port = 3306
socket = /var/lib/mysql/mysql.sock
character-set-server = utf8
collation-server=utf8_general_ci
skip-external-locking
back_log = 50
max_connections = 100
max_connect_errors = 10
table_open_cache = 2048
max_allowed_packet = 16M
max_heap_table_size = 64M
read_buffer_size = 2M
read_rnd_buffer_size = 16M
sort_buffer_size = 8M
join_buffer_size = 16M
thread_cache_size = 8
thread_concurrency = 8
query_cache_size = 128M
query_cache_limit = 2M
thread_stack = 240K
transaction_isolation = REPEATABLE-READ
tmp_table_size = 64M
binlog_cache_size = 1M
log-bin=mysql-bin
binlog_format=mixed
expire_logs_days = 2
#*** MyISAM Specific options
key_buffer_size = 384M
bulk_insert_buffer_size = 64M
myisam_sort_buffer_size = 128M
myisam_max_sort_file_size = 10G
myisam_repair_threads = 1
myisam_recover
# *** INNODB Specific options ***
innodb_additional_mem_pool_size = 16M
innodb_buffer_pool_size = 10G
innodb_data_home_dir = /var/lib/mysql
innodb_write_io_threads = 8
innodb_read_io_threads = 8
innodb_thread_concurrency = 16
innodb_flush_log_at_trx_commit = 1
innodb_log_buffer_size = 8M
innodb_max_dirty_pages_pct = 90
innodb_lock_wait_timeout = 120
[embedded]
[mysqld-5.5]
[mariadb]
[mariadb-5.5]
If the tables with millions of records are transactional tables, then these are the foremost things to do:
List the child tables for those tables.
Discuss with the client and agree on a retention plan, i.e., only 1/1.5/2 years of data to be kept in the table.
If there is a lot of child-table data that can be backed up along with it, there are 2 options:
a) Have one denormalized table for the backup data
b) Have one backup table per source table, named original_table_name_bk
There should be a backup process planned periodically, say every 3 months, which runs (preferably during a lean period) and moves data from the transactional table to the backup table, as sketched below.
Now you have both historical and current data in your database without impacting the performance of transactional operations.
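A minimal sketch of that quarterly move (the table and column names here are hypothetical placeholders; the real retention window is whatever was agreed with the client):
CREATE TABLE IF NOT EXISTS orders_bk LIKE orders;
-- copy rows that fall outside the retention window, then remove them
INSERT INTO orders_bk
SELECT * FROM orders
WHERE order_date < NOW() - INTERVAL 2 YEAR;
DELETE FROM orders
WHERE order_date < NOW() - INTERVAL 2 YEAR;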
I'm having trouble tuning my MySQL configuration to maximize the speed of insert and update queries.
The problem occurs when we insert the daily data, approximately half a million records every day; the job runs for minutes before it completes.
While it was performing the job, I checked and found that it was using less than 5% of the CPU and half of the memory. My question is: how can I increase the speed by getting MySQL to use all available resources?
Thank you.
Performance
Insert/Update is around 2,000-4,000 records per second on both MyISAM and InnoDB tables
Table#1
Engine: MyISAM
Columns : 21
Existed Rows : 5,400,000
Key : One Unique key on 7 columns and One Primary Key
Table#2
Engine: InnoDB
Columns : 14
Existed Rows : 1,500,000
Key : One Primary Key, One Unique Key on 6 columns, Two Indexes
Insert Method
LOAD DATA LOCAL INFILE
Hardware Specifications
2 x Intel Xeon E5-2640v2 2.1GHz, 20M Cache, 7.2GT/s
RAM 16GB
2 x HDD 300GB 15K RPM,6Gbps SAS 2.5
my.cnf Configuration
[mysqld]
local-infile=1
max_connections = 600
max_user_connections=1000
key_buffer_size = 3584M
myisam_sort_buffer_size = 64M
read_buffer_size = 256K
table_open_cache = 5000
thread_cache_size = 384
wait_timeout = 20
connect_timeout = 10
tmp_table_size = 256M
max_heap_table_size = 128M
max_allowed_packet=268435456
net_buffer_length = 16384
max_connect_errors = 10
concurrent_insert = 2
read_rnd_buffer_size = 786432
bulk_insert_buffer_size = 8M
query_cache_limit = 5M
query_cache_size = 1024M
query_cache_type = 1
query_prealloc_size = 262144
query_alloc_block_size = 65535
transaction_alloc_block_size = 8192
transaction_prealloc_size = 4096
max_write_lock_count = 8
log-error
external-locking=FALSE
open_files_limit=50000
#expire-logs-days = 7
innodb_buffer_pool_size = 2024M
innodb_log_buffer_size = 8M
innodb_thread_concurrency = 0
innodb_read_io_threads = 64
innodb_write_io_threads = 64
innodb_log_file_size = 64M
innodb_flush_method = O_DIRECT
sort_buffer_size = 512K
read_rnd_buffer_size = 1M
tmp_table_size = 1G
max_heap_table_size = 512M
[mysqld_safe]
[mysqldump]
quick
max_allowed_packet = 16M
[isamchk]
key_buffer = 384M
sort_buffer = 384M
read_buffer = 256M
write_buffer = 256M
[myisamchk]
key_buffer = 384M
sort_buffer = 384M
read_buffer = 256M
write_buffer = 256M
#### Per connection configuration ####
sort_buffer_size = 1M
join_buffer_size = 1M
thread_stack = 192K
MySQLTuner results
-------- Storage Engine Statistics -------------------------------------------
[--] Status: +ARCHIVE +BLACKHOLE +CSV -FEDERATED +InnoDB +MRG_MYISAM
[--] Data in MyISAM tables: 5G (Tables: 306)
[--] Data in InnoDB tables: 269M (Tables: 441)
[--] Data in PERFORMANCE_SCHEMA tables: 0B (Tables: 52)
[!!] Total fragmented tables: 34
-------- Security Recommendations -------------------------------------------
[OK] All database users have passwords assigned
-------- Performance Metrics -------------------------------------------------
[--] Up for: 1d 12h 58m 9s (4M q [37.247 qps], 70K conn, TX: 21B, RX: 1B)
[--] Reads / Writes: 67% / 33%
[--] Total buffers: 7.0G global + 2.2M per thread (600 max threads)
[OK] Maximum possible memory usage: 8.3G (53% of installed RAM)
[OK] Slow queries: 0% (72/4M)
[OK] Highest usage of available connections: 2% (15/600)
[OK] Key buffer size / total MyISAM indexes: 3.5G/1.5G
[OK] Key buffer hit rate: 99.6% (304M cached / 1M reads)
[OK] Query cache efficiency: 97.0% (3M cached / 3M selects)
[OK] Query cache prunes per day: 11
[!!] Sorts requiring temporary tables: 14% (1K temp sorts / 9K sorts)
[!!] Temporary tables created on disk: 28% (1K on disk / 4K total)
[OK] Thread cache hit rate: 99% (15 created / 70K connections)
[OK] Table cache hit rate: 74% (1K open / 1K opened)
[OK] Open file limit used: 1% (831/50K)
[OK] Table locks acquired immediately: 99% (755K immediate / 755K locks)
[OK] InnoDB buffer pool / data size: 2.0G/269.9M
[OK] InnoDB log waits: 0
-------- Recommendations -----------------------------------------------------
General recommendations:
Run OPTIMIZE TABLE to defragment tables for better performance
Temporary table size is already large - reduce result set size
Reduce your SELECT DISTINCT queries without LIMIT clauses
Variables to adjust:
sort_buffer_size (> 512K)
read_rnd_buffer_size (> 1M)
query_cache_size = 1024M
query_cache_type = 1
Those are bad. Every time you write something to a table, the Query cache needs to have all references to that table removed. 1G is much too big; 50M is what I recommend. Also, unless you have demonstrated a need for the Query cache, I recommend turning it OFF.
On the other hand, "Query cache efficiency: 97.0% (3M cached / 3M selects)" says that you are using the QC, and it is effective. So perhaps you should leave it on, but shrink the size.
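query_cache_size is a dynamic variable, so shrinking it does not require a restart; a sketch using the 50M suggested above (put the same value in my.cnf so it persists across restarts):
SET GLOBAL query_cache_size = 50 * 1024 * 1024;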
As for loading -- are you 'replacing' the table, or adding to it? If you are replacing, then load into a new table, then RENAME TABLE to put it into place, as sketched below.
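A sketch of that replace-and-rename pattern (the table name daily and the file path are hypothetical; LOAD DATA LOCAL INFILE is what the question says is being used):
CREATE TABLE daily_new LIKE daily;
LOAD DATA LOCAL INFILE '/path/to/daily.csv'
INTO TABLE daily_new
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n';
-- atomic swap: readers never see a half-loaded table
RENAME TABLE daily TO daily_old, daily_new TO daily;
DROP TABLE daily_old;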
tmp_table_size = 1G
max_heap_table_size = 512M
These are dangerously high. If multiple threads needed tmp tables at the same time, you could run out of RAM. Put them back to the defaults.
"Temporary tables created on disk: 28%" cannot necessarily be improved by increasing those settings. If there are TEXT or BLOB columns, tmp tables will go to disk. If you like, show us SHOW CREATE TABLE and the naughty SELECTs.
"Run OPTIMIZE TABLE to defragment tables for better performance" -- That tool always says that. It is almost always bogus advice.
Are you loading only via LOAD DATA? You also mentioned UPDATE; please elaborate.
"5% for CPU" -- How many 'cores' do you have? Keep in mind that one MySQL connection will use only one CPU core.
"half of memory" -- That's bogus. MyISAM is using some of the other half for caching data. And nothing else can make use of the space.
Here's a potential optimization (for LOAD DATA): Sort the data by the PRIMARY KEY before doing LOAD DATA. Please provide SHOW CREATE TABLE; there could be further tips in this area.
Do you delete 'old' data? Is that time-based? If so, let's talk about PARTITIONing.
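If the deletes are time-based, a hedged sketch of what RANGE partitioning might look like (the table and column names are hypothetical, and every PRIMARY/UNIQUE key on the table must include the partitioning column, so the real layout depends on SHOW CREATE TABLE):
ALTER TABLE daily
PARTITION BY RANGE (TO_DAYS(created_at)) (
    PARTITION p2015_01 VALUES LESS THAN (TO_DAYS('2015-02-01')),
    PARTITION p2015_02 VALUES LESS THAN (TO_DAYS('2015-03-01')),
    PARTITION pmax     VALUES LESS THAN MAXVALUE
);
-- deleting a month of old data then becomes a near-instant metadata operation
ALTER TABLE daily DROP PARTITION p2015_01;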
Today I was optimizing my MariaDB server since my website was running too slowly.
My machine is a CentOS 7 box with 4 GB of RAM and 3 CPUs.
I ran a script called mysql_tuner.pl and the results were:
-- MYSQL PERFORMANCE TUNING PRIMER --
- By: Matthew Montgomery -
MySQL Version 5.5.40-MariaDB x86_64
Uptime = 0 days 0 hrs 0 min 12 sec
Avg. qps = 1
Total Questions = 16
Threads Connected = 1
Warning: Server has not been running for at least 48hrs.
It may not be safe to use these recommendations
To find out more information on how each of these
runtime variables effects performance visit:
http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html
Visit http://www.mysql.com/products/enterprise/advisors.html
for info about MySQL's Enterprise Monitoring and Advisory Service
SLOW QUERIES
The slow query log is NOT enabled.
Current long_query_time = 10.000000 sec.
You have 0 out of 37 that take longer than 10.000000 sec. to complete
Your long_query_time seems to be fine
BINARY UPDATE LOG
The binary update log is NOT enabled.
You will not be able to do point in time recovery
See http://dev.mysql.com/doc/refman/5.5/en/point-in-time-recovery.html
WORKER THREADS
Current thread_cache_size = 0
Current threads_cached = 0
Current threads_per_sec = 1
Historic threads_per_sec = 1
Your thread_cache_size is fine
MAX CONNECTIONS
Current max_connections = 151
Current threads_connected = 1
Historic max_used_connections = 1
The number of used connections is 0% of the configured maximum.
You are using less than 10% of your configured max_connections.
Lowering max_connections could help to avoid an over-allocation of memory
See "MEMORY USAGE" section to make sure you are not over-allocating
INNODB STATUS
Current InnoDB index space = 110 M
Current InnoDB data space = 1.39 G
Current InnoDB buffer pool free = 71 %
Current innodb_buffer_pool_size = 128 M
Depending on how much space your innodb indexes take up it may be safe
to increase this value to up to 2 / 3 of total system memory
MEMORY USAGE
Max Memory Ever Allocated : 274 M
Configured Max Per-thread Buffers : 419 M
Configured Max Global Buffers : 272 M
Configured Max Memory Limit : 691 M
Physical Memory : 4.00 G
Max memory limit seem to be within acceptable norms
KEY BUFFER
No key reads?!
Seriously look into using some indexes
Current MyISAM index space = 58 M
Current key_buffer_size = 128 M
Key cache miss rate is 1 : 0
Key buffer free ratio = 81 %
Your key_buffer_size seems to be fine
QUERY CACHE
Query cache is supported but not enabled
Perhaps you should set the query_cache_size
SORT OPERATIONS
Current sort_buffer_size = 2 M
Current read_rnd_buffer_size = 256 K
No sort operations have been performed
Sort buffer seems to be fine
JOINS
./mysql_tuner.pl: line 402: export: `2097152': not a valid identifier
Current join_buffer_size = 132.00 K
You have had 0 queries where a join could not use an index properly
Your joins seem to be using indexes properly
OPEN FILES LIMIT
Current open_files_limit = 1024 files
The open_files_limit should typically be set to at least 2x-3x
that of table_cache if you have heavy MyISAM usage.
Your open_files_limit value seems to be fine
TABLE CACHE
Current table_open_cache = 400 tables
Current table_definition_cache = 400 tables
You have a total of 801 tables
You have 400 open tables.
Current table_cache hit rate is 16%
, while 100% of your table cache is in use
You should probably increase your table_cache
You should probably increase your table_definition_cache value.
TEMP TABLES
Current max_heap_table_size = 16 M
Current tmp_table_size = 16 M
Of 347 temp tables, 9% were created on disk
Created disk tmp tables ratio seems fine
TABLE SCANS
Current read_buffer_size = 128 K
Current table scan ratio = 28 : 1
read_buffer_size seems to be fine
TABLE LOCKING
Current Lock Wait ratio = 0 : 295
Your table locking seems to be fine
So, I realized that I should raise table_open_cache...
I even confirmed it through the mysql command line:
+--------------------+
| @@table_open_cache |
+--------------------+
|                400 |
+--------------------+
1 row in set (0.00 sec)
MariaDB [(none)]>
OK, so I went into my.cnf
and edited it like this:
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
#table_cache = 1000
#max_open_files = 4000
#max_connections = 800
key_buffer_size = 60M
max_allowed_packet = 1G
table_open_cache = 2000
table_definition_cache = 2000
#sort_buffer_size = 2M
#read_buffer_size = 1M
#read_rnd_buffer_size = 8M
#myisam_sort_buffer_size = 64M
#thread_cache_size = 15
#query_cache_size = 32M
#thread_concurrency = 8
innodb_buffer_pool_size = 2G
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0
# Recommended in standard MySQL setup
sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES
[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
[client]
default-character-set=utf8
[mysql]
default-character-set=utf8
but table_open_cache is still 400!
My server is reading all the other variables except table_open_cache.
Results after changing the .cnf file:
TABLE CACHE
Current table_open_cache = 400 tables
Current table_definition_cache = 400 tables
You have a total of 801 tables
You have 400 open tables.
Current table_cache hit rate is 16%
, while 100% of your table cache is in use
You should probably increase your table_cache
You should probably increase your table_definition_cache value.
I've tried everything. Any help?
Thank you.
Increase the operating-system open-file limit with
ulimit -n 2000
then restart the server. When open_files_limit cannot be raised to cover the requested table_open_cache, mysqld silently reduces table_open_cache to fit, which is why the value in your my.cnf is being ignored.
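After the restart, something like this confirms whether the new limit actually reached mysqld:
SHOW GLOBAL VARIABLES LIKE 'open_files_limit';
SHOW GLOBAL VARIABLES LIKE 'table_open_cache';
If open_files_limit is still low, the OS limit was not raised for the mysqld process itself (on systemd-based systems the service's own limit applies), and table_open_cache will keep falling back to 400.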