In the status output below, the Opened_files count is 95349, and this value is increasing rapidly.
mysql> show global status like 'open_%';
Open_files = 721
Open_streams = 0
Open_table_definitions = 706
Open_tables = 741
Opened_files = 95349
Opened_table_definitions = 701
Opened_tables = 2851
Also see this:
mysql> show variables like '%open%';
have_openssl = DISABLED
innodb_open_files = 300
open_files_limit = 8502
table_open_cache = 4096
and:
max_connections = 300
Is there any relation between open files and opened files? Will there be any performance issues because of the increasing Opened_files value? This is a dedicated MySQL server with 8 GB RAM, a 500 GB hard disk, and an Intel(R) Xeon(R) CPU E3-1220 V2 @ 3.10GHz processor.
On this server, the command
ulimit -n
returned a count of 1024.
The server is hanging often. Using some online tools I have already optimised some parameters. What else should be optimized? In what case will the opened files count go down? Is it necessary for the opened files count to stay within some limit, and if so, how do I find the appropriate limit for my server? If I am unclear somewhere, please help me by asking more questions.
Opened_files is a counter of how many times mysqld has opened a file since the last time you restarted it (see the status variable Uptime for the number of seconds since the last restart).
Open_files is not a counter; it's the current number of open files.
If your Opened_files counter is increasing rapidly, you may be able to gain a performance improvement by increasing the size of the table_open_cache.
For some tips on the performance implications of this variable (and some cautions about setting it too high), see:
http://www.mysqlperformanceblog.com/2009/11/16/table_cache-negative-scalability/ (the problem described there seems to be solved finally in MySQL 5.6)
Re your comments:
You misunderstand the purpose of the counter. It always increases. It counts the number of times a particular operation has occurred since the last restart of mysqld. In this case, opening a file for a table.
Having a high value in a counter isn't necessarily a problem. It could mean simply that your mysqld has been running for many days or weeks without a restart. So you have to look at that number compared to your Uptime (that is, MySQL status variable Uptime, not Linux uptime).
What is more meaningful is the rate of increase of a counter, that is, how fast it grows in a given interval of time. That could indicate that you are re-opening tables rapidly.
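For example, you can sample the counter twice and difference it (a rough sketch; the 60-second interval is just an illustrative choice):

mysql> SHOW GLOBAL STATUS LIKE 'Opened_tables';
mysql> SELECT SLEEP(60);
mysql> SHOW GLOBAL STATUS LIKE 'Opened_tables';

The difference between the two Opened_tables readings, divided by 60, is the current re-open rate per second; dividing the raw counter by Uptime gives only a lifetime average, which can hide recent spikes.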
Normally, MySQL shouldn't have to re-open tables, because it retains an open table handle for each table. But it can only have a finite number of those. That's what table_open_cache is for. In your case, your MySQL instance can "remember" that it has already opened up to 4096 tables at a time. If you need another table opened, it closes one of the file descriptors and opens the table you requested.
So if you have many thousands of tables (or partitions of tables) and you access a wide variety of them rapidly, you could see a lot of turnover in that table open cache. That would be indicated by the counter Opened_tables increasing rapidly.
Therefore sizing the table_open_cache higher means that MySQL can retain more open table handles, and possibly decrease the rate of turnover.
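If you do raise table_open_cache, raise the file-descriptor limits along with it. A minimal my.cnf sketch (the numbers are illustrative, not measured optima for this server):

[mysqld]
table_open_cache = 8192
open_files_limit = 32768    # must also fit under the OS ulimit -n for the mysqld process

After restarting, watch whether Opened_tables still climbs quickly; if it does, either the cache is still too small or the workload really does touch that many distinct tables.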
So the solution is either to upgrade my hardware (especially RAM) so that I can increase table_open_cache beyond 4096, or to optimize the queries.
Upon increasing the total number of pooled connections from the application to MySQL to 40000, I am getting the error ER_CANT_CREATE_THREAD.
My MySQL conf file looks like this:
innodb_buffer_pool_size=100G # from 128M to use half of your RAM for data/index storage
innodb_buffer_pool_instances=32 # from 1 to reduce mutex contention
innodb_lru_scan_depth=1024 # lower values conserve CPU cycles used for this function
max_connections=65536
max_prepared_stmt_count=204800
max_connect_errors=100000
thread_cache_size=8192
innodb_log_buffer_size = 256M
query_cache_size = 0
innodb_read_io_threads = 64
innodb_write_io_threads = 64
innodb_thread_concurrency = 0
innodb_io_capacity = 2000
sql_mode=STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION
max_connections=65536 is unreasonably, perhaps dangerously, high. Lower it to 100.
If 90000 connections are active at once, MySQL will melt down. It is better to avoid too many connections than to let them stumble over each other. (Consider what would happen if you let 10000 people into a grocery store at the same time. Traffic would be so clogged up that people might take an hour to buy just one item.)
If your IoT action is a quick connect-insert-disconnect, that should take very few milliseconds.
If the connections come in very smoothly and take 10ms elapsed each, then 90K inserts per minute is 1,500 per second; at roughly 100 inserts per second per connection, they could be handled with only max_connections=15.
Benchmarks have shown that MySQL gets bottlenecked at not much more than the number of CPU cores that you have.
So limiting the number of concurrently active connections to max_connections=100 should be a safe compromise.
Set back_log to a higher number, maybe 1000. (I do not have a feel for what a good number is.) The idea is that delaying the connection is better than letting it in, only to be stalled.
I am confident that MySQL can handle 90K IoT inserts per minute, but only if you take the advice here.
You mentioned "connection pooling". The 100 and 1000 should be moved back into the pooling layer. Or get rid of the layer. (It is hard to say which would be better.)
Please lower innodb_read_io_threads and innodb_write_io_threads to 4. Those variables generally only need to be increased for Docker environments.
If you want more than 4 threads each, make sure there is enough RAM allocated.
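Collected into one place, the changes suggested in this answer would look like this in my.cnf (these are the values proposed above, not benchmarked optima):

[mysqld]
max_connections = 100          # down from 65536
back_log = 1000                # queue incoming connections instead of admitting them all
innodb_read_io_threads = 4     # down from 64
innodb_write_io_threads = 4    # down from 64

After a restart, watch Threads_running and Max_used_connections to confirm that 100 leaves enough headroom.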
I searched a lot of topics, but couldn't find any information about how to find which query consumes more CPU time.
I'm using MySQL 8.0 and MySQL Workbench.
I can find the average and total execution time of queries there, but I'm not sure whether that is an indicator of CPU usage. Queries can take longer to execute because of low memory and additional reads from disk.
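One starting point is the statement digest summary in performance_schema (a sketch; note that SUM_TIMER_WAIT is total elapsed time, not pure CPU time, so it includes the disk waits mentioned above):

SELECT DIGEST_TEXT,
       COUNT_STAR,
       SUM_TIMER_WAIT / 1000000000000 AS total_seconds
FROM performance_schema.events_statements_summary_by_digest
ORDER BY SUM_TIMER_WAIT DESC
LIMIT 10;

This ranks query shapes by the total time they have consumed since the summary was last reset.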
Suggestions to consider for your my.ini [mysqld] section to improve your instance's performance (a quick verification query follows the list):
thread_cache_size=100 # from 10 to reduce threads_created count of 34,663
innodb_flushing_avg_loops=5 # from 30 to reduce loop delay and lower innodb_buffer_pool_pages_dirty (1,660 in this STATUS report)
max_connections=100 # from 151 to reduce RAM footprint, max_used_connections was 31
innodb_io_capacity=5000 # from 200 stop/start required
if you have innodb_io_capacity_max in your my.ini, REMOVE it to allow the new max of 10000
innodb_open_files=-1 # from 300; -1 auto-calculates from the table_open_cache current value of 2000
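A quick way to verify the first change is to watch the counter it is meant to reduce (a check added here, not part of the original suggestions):

SHOW GLOBAL STATUS LIKE 'Threads_created';

If Threads_created keeps climbing after raising thread_cache_size, the cache is still too small.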
For additional suggestions, view my profile and Network profile for contact info, and get in touch by email or Skype, please.
Congratulations on releasing resources through proper use of the com_dealloc_sql and com_stmt_close coding techniques.
I have spent several weeks crunching on this to no avail, so I'm hopeful you may be able to help. Generally, I have an update query that takes forever to run (I've given up after 12 hours). To knock the obvious out of the way, I have an index on the columns. Also, I am totally self-taught on MySQL, so I may need additional clarification on data / processes etc. This DB is for my personal use, offline. Said another way... this is not my day job. While I enjoy MySQL, I am not a super-user.
First, my system specs...
Laptop Samsung QX410
Windows 7, 64 bit
Intel i5 M 480 @ 2.67 GHz
RAM: 8 GB (7.79 available)
WAMP 2.5 with MySQL v5.6.17
Tables are InnoDB
MySQL setup:
# The MySQL server
[wampmysqld]
port = 3306
socket = /tmp/mysql.sock
key_buffer_size = 512M
max_allowed_packet = 32M
sort_buffer_size = 512K
net_buffer_length = 32K
read_buffer_size = 256K
read_rnd_buffer_size = 512K
myisam_sort_buffer_size = 8M
basedir=c:/wamp/bin/mysql/mysql5.6.17
log-error=c:/wamp/logs/mysql.log
datadir=c:/wamp/bin/mysql/mysql5.6.17/data
# Uncomment the following if you are using InnoDB tables
innodb_data_home_dir = C:\mysql\data/
innodb_data_file_path = ibdata1:10M:autoextend
innodb_log_group_home_dir = C:\mysql\data/
innodb_log_arch_dir = C:\mysql\data/
# You can set .._buffer_pool_size up to 50 - 80 %
# of RAM but beware of setting memory usage too high
innodb_buffer_pool_size = 4000M
innodb_additional_mem_pool_size = 32M
# Set .._log_file_size to 25 % of buffer pool size
innodb_log_file_size = 512M
innodb_log_buffer_size = 256M
innodb_flush_log_at_trx_commit = 0
innodb_lock_wait_timeout = 50
Issue in more detail:
I have two tables Trade_List and Cusip_Table and am trying to populate one column in Trade_List (I need to pre-populate this value, since many queries will be run against it).
Trade_List has 11 columns, two of which are relevant.
CUSIP (varchar 45) - generally a 9-character alphanumeric identifier.
TICKER (varchar 45) - generally this is 10 letters or less. I want to populate this.
This table has roughly 10 million rows.
I have removed all indices from this table except one on CUSIP.
Cusip_Table has 5 columns, two of which are relevant.
CUSIP (varchar 45) - generally a 9-character alphanumeric identifier.
TICKER (varchar 45) - generally this is 10 letters or less. This is already populated.
This table has roughly 70,000 rows.
I have an index 'CTDuplicateCheck' on (Cusip, Ticker).
When I run...
Select A.cusip, B.ticker
From Trade_list A, Cusip_table B
Where A.cusip = B.cusip;
... MySQL indicates that the query takes about 13 seconds, but in reality it seems to take about a minute, so I ran profiling on it...
starting 0.000093
checking permissions 0.000006
checking permissions 0.000005
Opening tables 0.000041
init 0.000037
System lock 0.000013
optimizing 0.000015
statistics 0.000041
preparing 0.000030
executing 0.000002
Sending data 10.982211
end 0.000014
query end 0.000010
closing tables 0.000018
freeing items 0.000070
logging slow query 0.000004
cleaning up 0.000019
I don't know what any of this means, but 10 seconds for sending data seems reasonable (the return set is ~9M rows).
Just for kicks, and to make sure the index is working, I ran an 'explain' (shown below). I think this says that my index is working correctly.
id  select_type  table  type   possible_keys     key               key_len  ref                      rows   Extra
1   SIMPLE       B      index  CTDuplicateCheck  CTDuplicateCheck  96       NULL                     53010  Using where; Using index
1   SIMPLE       A      ref    TL1Cusip          TL1Cusip          48       13f_master_data.B.CUSIP  154    Using index
NOTE: 13f_Master_Data is the name of the database.
At any rate, when I run the same query but change it to an update, everything falls apart and it will not complete. I would expect things to run a bit slower, but 12+ hours? I just can't imagine that this is normal for an update query that touches 9M rows. The original INSERT took less than an hour, and the select takes less than a minute. Code for the update is below...
Update Trade_list A, Cusip_table B
Set A.ticker = B.ticker
Where A.cusip = B.cusip;
Stuff I have tried:
Removed almost all indexes from Trade_List. I left one on CUSIP.
Upgraded RAM from 4 GB to 8 GB. This did nothing. Upon further investigation, my CPU and RAM are not limiting factors: CPU generally sits around 30%, and RAM never gets above 5 GB utilized. This leads me to believe that the issue is I/O. Is it possible MySQL is doing a full table scan? Why would it not utilize the index? (See the EXPLAIN sketch after this list.)
Changed all types of memory allocations per http://www.percona.com/blog/2013/09/20/innodb-performance-optimization-basics-updated/ and https://rtcamp.com/tutorials/mysql/mysqltuner/ and http://www.percona.com/blog/2006/09/29/what-to-tune-in-mysql-server-after-installation/. As far as I can tell, this did nothing. Again, I don't think the limiting factor is memory available. Also, I have no doubt that my memory allocations (shown above) are completely screwed up. I had no idea what I was doing and changed things all over the place. That said, I don't think the memory changes made anything any worse.
Upgraded MySQL and WAMP versions (did nothing).
Read and learned a lot about indexes. Candidly, I know very little about MySQL, and am totally self-taught. I have learned a lot about memory on this foray, but need someone to step in and tell me where I have totally derailed. This database is for my own offline analysis. I am the only user.
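One way to check the full-table-scan suspicion directly: MySQL 5.6 can EXPLAIN an UPDATE, so the plan of the actual statement (not just the equivalent SELECT) can be inspected:

EXPLAIN UPDATE Trade_list A, Cusip_table B
SET A.ticker = B.ticker
WHERE A.cusip = B.cusip;

If the plan matches the SELECT's plan, the slowness is more likely the write and transaction overhead of touching ~9M rows in one statement than a missing index.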
I am happy to provide additional information that may help to analyze the issue. I'm at a total loss on this. The only thing I can come up with is that the system is doing full scans row by row... for every look-up in the update. Though, this could be completely false.
Your thoughts are much appreciated.
PM
I was wondering if there's a way to decrease the opened files count in MySQL.
Details:
mysql 5.0.92
engine used : MyISAM
SHOW GLOBAL STATUS LIKE 'Opened_tables' : 150K
SHOW VARIABLES LIKE '%open%' :
open_files_limit 200000
table_open_cache 40000
Solutions tried:
restart server: this works, the Opened_tables counter goes back to 0, but it isn't a good solution from my point of view, since you would need a restart every week because the counter increases fast
FLUSH TABLES: as the MySQL docs say, it should force all tables in use to close, but this doesn't happen
So any thoughts on this matter?
Generally, many open tables are nothing to worry about. If you come close to the OS limits, you can increase those limits in the kernel settings:
How do I change the number of open files limit in Linux?
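On most Linux distributions that means raising the nofile limit for the MySQL user, for example in /etc/security/limits.conf (a sketch; it assumes mysqld runs as the user mysql, and 65535 is an illustrative value):

mysql soft nofile 65535
mysql hard nofile 65535

mysqld must be restarted before the new limit takes effect, and on some systems the service manager's own limit has to be raised as well.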
MySQL opens tables for each session independently to have better concurrency.
The table_open_cache and max_connections system variables affect the maximum number of files the server keeps open. If you increase one or both of these values, you may run up against a limit imposed by your operating system on the per-process number of open file descriptors. Many operating systems permit you to increase the open-files limit, although the method varies widely from system to system.
In detail, this is explained here
http://dev.mysql.com/doc/refman/5.5/en/table-cache.html
EDIT
To verify your assumption you could decrease max_connections and table_open_cache temporarily by SET GLOBAL table_open_cache := newValue.
The value can be adjusted dynamically without a server restart.
Prior to MySQL 5.1, this variable was called table_cache.
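For example (20000 is a placeholder value, not a recommendation; on this 5.0 server the variable is table_cache, as noted above):

SET GLOBAL table_open_cache := 20000;
SHOW GLOBAL VARIABLES LIKE 'table_open_cache';

The new value takes effect immediately for subsequent table opens, but is lost on restart unless you also put it in the configuration file.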
What I was trying to say is that decreasing this value will probably even have a negative impact on performance in terms of fewer possible concurrent reads (the queue gets longer). Instead, you should try to increase the OS limit and raise open_files_limit, but maybe I just don't see the point here.
I'm trying to tune my Magento DB for optimal performance.
I'm running nginx, php-fpm and MySQL on an 8-CPU-core virtual machine with 4 GB of RAM.
I've run the MySQL Tuning Primer and everything looks good apart from my Table Cache:
TABLE CACHE
Current table_open_cache = 1000 tables
Current table_definition_cache = 400 tables
You have a total of 2510 tables
You have 1000 open tables.
Current table_cache hit rate is 3%, while 100% of your table cache is in use
You should probably increase your table_cache
You should probably increase your table_definition_cache value.
and from mysqltuner
[!!] Table cache hit rate: 9% (1K open / 10K opened)
[!!] Query cache efficiency: 0.0% (0 cached / 209 selects)
The relevant settings from the my.cnf file:
table_cache = 1000
query_cache_limit = 1M
query_cache_size = 64M
The thing is, no matter what I increase my table_cache to, it seems to be consumed almost immediately. Is this normal for Magento? It seems abnormally high.
Does anyone have any tips about what I can do to improve this?
Thanks,
Ed
Check your MySQL config's query cache type setting:
http://dev.mysql.com/doc/refman/5.1/en/server-system-variables.html#sysvar_query_cache_type
If you set it to 0 or 2 then it will either not cache any queries or only cache the ones that you have specifically asked to cache. That means Magento would have to explicitly ask for cached query results (I'm not sure it does that). If you set it to 1 then it will cache all queries except those that explicitly ask for no query cache.
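A minimal my.cnf sketch of the cache-everything case described above (64M matches the query_cache_size already shown in the question):

query_cache_type = 1       # cache all cacheable SELECTs unless SQL_NO_CACHE is specified
query_cache_size = 64M
query_cache_limit = 1M     # do not cache individual results larger than this

You can confirm the running value with SHOW VARIABLES LIKE 'query_cache_type';.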
Table cache refers to potential open file pointers. It could be consumed rather quickly, and will just roll off unused entries as needed. From MySQL's documentation:
The table_cache and max_connections system variables affect the maximum number of files the server keeps open. If you increase one or both of these values, you may run up against a limit imposed by your operating system on the per-process number of open file descriptors. Many operating systems permit you to increase the open-files limit, although the method varies widely from system to system. Consult your operating system documentation to determine whether it is possible to increase the limit and how to do so.

table_cache is related to max_connections. For example, for 200 concurrent running connections, you should have a table cache size of at least 200 * N, where N is the maximum number of tables per join in any of the queries which you execute. You must also reserve some extra file descriptors for temporary tables and files.

Make sure that your operating system can handle the number of open file descriptors implied by the table_cache setting. If table_cache is set too high, MySQL may run out of file descriptors and refuse connections, fail to perform queries, and be very unreliable. You also have to take into account that the MyISAM storage engine needs two file descriptors for each unique open table. You can increase the number of file descriptors available to MySQL using the --open-files-limit startup option to mysqld. See Section C.5.2.18, "'File' Not Found and Similar Errors".

The cache of open tables is kept at a level of table_cache entries. The default value is 64; this can be changed with the --table_cache option to mysqld. Note that MySQL may temporarily open more tables than this to execute queries.

MySQL closes an unused table and removes it from the table cache under the following circumstances:

When the cache is full and a thread tries to open a table that is not in the cache.

When the cache contains more than table_cache entries and a table in the cache is no longer being used by any threads.

When a table flushing operation occurs. This happens when someone issues a FLUSH TABLES statement or executes a mysqladmin flush-tables or mysqladmin refresh command.

When the table cache fills up, the server uses the following procedure to locate a cache entry to use:

Tables that are not currently in use are released, beginning with the table least recently used.

If a new table needs to be opened, but the cache is full and no tables can be released, the cache is temporarily extended as necessary. When the cache is in a temporarily extended state and a table goes from a used to unused state, the table is closed and released from the cache.
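As a worked example of the sizing rule quoted above: with 200 concurrent connections and queries that join at most N = 4 tables each, the table cache should hold at least 200 * 4 = 800 entries, plus some extra file descriptors reserved for temporary tables and files.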