I was wondering if there's a way to decrease the number of opened files in MySQL.
Details:
MySQL 5.0.92
Engine used: MyISAM
SHOW GLOBAL STATUS LIKE 'Opened_tables': 150K
SHOW VARIABLES LIKE '%open%':
open_files_limit 200000
table_open_cache 40000
Solutions tried:
Restart the server: it works, the Opened_tables counter resets to 0, but this isn't a good solution from my point of view, since you would need a restart every week because the counter increases fast.
FLUSH TABLES: the MySQL docs say it should force all tables in use to close, but this doesn't happen.
So any thoughts on this matter?
Generally, many open tables are nothing to worry about. If you come close to the OS limits, you can increase these limits in the kernel settings:
How do I change the number of open files limit in Linux?
MySQL opens tables for each session independently to have better concurrency.
The table_open_cache and max_connections system variables affect the maximum number of files the server keeps open. If you increase one or both of these values, you may run up against a limit imposed by your operating system on the per-process number of open file descriptors. Many operating systems permit you to increase the open-files limit, although the method varies widely from system to system.
In detail, this is explained here:
http://dev.mysql.com/doc/refman/5.5/en/table-cache.html
EDIT
To verify your assumption, you could temporarily decrease max_connections and table_open_cache with SET GLOBAL table_open_cache := newValue.
The value can be adjusted dynamically without a server restart.
Prior to MySQL 5.1, this variable was called table_cache.
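For example, a quick test session might look like this (a sketch; 10000 is an arbitrary test value, and on pre-5.1 servers you would use table_cache instead):
-- check the current value and the cache-miss counter first
SHOW VARIABLES LIKE 'table_open_cache';
SHOW GLOBAL STATUS LIKE 'Opened_tables';
-- lower it temporarily; this takes effect without a restart
SET GLOBAL table_open_cache := 10000;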
What I was trying to say is that decreasing this value will probably even have a negative impact on performance, in terms of fewer possible concurrent reads (the queue gets longer). Instead, you should try to increase the OS limit and increase open_files_limit, but maybe I just don't see the point here.
Related
Recently I upgraded my VPS from 1GB to 4GB of memory. I'd hoped that the queries (MySQL/InnoDB) would run faster with more memory, but unfortunately that's not the case. Does MySQL automatically take more memory when a server has more memory, or do I have to change some settings in my.cnf? And if so, what changes should I make?
MySQL will not automatically take advantage of more installed memory.
In your case (given that you are using InnoDB) you can do at least the following to improve the performance of MySQL:
Increase innodb_buffer_pool_size (the default value for this option is 128MB). This defines how much memory is dedicated to InnoDB for caching its table data and indexes. That means if you can allocate more memory, MySQL will cache more of its data, resulting in faster queries (because MySQL will look in memory instead of doing I/O operations for data lookups).
Of course you should allocate a reasonable amount of memory (not the whole 4G :)), maybe not more than 2G. You should try it and test it on the server for more accurate results. (Read this for more info before you change this option: https://dev.mysql.com/doc/refman/5.7/en/innodb-buffer-pool-resize.html)
Increase innodb_buffer_pool_instances. For your case, maybe 1 or 2 instances are more than enough. (You can read more here: https://dev.mysql.com/doc/refman/5.7/en/innodb-parameters.html#sysvar_innodb_buffer_pool_instances)
But before you start editing my.cnf, do the calculations for your case. Consider your MySQL server load, slow queries, etc. for a more accurate setup of the options; a sketch follows below.
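A minimal my.cnf sketch along these lines (the values are illustrative for a 4GB server, not a recommendation; innodb_buffer_pool_instances only takes effect after a server restart):
[mysqld]
innodb_buffer_pool_size = 2G
innodb_buffer_pool_instances = 2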
In the status below, the Opened_files count is 95349.
This value is increasing rapidly.
mysql> show global status like 'open_%';
Open_files = 721
Open_streams = 0
Open_table_definitions = 706
Open_tables = 741
Opened_files = 95349
Opened_table_definitions = 701
Opened_tables = 2851
Also see this:
mysql> show variables like '%open%';
have_openssl = DISABLED
innodb_open_files = 300
open_files_limit = 8502
table_open_cache = 4096
and
max_connections = 300
Is there any relation between open files and opened files? Will there be any performance issues because of the increasing Opened_files value? This is a server with 8 GB RAM and a 500 GB hard disk, with processor: Intel(R) Xeon(R) CPU E3-1220 V2 @ 3.10GHz. It is a dedicated MySQL server.
Here, for the command
ulimit -n
the count was 1024.
The server is hanging often. Using some online tools I have already optimized some parameters. I need to know what else should be optimized. In what case will the opened files count go down? Is it necessary that the opened files count stays within some limit? If so, how do I find the appropriate limit for my server? If I am not clear somewhere, please help me by asking more questions.
Opened_files is a counter of how many times you have opened a table since the last time you restarted mysqld (see status variable Uptime for the number of seconds since last restart).
Open_files is not a counter; it's the current number of open files.
If your Opened_files counter is increasing rapidly, you may be able to gain improvement to performance by increasing the size of the table_open_cache.
For some tips on the performance implications of this variable (and some cautions about setting it too high), see:
http://www.mysqlperformanceblog.com/2009/11/16/table_cache-negative-scalability/ (the problem described there seems to be solved finally in MySQL 5.6)
Re your comments:
You misunderstand the purpose of the counter. It always increases. It counts the number of times a particular operation has occurred since the last restart of mysqld. In this case, opening a file for a table.
Having a high value in a counter isn't necessarily a problem. It could mean simply that your mysqld has been running for many days or weeks without a restart. So you have to look at that number compared to your Uptime (that is, MySQL status variable Uptime, not Linux uptime).
What is more meaningful is the rate of increase of a counter, that is, how fast it grows in a given interval of time. That could indicate that you are re-opening tables rapidly.
Normally, MySQL shouldn't have to re-open tables, because it retains an open table handle for each table. But it can only have a finite number of those. That's what table_open_cache is for. In your case, your MySQL instance can "remember" that it has already opened up to 4096 tables at a time. If you need another table opened, it closes one of the file descriptors and opens the table you requested.
So if you have many thousands of tables (or partitions of tables) and you access a wide variety of them rapidly, you could see a lot of turnover in that table open cache. That would be indicated by the counter Opened_tables increasing rapidly.
Therefore sizing the table_open_cache higher means that MySQL can retain more open table handles, and possibly decrease the rate of turnover.
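To gauge that rate yourself, compare the counter against Uptime (a sketch; the interpretation is a rule of thumb, not an official threshold):
-- table opens since the last restart, and seconds since the last restart
SHOW GLOBAL STATUS LIKE 'Opened_tables';
SHOW GLOBAL STATUS LIKE 'Uptime';
-- dividing the first by the second gives the average re-open rate;
-- if it stays high between samples, table_open_cache is probably too small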
So the solution is either to upgrade my hardware (especially RAM) so that I can increase the table_open_cache beyond 4096, or to optimize the queries.
I'm trying to tune my Magento DB for optimal performance.
I'm running nginx, php-fpm and MySQL on an 8-CPU-core virtual machine with 4GB of RAM.
I've run the MySQL Tuning Primer and everything looks good apart from my table cache:
TABLE CACHE
Current table_open_cache = 1000 tables
Current table_definition_cache = 400 tables
You have a total of 2510 tables
You have 1000 open tables.
Current table_cache hit rate is 3%, while 100% of your table cache is in use
You should probably increase your table_cache
You should probably increase your table_definition_cache value.
and from mysqltuner
[!!] Table cache hit rate: 9% (1K open / 10K opened)
[!!] Query cache efficiency: 0.0% (0 cached / 209 selects)
The relevant settings from the my.cnf file:
table_cache = 1000
query_cache_limit = 1M
query_cache_size = 64M
The thing is, no matter what I increase my table_cache to, it seems to be consumed almost immediately. Is this normal for Magento? It seems abnormally high.
Does anyone have any tips about what I can do to improve this?
Thanks,
Ed
Check your MySQL config's query cache type setting:
http://dev.mysql.com/doc/refman/5.1/en/server-system-variables.html#sysvar_query_cache_type
If you set it to 0 or 2, then it will either not cache any queries or cache only the ones that you have specifically asked to cache. That means Magento would have to explicitly ask for cached query results (I'm not sure it does that). If you set it to 1, then it will cache all queries except those that explicitly ask for no query cache.
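To see what you are currently running with, something like this should work (a sketch; note that on newer MySQL versions the query cache cannot be enabled at runtime if the server was started with it disabled):
SHOW VARIABLES LIKE 'query_cache_type';
SHOW VARIABLES LIKE 'query_cache_size';
-- cache all cacheable queries, if the server allows changing this at runtime
SET GLOBAL query_cache_type = 1;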
Table cache refers to potential open file pointers. It could be consumed rather quickly, and will just roll off unused entries as needed. From MySQL's documentation:
The table_cache and max_connections system variables affect the maximum number of files the server keeps open. If you increase one or both of these values, you may run up against a limit imposed by your operating system on the per-process number of open file descriptors. Many operating systems permit you to increase the open-files limit, although the method varies widely from system to system. Consult your operating system documentation to determine whether it is possible to increase the limit and how to do so.
table_cache is related to max_connections. For example, for 200 concurrent running connections, you should have a table cache size of at least 200 * N, where N is the maximum number of tables per join in any of the queries which you execute. You must also reserve some extra file descriptors for temporary tables and files.
Make sure that your operating system can handle the number of open file descriptors implied by the table_cache setting. If table_cache is set too high, MySQL may run out of file descriptors and refuse connections, fail to perform queries, and be very unreliable. You also have to take into account that the MyISAM storage engine needs two file descriptors for each unique open table. You can increase the number of file descriptors available to MySQL using the --open-files-limit startup option to mysqld. See Section C.5.2.18, “'File' Not Found and Similar Errors”.
The cache of open tables is kept at a level of table_cache entries. The default value is 64; this can be changed with the --table_cache option to mysqld. Note that MySQL may temporarily open more tables than this to execute queries.
MySQL closes an unused table and removes it from the table cache under the following circumstances:
When the cache is full and a thread tries to open a table that is not in the cache.
When the cache contains more than table_cache entries and a table in the cache is no longer being used by any threads.
When a table flushing operation occurs. This happens when someone issues a FLUSH TABLES statement or executes a mysqladmin flush-tables or mysqladmin refresh command.
When the table cache fills up, the server uses the following procedure to locate a cache entry to use:
Tables that are not currently in use are released, beginning with the table least recently used.
If a new table needs to be opened, but the cache is full and no tables can be released, the cache is temporarily extended as necessary. When the cache is in a temporarily extended state and a table goes from a used to unused state, the table is closed and released from the cache.
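As a worked example of the 200 * N rule above (the numbers are hypothetical): with 200 concurrent connections and queries joining at most 4 tables, you would want
table_cache >= 200 * 4 = 800
plus some extra file descriptors reserved for temporary tables and files.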
I'm getting the following error when I try to log onto phpMyAdmin:
User ** already has more than 'max_user_connections' active connections
Could anyone let me know how to close these DB connections from the MySQL server end?
Thank you for your time!
Read the max_connections documentation to solve your problem:
If clients encounter Too many connections errors when attempting to connect to the mysqld server, all available connections are in use by other clients.
The permitted number of connections is controlled by the max_connections system variable. The default value is 151 to improve performance when MySQL is used with the Apache Web server. To support more connections, set max_connections to a larger value.
First: check your database's current max_connections variable:
SHOW VARIABLES LIKE 'max_connections';
+-----------------+-------+
| Variable_name | Value |
+-----------------+-------+
| max_connections | 151 |
+-----------------+-------+
Then try to increase the max_connections parameter, either by running a command like:
SET GLOBAL max_connections = 300;
Or by setting this parameter in my.cnf, which is usually located at /etc/my.cnf:
vi /etc/my.cnf
max_connections = 300
Finally: restart the MySQL service.
FYI
You can also check max_user_connections. However, they are related like this:
max_connections sets the total connection limit
max_user_connections sets the limit per user
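For example (a sketch):
SHOW VARIABLES LIKE 'max_connections';
SHOW VARIABLES LIKE 'max_user_connections';
-- a max_user_connections value of 0 means there is no per-user limit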
====
As Sushilzzz asked: can this be caused by low RAM?
Short answer: No
Long answer: yes. If RAM size is low and MySQL can't respond as fast as needed, there will be many open connections and you can easily hit the max connection limit.
The estimated number of max connections per 1GB of RAM is 100 (if you don't have any other processes using RAM at the same time). I usually use ~75 for max_connections per 1GB of RAM:
RAM   max_connections
1GB 70
2GB 150
4GB 300
8GB 500
This happens due to a limit specified in the MySQL configuration, the system variable max_user_connections.
Solutions
Killing the queries which are stuck at the backend is only a solution I would suggest if it is a SELECT query. Queries that change data, like UPDATE/DELETE/INSERT, are not to be killed.
Secondly, you can use the command mysqladmin processlist to check what is going on inside MySQL.
If locking is causing your problem, you can check which engine you are using and change it to another. IBM's SolidDB documentation on table locks might help you. Though there may be another reason for this. (For example, perhaps your queries are taking too long because of an unoptimized query, or the table size is too big, or you have a spammed database).
Your best bet is to increase max_connections. For a MySQL instance serving multiple different web apps (raw php, WordPress, phpBB), you probably want a value of at least 60 for this.
Issue this command and you'll find out how many global connections you have available:
show global variables like '%connections%'
You can find out how many connections are in use at any given moment like this:
show status like '%connected%'
You can find out what each connection is doing like this:
show full processlist
I would try for a global value of at least 100 connections if I were you. Your service provider ought to be able to help you if you don't have access to do this. It needs to be done in the my.cnf file configuration for MySQL. Don't set it too high or you run the risk of your MySQL server process gobbling up all your RAM.
A second approach allows you to allocate those overall connections to your different MySQL users. If you have different MySQL usernames for each of your web apps, this approach will work for you. This approach is written up here: https://www.percona.com/blog/2014/07/29/prevent-mysql-downtime-set-max_user_connections/
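A sketch of that per-user approach (the account name is hypothetical):
-- cap a single application account at 10 concurrent connections
GRANT USAGE ON *.* TO 'wordpress'@'localhost' WITH MAX_USER_CONNECTIONS 10;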
The final approach to controlling this problem is more subtle. You're probably using the Apache web server as underlying tech. You can reduce the number of Apache tasks running at the same time to, paradoxically, increase throughput. That's because Apache queues up requests. If it has a few tasks efficiently banging through the queue, that is often faster than lots of tasks because there's less contention. It also requires fewer MySQL connections, which will solve your immediate problem. That's explained here: Restart Mysql automatically when ubuntu on EC2 micro instance kills it when running out of memory
By the way, web apps like WordPress use a persistent connection pool. That is, they establish connections to the MySQL database, hold them open, and reuse them. If your apps are busy, each connection's lifetime ought to be several minutes.
First, this is a hack, but works, especially on a shared host.
We all have bad "neighbors" sometimes, right?
If you have access to your /etc/, increase the limit from 30 to 50 in your my.cnf, or through the information schema.
To suppress the error message the visitor might see, use @mysql_connect().
If there are more than 30 MUCs (max user connections), use the "or die()" statement to stop the query.
Replace the "or die" message with die(header("Location: THIS PAGE")) and be sure to call mysql_close();
Yes, it will cause a delay in page loading. But it's better to load than to show a white screen of death, or worse, error messages that visitors have no understanding of.
It looks like queries are stuck on the server. Restart and everything will be OK.
If you are on shared hosting, just contact your hosting provider and they will fix it.
I'm a Namecheap user and they solved it.
In my case I have a limit of 10 user connections, and I do not have the right to change the max_user_connections variable. You can check the user connection limit like so:
show variables like "max_user_connections";
You can set the maximum number of connections like so, if you have permission:
SET GLOBAL max_connections = 300;
Otherwise, you can view the processes in use like so:
Show full processlist;
And kill some of the processes by Id like so. In your case, replace the number with an Id from the previous SHOW FULL PROCESSLIST output:
kill 10254745;
The following is my default production MySQL configuration file (my.cnf) for a pure UTF-8 setup with InnoDB as the default storage engine.
[server]
bind-address=127.0.0.1
innodb_file_per_table
default-character-set=utf8
default-storage-engine=innodb
The setup does the following:
Binds to localhost:3306 (loopback) instead of the default *:3306 (all interfaces). Done to increase security.
Sets up one table space per table. Done to increase maintainability.
Sets the default character set to UTF-8. Done to allow for easy internationalization by default.
Sets the default storage engine to InnoDB. Done to allow for row-level-locking by default.
Assume that you could further improve the setup by adding a maximum of three (3) configuration parameters. Which would you add and why?
An improvement would in this context mean either a performance improvement, a reliability improvement or ease-of-use/ease-of-maintainability increase. You can assume that the machine running the MySQL instance will have 1000 MB of RAM.
To cache more data:
innodb_buffer_pool_size = 512M
If you write lots of data, to avoid too much log switching:
innodb_log_file_size = 128M
There is no third one I'd add in every case; all the others depend on your workload.
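As a my.cnf sketch of the above (note that innodb_log_file_size only takes effect after a restart, and on servers older than 5.6.8 the old ib_logfile* files must be removed after a clean shutdown before the new size is used):
[mysqld]
innodb_buffer_pool_size = 512M
innodb_log_file_size = 128M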
Allocating more memory than the default of 8M to InnoDB (using innodb_buffer_pool_size) is surely an enhancement. Regarding the value: on a dedicated database server like yours, you can set it up to 80% of your RAM, and the higher you set this value, the fewer interactions with the hard disk there will be. Just to give my two cents, I'd like to mention that you can get some performance boost by tweaking the value of innodb_flush_log_at_trx_commit, however sacrificing ACID compliance... According to the MySQL manual:
If the value of innodb_flush_log_at_trx_commit is 0, the log buffer is written out to the log file once per second and the flush to disk operation is performed on the log file, but nothing is done at a transaction commit.
So you might lose some data that was not written properly to the database due to a crash or other malfunction. Again, according to the MySQL manual:
However, InnoDB's crash recovery is not affected and thus crash recovery does work regardless of the value.
So, I would suggest:
innodb_flush_log_at_trx_commit = 0
Finally, if you have a high connection rate (i.e. if you need to configure MySQL to support a web application that accesses the database), then you should consider increasing the maximum number of connections to something like 500. But since this is more or less trivial and well known, I'd like to emphasize the importance of back_log to ensure connectivity.
I hope this information will help you optimize your database server.
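If you want to test the trade-off before committing it to my.cnf, the variable is dynamic (a sketch; 0, 1 and 2 are the only valid settings):
-- 0 = write and flush the log about once per second, nothing at commit
-- 1 = flush at every commit (the ACID-safe default)
-- 2 = write at every commit, flush about once per second
SET GLOBAL innodb_flush_log_at_trx_commit = 0;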
Increase the innodb buffer pool size, as big as you can practically make it:
innodb_buffer_pool_size=768M
You'll also want some key buffer space for temp tables:
key_buffer_size=32M
Others would depend on what you are doing with the database, but table_cache or query_cache_size would be a couple of other potential candidates.
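Taken together, a my.cnf sketch for the 1000 MB machine might look like this (the table_cache value is just a placeholder; size it to your workload as discussed above):
[mysqld]
innodb_buffer_pool_size = 768M
key_buffer_size = 32M
table_cache = 512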