My Opened_tables counter is increasing rapidly, at a rate of approximately 100 tables per 5 seconds.
I have my table_cache set to 10,000, which I believe is pretty high.
MySQL performance on my website is really slow, and I believe this is the bottleneck.
What can I do to slow down Opened_tables?
Here are MySQL's process percentages:
12.5% CPU
2.2% MEM
And my Server load:
Server load 1.70 (26 CPUs)
Memory Used 8.26% (173288 of 2097152)
This is my.cnf:
[mysqld]
skip-innodb
local-infile=0
set-variable = max_connections=200
safe-show-database
max_tmp_tables=1
query_cache_size=256M
query_cache_limit=128M
query_cache_type=1
key_buffer_size=256M
skip-locking
max_allowed_packet = 1M
table_cache = 10000
sort_buffer_size = 1M
read_buffer_size = 768K
read_rnd_buffer_size = 1M
myisam_sort_buffer_size = 32M
thread_cache_size = 20
thread_concurrency = 4
log-slow-queries = /var/log/mysql/mysql-slow.log
long_query_time = 1
interactive_timeout=3
wait_timeout=3
connect_timeout=5
max_tmp_tables=1
That was the problem. I upped that value (to 100) and Opened_tables has slowed to hardly any increase.
I hope this helps someone!
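For anyone wanting to watch the same counter on their own server, these are standard statements (on older versions the cache variable is called table_cache rather than table_open_cache):
-- Total tables opened since startup; watch how fast it grows
SHOW GLOBAL STATUS LIKE 'Opened_tables';
-- The cache it should be staying inside
SHOW GLOBAL VARIABLES LIKE 'table_open_cache';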
EDIT:
Scrap that.
There is a table that is constantly being updated, deleted from, and inserted into.
It is used to monitor which users are online.
Every time there is an UPDATE/INSERT/DELETE, the table is opened again and Opened_tables increments.
Is this normal?
I have MySQL 5.6.40, about 12 GB of RAM, and 24 cores (but Apache+FPM threads also run on the machine, so only about 6 GB is left for MySQL).
The tables are mostly MyISAM, but there are also a few InnoDB ones.
At the beginning I had this configuration:
key_buffer_size = 256M
sort_buffer_size = 1M
read_buffer_size = 1M
join_buffer_size = 1M
thread_stack = 192K
thread_cache_size = 8
tmp_table_size = 64M
max_heap_table_size = 64M
myisam-recover = BACKUP
max_connections = 200
wait_timeout = 1200
query_cache_type = 1
query_cache_min_res_unit = 2k
query_cache_limit = 4M
query_cache_size = 512M
table_open_cache = 5000
innodb_buffer_pool_size = 2048M
innodb_buffer_pool_instances = 8
Unfortunately, MySQL sometimes slows down for a short moment (only when there are a lot of bulk operations, such as imports or recalculations). RAM usage is quite stable, but CPU usage increases.
Tuning Primer says that query_cache_size shouldn't be larger than 128 MB. But what if I have a lot of RAM, and query cache efficiency is 79.5% (720M cached / 906M selects)? Should I set a lower query cache size and limit to increase performance?
Now I've got more RAM (24 GB) and more cores (32), and I want to use them for MySQL. So I want to change the configuration as below:
key_buffer_size = 256M
sort_buffer_size = 1M
read_buffer_size = 1M
join_buffer_size = 2M
thread_stack = 192K
thread_cache_size = 8
tmp_table_size = 64M
max_heap_table_size = 64M
myisam-recover = BACKUP
max_connections = 300
wait_timeout = 1200
query_cache_type = 1
query_cache_min_res_unit = 2k
query_cache_limit = 4M
query_cache_size = 512M
table_open_cache = 5000
innodb_buffer_pool_size = 2048M
innodb_buffer_pool_instances = 10
join_buffer_size - from 1M to 2M, because Tuning Primer says there are joins without indexes; until I find them, the larger buffer will help (the final plan is to add the missing indexes for those joins)
max_connections - from 200 to 300, because Tuning Primer says the highest connection usage has exceeded max_connections; this also increases MySQL's memory usage
innodb_buffer_pool_instances - from 8 to 10, because I want to allow MySQL to use more of the available RAM and cores
My questions are:
Do these settings make sense / are they optimal?
Is a large query_cache_size and query_cache_limit a problem even when you have a lot of RAM?
Why does CPU load sometimes increase while RAM usage does not?
When any modification to a table occurs, all references to that table are purged from the Query Cache. The bigger the cache, the longer that takes. Hence it is counterproductive to increase the size of the QC. (I say no more than 50M, but that is rather arbitrary.)
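If you want to see how the QC is actually behaving before shrinking it, the standard status counters are enough (nothing here is specific to your setup):
-- Hits vs inserts: a cache that inserts far more than it hits is wasted work
SHOW GLOBAL STATUS LIKE 'Qcache%';
-- Current QC settings, for comparison
SHOW GLOBAL VARIABLES LIKE 'query_cache%';
A steadily climbing Qcache_lowmem_prunes, or Qcache_inserts far above Qcache_hits, is a sign the cache is churning rather than helping.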
Large queries on large tables using MyISAM block all other accesses to the table. This may be causing your performance problem. Changing to InnoDB is likely to decrease the conflicts between connections.
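A minimal sketch of the conversion, using a hypothetical mydb.mytable as the target (information_schema works on any 5.x server):
-- List the MyISAM tables still left to convert
SELECT table_schema, table_name
FROM information_schema.tables
WHERE engine = 'MyISAM'
  AND table_schema NOT IN ('mysql', 'information_schema', 'performance_schema');
-- Convert one table at a time; the table is rebuilt, so do this off-peak
ALTER TABLE mydb.mytable ENGINE=InnoDB;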
Use the slowlog (with long_query_time = 1) to help you find the slow queries. Then, let's focus on improving them.
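On 5.6 the slowlog can be switched on at runtime without a restart; a sketch (the log path is an assumption, adjust to taste):
SET GLOBAL slow_query_log = ON;
SET GLOBAL slow_query_log_file = '/var/log/mysql/mysql-slow.log';
SET GLOBAL long_query_time = 1;  -- seconds; anything slower gets logged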
When "too many" connections are actively performing queries (see non-Sleep processes in SHOW PROCESSLIST or SHOW GLOBAL STATUS LIKE "Threads_running") the threads will be stumbling over each other and everything will take longer to finish. Your "high CPU" is consistent with this situation.
RAM usage grows until it hits pre-determined limits -- primarily the main caches: innodb_buffer_pool_size and key_buffer_size. After that, it fluctuates slightly as threads, buffers, etc, come and go.
Meanwhile, I/O is heavy if the tables don't fit in cache; light if they do. CPU is heavy for complex queries that hit lots of rows.
For 24GB of RAM (that is mostly available to MySQL) and a mixture of InnoDB and MyISAM, I would use
innodb_buffer_pool_size = 8G
innodb_buffer_pool_instances = 8
key_buffer_size = 2G
query_cache_size = 50M
That advice applies to most versions of MySQL and MariaDB. However, MySQL 8.0 has removed the QC entirely and is moving away from MyISAM.
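Those are all my.cnf settings, so after editing and restarting you can confirm they took effect with plain SHOW statements (valid on any of these versions):
SHOW GLOBAL VARIABLES LIKE 'innodb_buffer_pool%';
SHOW GLOBAL VARIABLES LIKE 'key_buffer_size';
SHOW GLOBAL VARIABLES LIKE 'query_cache_size';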
For a social sciences research project I'm using MySQL (5.5) on a dedicated Linux server with 8 GB of memory. The data consists of some 30 million records, resulting in a MyISAM source table of about 4 GB (MyISAM because the data is stable and transactions are not useful). My question is this: how can I prevent memory from being an unnecessary bottleneck?
At the current settings only about 20% of memory is ever used, but the right balance of my.ini settings is difficult to find, since many variables are interdependent. How can I get MySQL to use as much memory as possible, while reserving enough to prevent Linux from swapping?
current settings:
[mysqld]
max_connections = 3
performance_schema=on
default-storage-engine=MYISAM
local-infile=1
myisam_sort_buffer_size = 2048M
key_buffer_size = 2048M
tmp_table_size = 2048M
max_heap_table_size = 2048M
sort_buffer_size = 128M
read_buffer_size = 256M
read_rnd_buffer_size = 128M
join_buffer_size = 512M
thread_stack = 256KB
query_cache_size = 64M
query_cache_limit = 32M
table_open_cache= 256
table_definition_cache = 512
myisam_max_sort_file_size = 75G
Your best bet is to run MySQL Tuner on your database after it has been running and in use for a while. No one on SO will be able to give you relevant advice on optimizing your memory usage without knowing the shape of your database and its usage patterns. MySQL Tuner automates this.
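If you'd rather eyeball the raw numbers MySQL Tuner works from yourself, a couple of standard counters go a long way (not specific to your schema):
-- Key buffer pressure: Key_reads should be tiny relative to Key_read_requests
SHOW GLOBAL STATUS LIKE 'Key_read%';
-- Temp tables spilling to disk despite the large tmp_table_size
SHOW GLOBAL STATUS LIKE 'Created_tmp%';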
I have a Drupal application which has been running on a single MySQL database server for 12 months, and has been performing relatively well (apart from peak load events). We needed to be able to support much higher spikes than the current DB server allowed, and at 32GB there was not much gain to be had from simply vertically scaling the single DB server.
We decided to set up a new MariaDB Galera cluster with 2x 32 GB instances. We matched the configuration as far as possible with the soon-to-be-obsolete DB server.
After migrating to the new database servers, we noticed that the CPU usage on those instances was constantly at 100%, and load was steadily increasing. Over the course of 1 hour, load average went from 0.1 to 150.
Initially we thought it might have something to do with the synchronisation between servers, but even with one server turned off and no sync occurring, it was still maxing out the CPU as long as the web application was making requests to it.
After a lot of experimentation I found that reducing a few of the configuration options had a profound effect on the CPU usage and load. After making the below changes, the load average has stabilised between 4 and 6 on both instances.
The questions
What are some possible reasons for such a dramatic difference in CPU usage between the old and new servers, despite essentially migrating the configuration from the old server?
Load is currently hovering between 4 and 6 (and this is a low-traffic period for our website). What should I be looking at to try to reduce this value, and to ensure that the site won't fall over when it gets hit with some real traffic?
Config changes
innodb_buffer_pool_instances
Original value: 500 (there are 498 tables total in all databases)
New value: 92
table_cache
Original value: 8
New value: 4
max_connections
Original value: 1000
New value: 400
Current configuration
Here is the full configuration file from one of the servers /etc/mysql/my.cnf
[client]
port = 3306
socket = /var/run/mysqld/mysqld.sock
[mysqld_safe]
socket = /var/run/mysqld/mysqld.sock
nice = 0
[mysqld]
binlog_format=ROW
default-storage-engine=innodb
innodb_autoinc_lock_mode=2
query_cache_type=1
bind-address=0.0.0.0
max_connections = 400
wait_timeout = 600
key_buffer_size = 16M
max_allowed_packet = 16777216
max_heap_table_size = 512M
table_cache = 92
thread_stack = 196608
thread_cache_size = 8
myisam-recover = BACKUP
query_cache_limit = 1048576
query_cache_size = 128M
expire_logs_days = 10
general_log = 0
max_binlog_size = 10485760
server-id = 0
innodb_file_per_table
innodb_buffer_pool_size = 25G
innodb_buffer_pool_instances = 4
innodb_log_buffer_size = 8388608
innodb_additional_mem_pool_size = 8388608
innodb_thread_concurrency = 16
net_buffer_length = 16384
sort_buffer_size = 2097152
myisam_sort_buffer_size = 8388608
read_buffer_size = 131072
join_buffer_size = 131072
read_rnd_buffer_size = 262144
tmp_table_size = 512M
long_query_time = 1
slow_query_log = 1
slow_query_log_file = /var/log/mysql/mysql-slow.log
# Galera Provider Configuration
wsrep_provider=/usr/lib/galera/libgalera_smm.so
#wsrep_provider_options="gcache.size=32G"
# Galera Cluster Configuration
wsrep_cluster_name="xxx"
wsrep_cluster_address="gcomm://xxx.xxx.xxx.107,xxx.xxx.xxx.108"
# Galera Synchronization Configuration
wsrep_sst_method=rsync
#wsrep_sst_auth=user:pass
# Galera Node Configuration
wsrep_node_address="xxx.xxx.xxx.107"
wsrep_node_name="xxx01"
[mysqldump]
quick
quote-names
max_allowed_packet = 16777216
[isamchk]
key_buffer_size = 16777216
We ended up getting a Percona consultant to assist with this problem. The main issue they identified was that a large number of EXPLAIN queries were being executed. It turned out this was some debugging code that had been left enabled (devel.module query logging, for Drupal devs). Disabling it saw CPU usage fall off a cliff.
There were a number of additional fixes which they recommended we implement.
Add a third node to the cluster to act as an observer and maintain the integrity of the cluster.
Add primary keys to tables that do not have one.
Change MyISAM tables to InnoDB.
Change wsrep_sst_method from rsync to xtrabackup-v2.
Set innodb_log_file_size to 512M.
Set innodb_flush_log_at_trx_commit to 2 as the cluster maintains the integrity of the data.
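For reference, only the last of those is a dynamic variable; a sketch of applying it without a restart (the log file size and SST method changes have to go into my.cnf, and resizing the InnoDB log files needs a clean shutdown first):
-- Durability relaxed to roughly once-per-second flushes; acceptable here
-- because the cluster, not the single node, guarantees the data
SET GLOBAL innodb_flush_log_at_trx_commit = 2;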
I hope this information helps anyone who runs into similar issues.
innodb_buffer_pool_instances should not be a function of the number of tables. The manual advocates that each instance be no smaller than 1GB. So, I suggest that even 92 is much too high. But my.cnf says only innodb_buffer_pool_instances = 4??
table_cache = 92
Maybe your comments are messed up? 500 would be more reasonable for table_open_cache. (table_cache is the old name.)
This may be the problem:
query_cache_size = 128M
Whenever a write occurs, all entries in the QC for the table(s) involved are purged from the QC. Recommend no more than 50M. Or, better yet, turn the QC off completely.
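Turning it off can be done at runtime on most 5.x builds; a sketch (make it permanent with query_cache_size=0 and query_cache_type=0 in my.cnf):
SET GLOBAL query_cache_size = 0;   -- frees the QC memory immediately
SET GLOBAL query_cache_type = OFF; -- stop caching new result sets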
You have the slowlog turned on. What does pt-query-digest say are the top couple of queries? (This may be your best way to get a handle on the problem.)
Hello experts, please help.
The customers page and the catalog page of my site take far too long to open, and I'm having trouble fixing it.
Here are the details of the site:
- Magento Enterprise platform
- Around one million registered customers in the database
- Around 10 thousand products in the catalog
The slowest page to load is the customers page, which lists the roughly one million registered customers. Please help by suggesting how to fix this.
Maybe the problem is related to the database because of the large amount of data; if so, what are the best steps to take?
Thank you.
Optimizing your MySQL configuration (your my.cnf file) could improve the performance of your site. The following is the configuration that some colleagues and I have used with success (adjust according to the resources available to you). Of particular importance, I think, are the innodb_buffer_pool_size and query_cache_size.
max_connections = 500
myisam_sort_buffer_size = 128M
key_buffer_size = 768M
join_buffer_size = 4M
read_buffer_size = 4M
read_rnd_buffer_size=16M
sort_buffer_size = 8M
table_cache = 1224
thread_cache_size = 32
wait_timeout = 100
interactive_timeout = 100
connect_timeout = 100
tmp_table_size = 256M
max_heap_table_size = 128M
max_allowed_packet = 2M
max_connect_errors = 999999999
thread_concurrency = 8
query_cache_limit = 2M
query_cache_size = 256M
query_cache_type = 1
query_prealloc_size = 16384
query_alloc_block_size = 16384
skip-name-resolve
expire_logs_days = 7
log-slow-queries = /var/log/mysqld.slow.log
innodb_thread_concurrency = 16
innodb_buffer_pool_size = 4G
innodb_additional_mem_pool_size=20M
innodb_log_file_size = 384M
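To sanity-check the two settings I called out once the site has been running for a while, these standard counters are worth watching (nothing Magento-specific):
-- Innodb_buffer_pool_reads (from disk) should be tiny next to ..._read_requests
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';
-- A steadily rising value here means the query cache is too small or thrashing
SHOW GLOBAL STATUS LIKE 'Qcache_lowmem_prunes';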
We are on RHEL 5.4 64-bit, with 16 GB of RAM and 6x AMD Opteron CPUs.
So we have been experiencing this issue:
http://imgur.com/LMHi4
As you can see, swap/paging slowly creeps up, and eventually this causes a problem. The large dip is when mysqld was restarted. Nothing else is running on this system.
We are mainly using InnoDB, with the following config:
key_buffer = 512M
max_allowed_packet = 128M
thread_stack = 192K
thread_cache_size = 8
table_cache = 812
thread_concurrency = 10
query_cache_limit = 4M
query_cache_size = 512M
join_buffer_size = 512K
innodb_additional_mem_pool_size = 16M
innodb_buffer_pool_size = 10G
innodb_file_io_threads = 4
innodb_thread_concurrency = 12
innodb_flush_log_at_trx_commit = 1
innodb_log_buffer_size = 8M
innodb_log_file_size = 512M
innodb_log_files_in_group = 3
innodb_max_dirty_pages_pct = 90
innodb_lock_wait_timeout = 120
I have heard a lot about turning off "swappiness" or setting it to 10, but wouldn't that just invite the OOM killer to take out mysqld?
Why is this happening?
Just to answer my own question, a year or so after the fact, from experience:
MySQL has a tendency to swap, slowly, especially when it's configured to use most of the available memory.
In this case I lowered innodb_buffer_pool_size to a bit below what the documentation recommends (< 80% of physical memory) and also changed the swappiness to 10.
Setting swappiness to 10 does not cause an OOM: the system still uses swap when needed, just not as aggressively.
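For reference, the swappiness change is a one-line kernel setting; a sketch of the /etc/sysctl.conf entry we used (apply it with sysctl -p or a reboot):
# Make the kernel much less eager to swap out mysqld's memory
vm.swappiness = 10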