Periodic MySQL lockup when WordPress is under heavy load

I have a MySQL 5.1.61 database running behind two load-balanced Apache web servers hosting a fairly busy (100K uniques per day) WordPress site. I'm caching with Cloudflare, W3TC, and Varnish. Most of the time, the database server handles traffic very well; "show full processlist" shows 20-40 queries at any given time, most of them in the Sleep state.
Periodically, though (particularly when traffic spikes or when a large number of comments are cleared), MySQL stops responding. I'll find 1000-1500 queries running, many in the "Sending data" state, and so on. No particular query seems to be straining the database (they're all standard WordPress queries); it just seems like the sheer volume of simultaneous requests causes every query to hang. I'm usually still able to log in and run "show full processlist" or other queries, but the 1000+ queries already there just sit. The only solution seems to be to restart MySQL (sometimes violently, via kill -9, if I can't connect).
All tables are InnoDB, the server has 8 cores, 24GB of RAM, and plenty of disk space, and the following is my my.cnf:
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
port=3306
skip-external-locking
skip-name-resolve
user=mysql
query_cache_type=1
query_cache_limit=16M
wait_timeout = 300
query_cache_size=128M
key_buffer_size=400M
thread_cache_size=50
table_cache=8192
skip-name-resolve
max_heap_table_size = 256M
tmp_table_size = 256M
innodb_file_per_table
innodb_buffer_pool_size = 5G
innodb_log_file_size=1G
#innodb_commit_concurrency = 32
#innodb_thread_concurrency = 32
innodb_flush_log_at_trx_commit = 0
thread_concurrency = 8
join_buffer_size = 256k
innodb_log_file_size = 256M
#innodb_concurrency_tickets = 220
thread_stack = 256K
max_allowed_packet=512M
max_connections=2500
# Default to using old password format for compatibility with mysql 3.x
# clients (those using the mysqlclient10 compatibility package).
old_passwords=1
#2012-11-03
#attempting a ram disk for tmp tables
tmpdir = /db/tmpfs01
[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
Any suggestions on how I can improve the MySQL configuration, or other steps I can take to keep the database stable under heavy load?

As has been said, think outside the box and do some digging into why these queries are slow or hung. An oldie but still a good source of problems, even for (supposedly ;)) experienced system engineers, is load balancing causing issues across web server or database sessions. With all that caching and load balancing going on, are you sure everything is always connecting end-to-end as intended?

I agree with alditis & Bjoern.
I'm fairly new to MySQL, but running mysqltuner can reveal some configuration optimisations based on the database's recent queries: https://github.com/rackerhacker/MySQLTuner-perl
And if possible, store the DB files on a physically separate partition from the OS; the OS can consume I/O, which slows the DB down, as with Bjoern's logrotate issue.

First, have a look at basic system behavior at the moment the problem occurs. Use both vmstat and iostat to see if you can find any issues. Check whether the system starts swapping (the si/so columns in vmstat) and whether a lot of I/O is happening. This is the first step in debugging your problem.
Another source of useful information is SHOW ENGINE INNODB STATUS. See http://www.mysqlperformanceblog.com/2006/07/17/show-innodb-status-walk-through/ for how to interpret the output.
It might be that at a certain point in time your writes are killing read performance because they invalidate the query cache.
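When the stall is actually happening, a quick round of status queries from a spare connection (on top of vmstat/iostat) can show whether you are blocked on row locks, I/O, or sheer thread count. A minimal sketch, run from the mysql client:
SHOW FULL PROCESSLIST;                        -- what the 1000+ threads are actually doing
SHOW ENGINE INNODB STATUS\G                   -- check the SEMAPHORES and TRANSACTIONS sections
SHOW GLOBAL STATUS LIKE 'Threads_running';    -- threads actively executing right now
SHOW GLOBAL STATUS LIKE 'Innodb_row_lock%';   -- current and cumulative row-lock waits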

Related

my.cnf optimization for this server configuration

I am having many problems with MySQL: high memory usage and especially high CPU usage.
I have a dedicated server with the following configuration:
8 CPUs, Intel(R) Xeon(R) CPU E3-1240 v6 @ 3.70GHz
16GB DDR3
OS: Linux with cPanel/WHM + MySQL
Here is my my.cnf file:
[mysqld]
max_connections=1000
wait_timeout=1000
interactive_timeout=1000
long_query_time=50
slow_query_log = 0
#slow_query_log_file = /var/log/mysql/slow-query.log
default-storage-engine=MyISAM
log-error=/var/lib/mysql/dc.leilaoweb.com.err
max_allowed_packet=268435456
local-infile=0
event_scheduler = on
tmp_table_size=300M
max_heap_table_size=128M
open_files_limit=65000
performance-schema=1
innodb_file_per_table=1
innodb_log_file_size=512M
innodb_buffer_pool_size=8G
key_buffer_size=512M
innodb_buffer_pool_instances=8
# config cache
query_cache_limit=8M
query_cache_size=256M
query_cache_type=1
table_open_cache=6000
table_cache=5000
thread_cache_size=96
#bind-address=127.0.0.1
#skip-networking
#performance_schema=ON
skip-name-resolve
How could I improve these settings to make queries faster without raising the server load?
It's a funny thing to ask for help with query optimization without mentioning a single specific query.
Here are some tips on configuration options:
default-storage-engine=MyISAM
Change the default storage engine to InnoDB, and make sure all your existing tables are InnoDB. Don't use MyISAM.
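A minimal sketch of how you might find and convert the remaining MyISAM tables (the schema and table names in the ALTER are placeholders; each ALTER rebuilds and locks the table while it runs):
-- List the tables still using MyISAM, excluding the system schemas
SELECT table_schema, table_name
FROM information_schema.tables
WHERE engine = 'MyISAM'
  AND table_schema NOT IN ('mysql', 'information_schema', 'performance_schema');
-- Convert them one at a time
ALTER TABLE mydb.mytable ENGINE=InnoDB;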
query_cache_size=256M
query_cache_type=1
Set the query cache size and type to 0. The query cache is useful in such rare conditions that it has been deprecated, and it was removed entirely in MySQL 8.0. It's better to cache query results in your application code, on a case-by-case basis.
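A sketch of how to switch it off; depending on the MySQL version these may need to go into my.cnf followed by a restart rather than being set at runtime:
SET GLOBAL query_cache_size = 0;
SET GLOBAL query_cache_type = OFF;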
innodb_buffer_pool_size=8G
If you have a lot more data than 8G, consider increasing the size of the buffer pool. The more of your data and indexes that reside in RAM, the better the performance will be. But there's no further benefit to adding RAM once your data and indexes are 100% cached in the buffer pool.
And of course do not overallocate the buffer pool such that it causes the server to start swapping. That will kill performance (or else Linux's OOM killer will terminate mysqld if you have no swap).
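A rough way to see how your total data plus indexes compare with the 8G buffer pool (a sketch; the information_schema figures are estimates for InnoDB):
-- Approximate data + index size per storage engine, in GB
SELECT engine,
       ROUND(SUM(data_length + index_length) / 1024 / 1024 / 1024, 2) AS total_gb
FROM information_schema.tables
WHERE table_schema NOT IN ('mysql', 'information_schema', 'performance_schema')
GROUP BY engine;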
key_buffer_size=512M
No need for extra memory allocated to the key buffer if you don't use MyISAM.
There may be other tuning parameters that can give benefit, but since you have said nothing about your queries or server activity, there's no way to guess what those would be.
You're better off focusing on indexes and query design.
In general, optimization naturally improves some queries at the expense of others, so you can only form an optimization strategy once you know which queries you need to optimize for.
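As a starting point, take the queries flagged by the slow query log, run EXPLAIN on them, and add indexes that match their WHERE/JOIN columns. A sketch with hypothetical table and column names:
-- Hypothetical slow query: check whether it can use an index
EXPLAIN SELECT * FROM orders WHERE customer_id = 42 AND status = 'open';
-- If EXPLAIN reports a full table scan (type: ALL), a composite index may help
ALTER TABLE orders ADD INDEX idx_customer_status (customer_id, status);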

Mysql "memory usage" increasing and increasing

I have a really big website, built the old-fashioned way with PHP & MySQL.
I have more than 1,000 different queries across the PHP pages of my website, and it's really hard to update all of them to MySQLi.
I bought a VPS server with 4GB of RAM, and over the past months I have experienced really slow page loads.
When I restart my server everything runs smoothly, but after a couple of hours/days the website gets much slower, with load times of 3+ seconds per page. I've noticed that the mysqld service keeps increasing its memory usage: from 80MB at server restart it has reached about 400MB and more.
I put mysql_close() at the end of my index.php, but it seems like the connection count keeps increasing.
Questions
What can cause unlimited growth in MySQL's memory usage?
Would updating all my queries to MySQLi improve performance?
Some information:
innodb_version: 5.5.31
protocol_version: 10
slave_type_conversions:
version: 5.5.31-log
version_comment: MySQL Community Server (GPL)
version_compile_machine: x86_64
version_compile_os: Linux
Storage engines: mixed (some tables are InnoDB, some are MyISAM).
my.cnf:
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
user=mysql
max-connections=100000
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0
thread_cache_size=5
table_open_cache=99390
sort_buffer_size=512M
read_rnd_buffer_size=512M
query_cache_size=512M
query_cache_limit = 16M
query_cache_type = 1
slow_query_log=1
slow_query_log_file=slow_query_log.log #
long_query_time=5
log-queries-not-using-indexes=1
[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
I see about 6-7 queries running when I use SHOW PROCESSLIST.
max-connections=100000 -- Yikes! Drop to 1000
table_open_cache=99390 -- drop to, say, 2000
sort_buffer_size=512M -- drop to 1% of RAM, say, 40M (see the worst-case memory sketch after this list)
read_rnd_buffer_size=512M -- ditto
query_cache_size=512M -- too big; slows things down; drop to 40M
long_query_time=5 -- not low enough to catch much; drop to 2
log-queries-not-using-indexes=1 -- clutters the slowlog without providing much info; change to 0
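The reason those per-connection buffers matter so much is that sort_buffer_size, read_buffer_size, read_rnd_buffer_size, join_buffer_size, and the thread stack can each be allocated per connection, so they multiply by max_connections. A rough worst-case sketch (real usage is normally far lower, since not every connection allocates every buffer at once):
-- Back-of-the-envelope worst-case memory use, in GB
SELECT ROUND((@@key_buffer_size + @@query_cache_size + @@innodb_buffer_pool_size
        + @@max_connections * (@@sort_buffer_size + @@read_buffer_size
                               + @@read_rnd_buffer_size + @@join_buffer_size
                               + @@thread_stack)) / 1024 / 1024 / 1024, 1) AS worst_case_gb;
With max_connections=100000 and half-gigabyte sort and read buffers, the worst case here comes out at many terabytes, which is why those numbers need to come down.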
You did not say which Engine you are using. Read this for advice on MyISAM and InnoDB.
1000 pages -- that's not too many.
Which web server? If Apache, don't set MaxClients to more than 20.
2022 postscript: The query_cache_size and query_cache_type variables have been removed from MySQL 8.0.3+.

Why does MySQL use over 200,000 handles on startup?

I have a server (MS Windows Server 2012 R2 Datacenter, 64GB RAM, 2TB+ disk space) running MySQL 5.0. When I start the MySQL server, right off the bat it allocates 214,000 handles. Is that normal? I've been looking into this because I am trying to run an application that executes multiple unique queries over thousands of records, and it is just crawling.
I have changed query_cache_size from 160M to 0M in the my.ini file, as query caching will not benefit this application. Still no change in handles. I'm not sure what else I can do to fix this. Does anyone have any ideas?
The server is:
MySQL 5.0.60sp1-enterprise-gpl-nt
There are a ton of options. Here are what I think are the relevant ones (I could be wrong; I am not an expert):
[mysqld]
default_storage_engine=InnoDB
innodb_file_per_table
innodb_flush_method=unbuffered
lower_case_table_names=2
max_allowed_packet=48M
max_heap_table_size=64777216
max_connections=3010
query_cache_size=0M
table_cache=6020
tmp_table_size=16M
thread_cache_size=64
myisam_max_sort_file_size=100G
myisam_max_extra_sort_file_size=100G
key_buffer_size=20M
read_buffer_size=64K
read_rnd_buffer_size=256K
innodb_additional_mem_pool_size=15M
innodb_flush_log_at_trx_commit=1
innodb_buffer_pool_size=709M
innodb_thread_concurrency=50

MySQL restore performance

I have what seems to be a slowing MySQL restore, and am looking for some tuning advice (I am a PostgreSQL and SQL Server guy).
The dev server has 48GB of RAM, 8 cores, running Centos 6.2 64-bit and MySQL 5.1.61 (same as production MySQL), and 4 x 7200 RPM SAS drives in software managed RAID-10 / XFS. The only MySQL client process is the restore. The dump was taken with a plain mysqldump of all databases on the production server.
I have applied some of the options from http://derwiki.tumblr.com/post/24490758395/loading-half-a-billion-rows-into-mysql, including setting FOREIGN_KEY_CHECKS and UNIQUE_CHECKS to zero. I have included my.cnf below.
Monitoring the restore with mytop and pv (pv backup.sql | mysql -u root -p), it appears that the INSERT INTO statements get progressively slower. The qps shown by mytop starts at 3 and drops to 0 about 60% of the way through the dump file. I'm not sure how accurate mytop is in this case, since 3 inserts per second (with extended values) still seems slow. htop shows < 10% utilization on the CPU core used by MySQL, and less than 8GB of the 48GB of RAM is being used.
Different databases, but similar restore techniques, run about 5-10x faster on the same server using PostgreSQL.
Ideas?
[mysqld]
# my.cnf
socket=/var/lib/mysql/mysql.sock
user=mysql
symbolic-links=0
slow-query-log
long_query_time = 60
log-slow-admin-statements
slow_query_log_file = /var/log/mysql_slow.log
innodb_buffer_pool_size = 2G
max_allowed_packet = 1G
key_buffer_size = 1G
concurrent_insert = 1
innodb_flush_log_at_trx_commit = 2
bulk_insert_buffer_size = 1G
innodb_flush_method = O_DIRECT
Sounds like your InnoDB secondary indexes are slowing you down. If you can change the way you dump the database, you can drop all non-primary-key indexes, load the data, and then re-add them. Better still, order the data to be loaded by primary key. This may be too much to ask, though.
It sounds like you are already aware of these tips: http://dev.mysql.com/doc/refman/5.5/en/optimizing-innodb-bulk-data-loading.html
Even with innodb_flush_log_at_trx_commit = 2, flushing to disk may still be happening frequently. Check that innodb_log_file_size * innodb_log_files_in_group is large enough to avoid flushing to disk too often.
(I assumed you are using InnoDB, based on your settings.)
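A minimal sketch of the checks and session settings involved, assuming the dump is loaded through the mysql client (the file name is a placeholder):
-- Check the current redo log capacity
SHOW VARIABLES LIKE 'innodb_log_file_size';
SHOW VARIABLES LIKE 'innodb_log_files_in_group';
-- Wrap the import in these session settings
SET foreign_key_checks = 0;
SET unique_checks = 0;
SET autocommit = 0;
source backup.sql
COMMIT;
SET unique_checks = 1;
SET foreign_key_checks = 1;
If you keep piping the dump in with pv instead, the SET statements can simply be prepended to the stream.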

MySQL always using maximum connections

I have a LAMP server with a 4-core CPU and 32 GB of RAM, and we are running a large website on it. I currently have the following issues on this server.
When I use the mysqlreport tool to monitor the MySQL server, I always see connection usage as below, and users are reporting connection issues on the website.
__ Connections _______________________________
Max used          251 of  250      %Max: 100.40
Total         748.71k     3.5/s
But when I use the "show processlist" command, it outputs nothing. We are using the MyISAM engine for all our DBs.
My MySQL config file is pasted below:
######################
[mysqld]
max_connections = 250
set-variable=local-infile=0
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
user=mysql
skip-name-resolve
skip-bdb
wait_timeout = 60
thread_cache_size = 100
table_cache = 1024
key_buffer = 384M
log_slow_queries=/mysql-log/mysql-slow.log
query-cache-size=512M
query-cache-type=1
query_cache_limit=20M
[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
#
Who is using my MySQL connection pool? How can I find out?
And I have another issue.
Sometimes the load average goes beyond the 4-8 range. See below:
13:40:02 up 2 days, 10:39, 0 users, load average: 5.03, 1.68, 0.93
At that time I can see that mysql is the top consumer of CPU. Is there any optimization needed on the MySQL server?
Please reply to my two questions above.
Thanks in advance,
Aruns
I noticed that you are already using the MySQL query cache.
Have you tried using MySQL Workbench to connect to your MySQL database? It offers a graphical way of inspecting your server, including the process list.
If you are behind a firewall and can't use Workbench, try:
show full processlist
However, I think this will not really help.
I assume you are using PHP with MySQL to serve your web pages, which means most of the MySQL connections are made from PHP. To see how many Apache processes are running at one time, you can try:
ps aux | grep httpd | wc -l
If you have far more Apache threads connecting to MySQL than you expect, then you know you have a problem.
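To see who is actually holding the MySQL connections, grouping the process list by user and host helps (a sketch; the information_schema.PROCESSLIST table is available from MySQL 5.1 on):
-- Count open connections per user and client host
SELECT user, SUBSTRING_INDEX(host, ':', 1) AS client_host, COUNT(*) AS connections
FROM information_schema.processlist
GROUP BY user, client_host
ORDER BY connections DESC;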
You mentioned that you have a busy site, so the real answer to your problem is to cache your content, probably using memcached, to reduce the number of hits on your MySQL server. Your server has plenty of RAM and is perfect for memcached.
The content is reused for a certain amount of time, depending on how fresh it needs to be:
<?php
// $memcache is assumed to be an already-connected Memcache instance
$cachedContent = $memcache->get("cacheKey");
if (!$cachedContent) {
    // retrieve from MySQL and build the HTML here;
    // ob_start() lets you reuse your existing code unchanged
    ob_start();
    // your previous code here, echoing as usual
?>
    <div>
        previously generated content from MySQL
    </div>
<?php
    // $cachedContent now contains the generated HTML
    $cachedContent = ob_get_contents();
    // store it in memcache for 30 minutes (1800 seconds)
    $memcache->set("cacheKey", $cachedContent, false, 1800);
    // discard the buffer so the content is only sent once, by the echo below
    ob_end_clean();
}
echo $cachedContent;
?>
You need to decide which content to cache first. Good places to start are:
Inefficient bits on the index.php page (I assume this will be one of the most-hit pages)
Check your Google Analytics for the most-hit pages.
Check your MySQL slow query log and cache that content (see the note below).
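For the slow-query angle, the log is already configured in the my.cnf above (log_slow_queries=/mysql-log/mysql-slow.log); lowering the threshold catches more of the queries worth caching. A sketch (long_query_time can be changed at runtime on this MySQL generation, but other slow-log settings may need a my.cnf change and restart):
-- Log anything slower than 2 seconds instead of the default 10
SET GLOBAL long_query_time = 2;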
Add the variables below to your my.cnf.
If you are going to use only the MyISAM engine, the variables below should give the best results for your hardware configuration:
max_allowed_packet = 1M
table_open_cache = 250
sort_buffer_size = 2M
thread_stack = 128K
join_buffer_size = 1M
query_cache_limit = 400k
query_cache_size = 300M
key_buffer_size = 5G
read_buffer_size = 2M
read_rnd_buffer_size = 2M
bulk_insert_buffer_size = 8M
myisam_sort_buffer_size = 8M
myisam_max_sort_file_size = 6G
myisam_recover=FORCE,BACKUP
But depending on your DB size and on how your application accesses (fetches) the data, the above variables can be adjusted further.
Also reduce wait_timeout to 30.
Regarding this, I really don't have any clue:
Connections _________________
Max used 251 of 250 %Max: 100.40 Total 748.71k 3.5/s
However, SHOW PROCESSLIST\g should list the processes connected to the DB in any state (Sleep, reading, etc.).
Add the above variables to your my.cnf and restart the server.