I have a dedicated Linux server with 4 cores and 8GB of RAM. I have a web portal developed in core PHP and MySQL. A single page issues roughly 15-20 queries, and each page opens its own MySQL connection. My problem is that the mysql service is constantly using around 50% CPU, and sometimes 160-200%. When it reaches 200%, the server hangs and needs a restart.
SHOW PROCESSLIST; shows no queries pending.
I have optimized all tables with OPTIMIZE TABLE. Some tables are InnoDB while others are MyISAM. The slow query log is tracking queries that take longer than 1 second; it shows only a few queries, and none take more than 2-3 seconds.
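For reference, the checks described above can be run from the mysql client like this (a sketch; the status variables are standard MySQL ones, nothing specific to this setup):

SHOW FULL PROCESSLIST;                      -- look for long-running or stuck threads
SHOW GLOBAL STATUS LIKE 'Threads_running';  -- threads actually executing right now
SHOW ENGINE INNODB STATUS\G                 -- pending IO, lock waits, buffer pool activity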
My my.cnf contains:
default-storage-engine=MyISAM
innodb_file_per_table=1
performance-schema=0
max_allowed_packet=268435456
slow_query_log=1
slow_query_log_file=/var/lib/mysql/slow.log
long_query_time=1
log_queries_not_using_indexes=0
log_error=/var/lib/mysql/mysql_error.log
[mysqld_safe]
log_error=/var/lib/mysql/mysql_safe_error.log
I'm running a Node.js server with MySQL (InnoDB) for a game backend (player info, save data, and so on). The server is HTTP(S) based, so nothing realtime.
I'm having these weird spikes, as you can see from the graphs below (the first graph is requests/sec and the last graph is queries/sec).
On the response time graph you can see max response times in purple and average response times in blue. Even with those 10-20k ms peaks, the average stays at 50-100ms, as do 95% of the requests.
I've been digging around and found that the slow queries are nothing special: usually an UPDATE writing save data (a blob of ~2 KB) or a player profile update that modifies something like the username. No joins or anything like that. We're talking about tables with fewer than 100k rows.
The server is running in Azure on Ubuntu 14.04 with MySQL 5.7, using 4 cores and 7GB of RAM.
MySQL settings:
innodb_buffer_pool_size=4G
innodb_log_file_size=1G
innodb_buffer_pool_instances=4
innodb_log_buffer_size=4M
query_cache_type=0
tmp_table_size=64M
max_heap_table_size=64M
sort_buffer_size=32M
wait_timeout=300
interactive_timeout=300
innodb_file_per_table=ON
Edit: It turned out that the problem was never MySQL performance but Node.js performance before the SQL queries were even issued. More info here: Node.js multer and body-parser sometimes extremely slow
Check your swappiness (it should be 0 on MySQL machines that are maximizing RAM usage):
> sysctl -A|grep swap
vm.swappiness = 0
With only 7G of RAM and 4G of it going to the buffer pool alone, your machine will swap if swappiness is not zero.
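To set it to zero (standard sysctl usage; appending the line to /etc/sysctl.conf makes the change persist across reboots):

sysctl -w vm.swappiness=0
echo 'vm.swappiness = 0' >> /etc/sysctl.conf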
Could you post your swap graph and used memory? A 4G buffer pool is "over the edge" for 7G of RAM. For 8G of RAM I would give it 3G, since you have roughly another 1G of other MySQL memory use plus 2G for the OS.
Also, you have 1G per transaction log file, and I assume you have two log files. Do you really have enough writes to justify files that large? You can use this guide: https://www.percona.com/blog/2008/11/21/how-to-calculate-a-good-innodb-log-file-size/
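A rough way to do that sizing check, following the Percona guide, is to sample the log sequence number twice, 60 seconds apart, inside the mysql client:

pager grep sequence
SHOW ENGINE INNODB STATUS\G SELECT SLEEP(60); SHOW ENGINE INNODB STATUS\G
-- the difference between the two "Log sequence number" values is bytes written
-- per minute; size the combined redo logs to hold roughly an hour of writes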
I have a really big website built the old-fashioned way with PHP & MySQL.
I have more than 1,000 different queries across different PHP pages, and it's really hard to update all of them to MySQLi.
I bought a VPS with 4GB of RAM, and in the past months I have been experiencing really slow page loads.
When I restart the server everything runs smoothly, but after a couple of hours/days the website gets much slower, with load times of 3+ seconds per page. I've noticed that the mysqld service keeps growing in memory usage: from 80MB at server restart it reaches about 400MB and more.
I put mysql_close() at the end of my index.php, but it seems like the connection count is still increasing.
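To confirm whether connections are really piling up, the standard status counters can be watched from the mysql client (a sketch, not specific to this setup):

SHOW GLOBAL STATUS LIKE 'Threads_connected';     -- connections open right now
SHOW GLOBAL STATUS LIKE 'Max_used_connections';  -- high-water mark since restart
SHOW GLOBAL STATUS LIKE 'Aborted_connects';      -- failed connection attempts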
Questions
What can cause unbounded growth in MySQL memory usage?
Would updating all my queries to MySQLi improve performance?
Some information:
innodb_version            5.5.31
protocol_version          10
slave_type_conversions
version                   5.5.31-log
version_comment           MySQL Community Server (GPL)
version_compile_machine   x86_64
version_compile_os        Linux
Storage engines: mixed (some tables are InnoDB, some are MyISAM).
my.cnf:
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
user=mysql
max-connections=100000
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0
thread_cache_size=5
table_open_cache=99390
sort_buffer_size=512M
read_rnd_buffer_size=512M
query_cache_size=512M
query_cache_limit = 16M
query_cache_type = 1
slow_query_log=1
slow_query_log_file=slow_query_log.log #
long_query_time=5
log-queries-not-using-indexes=1
[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
I see about 6-7 queries running when I use SHOW PROCESSLIST.
max-connections=100000 -- Yikes! Drop to 1000
table_open_cache=99390 -- drop to, say, 2000
sort_buffer_size=512M -- drop to 1% of RAM, say, 40M
read_rnd_buffer_size=512M -- ditto
query_cache_size=512M -- too big; slows things down; drop to 40M
long_query_time=5 -- not low enough to catch much; drop to 2
log-queries-not-using-indexes=1 -- clutters the slowlog without providing much info; change to 0
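Applied to the my.cnf above, those suggestions would look roughly like this (a sketch of the recommended values, not a tuned or tested configuration):

[mysqld]
max-connections=1000              # was 100000
table_open_cache=2000             # was 99390
sort_buffer_size=40M              # was 512M; about 1% of 4GB RAM
read_rnd_buffer_size=40M          # was 512M
query_cache_size=40M              # was 512M
long_query_time=2                 # was 5
log-queries-not-using-indexes=0   # was 1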
You did not say which Engine you are using. Read this for advice on MyISAM and InnoDB.
1000 pages -- that's not too many.
Which web server? If Apache, don't set MaxClients to more than 20.
2022 postscript: The query_cache_size and query_cache_type variables were removed in MySQL 8.0.3+.
I am currently using MySQL. I have noticed several times that my front-end application runs slow after some usage. When I checked the server status in MySQL Workbench, I saw that InnoDB buffer usage was going to 100%, so I increased the innodb_buffer_pool_size parameter to 1G in the my.ini file of XAMPP. But InnoDB is not flushing the buffer, and the application still runs slow after some time. Are there any other parameters to change as well?
Consider setting innodb_buffer_pool_size to 70%-80% of available RAM. Depending on how big your dataset is, you may need to increase it further.
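To judge whether the pool is large enough for the working set, the standard InnoDB status counters and a data-size query can help (a sketch; nothing here is specific to XAMPP):

SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_reads';          -- reads that had to hit disk
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read_requests';  -- logical read requests
-- if disk reads are more than ~1% of read requests, the pool is likely too small
SELECT ROUND(SUM(data_length + index_length)/1024/1024) AS innodb_mb
  FROM information_schema.tables WHERE engine = 'InnoDB';    -- total InnoDB data + index size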
I have a MySQL 5.1.61 database running behind two load-balanced Apache web servers hosting a fairly busy (100K uniques per day) WordPress site. I'm caching with Cloudflare, W3TC, and Varnish. Most of the time, the database server handles traffic very well: "show full processlist" shows 20-40 queries at any given time, with most in the sleep state.
Periodically, though (particularly when traffic spikes or when a large number of comments are cleared), MySQL stops responding. I'll find 1000-1500 queries running, many in the "Sending data" state, etc. No particular query seems to be straining the database (they're all standard WordPress queries); it just seems like the sheer volume of simultaneous requests causes all queries to hang. I'm (usually) still able to log in and run "show full processlist" or other queries, but the 1000+ queries already in there just sit. The only solution seems to be to restart mysql (sometimes violently via kill -9 if I can't connect).
All tables are InnoDB, the server has 8 cores, 24GB of RAM, and plenty of disk space, and the following is my my.cnf:
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
port=3306
skip-external-locking
skip-name-resolve
user=mysql
query_cache_type=1
query_cache_limit=16M
wait_timeout = 300
query_cache_size=128M
key_buffer_size=400M
thread_cache_size=50
table_cache=8192
skip-name-resolve
max_heap_table_size = 256M
tmp_table_size = 256M
innodb_file_per_table
innodb_buffer_pool_size = 5G
innodb_log_file_size=1G
#innodb_commit_concurrency = 32
#innodb_thread_concurrency = 32
innodb_flush_log_at_trx_commit = 0
thread_concurrency = 8
join_buffer_size = 256k
innodb_log_file_size = 256M
#innodb_concurrency_tickets = 220
thread_stack = 256K
max_allowed_packet=512M
max_connections=2500
# Default to using old password format for compatibility with mysql 3.x
# clients (those using the mysqlclient10 compatibility package).
old_passwords=1
#2012-11-03
#attempting a ram disk for tmp tables
tmpdir = /db/tmpfs01
[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
Any suggestions on how I can improve the MySQL config, or other steps I can take to maintain database stability under heavy load?
As has been said, think outside the box and do some rooting around into why these queries are slow or somehow hung. An oldie but still a good source of problems, even for (supposedly ;) intelligent system engineers, is load balancing causing issues across web server or database sessions. With all that caching and load balancing going on, are you sure everything is always connecting end-to-end as intended?
I agree with alditis & Bjoern.
I'm pretty noobish with MySQL, but running mysqltuner can reveal some config optimisations based on the DB's recent queries: https://github.com/rackerhacker/MySQLTuner-perl
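A typical way to run it (a sketch; the script prompts for MySQL admin credentials if it cannot find any, and the server needs to have been up for a while to produce meaningful advice):

git clone https://github.com/rackerhacker/MySQLTuner-perl.git
cd MySQLTuner-perl
perl mysqltuner.pl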
And if possible, store the DB files on a physically separate partition from the OS; the OS can consume IO, which slows the DB, as with Bjoern's logrotate issue.
First have a look at basic system behavior at the moment the problems occur. Use both vmstat and iostat to see if you can find any issues. Check whether the system starts swapping (the si/so columns in vmstat) and whether lots of IO is happening. This is the first step in debugging your problem.
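For example, sampling every 5 seconds while the problem is occurring (standard vmstat/iostat usage):

vmstat 5       # si/so columns show swapping, wa shows IO wait
iostat -x 5    # %util and await per device show disk saturation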
Another source of useful information is SHOW INNODB STATUS. See http://www.mysqlperformanceblog.com/2006/07/17/show-innodb-status-walk-through/ for how to interpret the output.
It might be that at certain points in time your writes are killing read performance because they invalidate the query cache.
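The query cache counters make that easy to check (MySQL 5.x; the query cache no longer exists in 8.0):

SHOW GLOBAL STATUS LIKE 'Qcache%';
-- compare Qcache_hits with Qcache_inserts and Qcache_lowmem_prunes; heavy write
-- traffic invalidates cached results for every modified table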
We are running 2 web servers that host a Magento eCommerce site and 1 MySQL database server on Amazon EC2.
We are experiencing major performance issues, deadlocks, 'lock wait timeout exceeded' errors, etc. on the MySQL server, and we are really struggling to get them resolved.
We have recently upgraded the DB server to an m1.xlarge instance (from m1.large), and we are still experiencing these problems.
We've been attributing these issues to the bad disk IO we often see on EC2 servers, but recently I've seen deadlocks and other issues even when the disk IO is fine.
The "sar" command shows that we have pretty poor disk IO performance at peak times or when we perform database-intensive operations like creating invoices via the Magento API. We often see iowait go above 20%.
Below is a link to a screenshot showing the results of "mtop" during a recent problem where a query caused a slowdown of the entire database:
http://i.imgur.com/AARlc.png
This screenshot shows one or another query holding up the rest of the queries from executing. It also shows quite a low load average; we often see the load average go up to 3.0 when an intensive command is being executed.
Here are the my.cnf settings:
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
user=mysql
symbolic-links=0
innodb_file_per_table=1
key_buffer=512M
max_allowed_packet=64M
table_cache=512
innodb_thread_concurrency=5
innodb_buffer_pool_size=4976M
innodb_additional_mem_pool_size=8M
innodb_log_file_size=128M
innodb_log_buffer_size=8M
thread_cache_size=150
sort_buffer_size=4M
read_buffer_size=4M
read_rnd_buffer_size=2M
myisam_sort_buffer_size=64M
tmp_table_size=256M
query_cache_type=1
query_cache_size=128M
max_connections=400
wait_timeout=28800
innodb_lock_wait_timeout=120
max_heap_table_size=256M
long_query_time=3
log-slow-queries=...mysql-slow.log
[mysqld_safe]
log-error=...mysqld.log
pid-file=...mysqld.pid
We have used the pt-query-digest tool extensively to analyze our MySQL slow query log.
Basically we are seeing that the sales_flat_quote table is extremely slow for updates and inserts, but so are a number of other tables.
sales_flat_quote is not particularly large, though; there are only around 100k rows in the table.
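For reference, a typical pt-query-digest run against the slow log looks like this (the path is a placeholder for whatever log-slow-queries points to in the my.cnf above):

pt-query-digest /path/to/mysql-slow.log > slow-digest.txt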
Several root causes are possible:
Some of your slow queries may be locking tables, thus queueing other queries
Your queries may not be optimized
Your queries may need more indexes on some tables
Check your slowest queries using this official tool:
mysqldumpslow
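A typical invocation, summarizing the top 10 entries sorted by total query time (again, the log path is a placeholder for the one configured in my.cnf):

mysqldumpslow -s t -t 10 /path/to/mysql-slow.log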
We observed similar hogs on our MySQL EC2 server; however, we quickly migrated our database to an RDS instance. Since then there have been very few problems. One might argue that RDS is costly and EC2 is not, but you also save the time spent on managing databases, daily backups, etc.
I recommend migrating your database to an RDS instance.