MySQL always using maximum connections - mysql

I have a LAMP server with a 4-core CPU and 32 GB RAM. We are running a large website on it, and I now have the following issues with the server.
When I use the mysqlreport tool to monitor the MySQL server, I always see the connection usage shown below, and users are reporting connection issues on the website.
__ Connections _________________________________________________________
Max used          251 of  250      %Max: 100.40
Total         748.71k     3.5/s
But when I run the "SHOW PROCESSLIST" command, it outputs nothing. We are using the MyISAM engine for all our databases.
My MySQL config file is pasted below:
######################
[mysqld]
max_connections = 250
set-variable=local-infile=0
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
user=mysql
skip-name-resolve
skip-bdb
wait_timeout = 60
thread_cache_size = 100
table_cache = 1024
key_buffer = 384M
log_slow_queries=/mysql-log/mysql-slow.log
query-cache-size=512M
query-cache-type=1
query_cache_limit=20M
[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
#
Who is using my MySQL connection pool? How can I find out?
And I have another issue.
Sometimes the load average goes into the 4-8 range. See below:
13:40:02 up 2 days, 10:39, 0 users, load average: 5.03, 1.68, 0.93
At that time I can see that mysqld is the top consumer of CPU. Is any optimization needed on the MySQL server?
Please reply to my above two queries.
Thanks in advance,
Aruns

I noticed that you are already using the MySQL query cache.
Have you tried using MySQL Workbench to connect to your MySQL database? It offers a graphical way of inspecting your database, including the process list.
If you are behind a firewall, try using:
SHOW FULL PROCESSLIST
However, I think this alone will not really help.
I would assume that you are using PHP with MySQL to serve your web pages, which means most of the MySQL connections are made from PHP. To see how many Apache processes are running at one time, you can try:
ps aux |grep httpd |wc -l
If Apache is running far more processes, each connecting to MySQL, than your max_connections allows, then you know you have a problem.
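On the MySQL side, you can check the same thing by grouping the current connections by user and host. A quick sketch, assuming a MySQL version with information_schema.PROCESSLIST (5.1.7 or later); on older servers, just eyeball SHOW FULL PROCESSLIST instead:
-- How many connections exist right now, versus the configured ceiling
SHOW GLOBAL STATUS LIKE 'Threads_connected';
SHOW GLOBAL VARIABLES LIKE 'max_connections';

-- Who is holding connections, grouped by user and client host
SELECT USER, SUBSTRING_INDEX(HOST, ':', 1) AS client_host, COUNT(*) AS connections
FROM information_schema.PROCESSLIST
GROUP BY USER, client_host
ORDER BY connections DESC;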
You mentioned that you have a busy site, so the real answer to your problem is to cache your content, probably using memcached. The idea is to reduce the hits on your MySQL server. Your server has plenty of RAM, which is perfect for memcached.
Reuse the generated content for a certain amount of time, depending on how fresh it needs to be:
<?php
// Connect to memcached first (old Memcache extension; host/port are assumptions, adjust to your setup)
$memcache = new Memcache();
$memcache->connect('127.0.0.1', 11211);

$cachedContent = $memcache->get("cacheKey");
if ($cachedContent === false) {
    // Cache miss: retrieve from MySQL and build the HTML here.
    // Output buffering lets you reuse your existing code unchanged.
    ob_start();
    // your previous code here, echoing or emitting HTML, e.g.:
?>
<div>
previously generated content from MySQL
</div>
<?php
    // Capture the generated HTML and discard the buffer (avoids double output)
    $cachedContent = ob_get_clean();
    // Store it in memcache for 30 minutes (1800 seconds)
    $memcache->set("cacheKey", $cachedContent, false, 1800);
}
echo $cachedContent;
?>
You need to find which content to cache first. Good places to start are:
Inefficient bits on the index.php page (I assume this will be one of the most-hit pages).
Check your Google Analytics for the most-hit pages.
Check your MySQL slow query log and cache the content of those queries.

Add the variables below to your my.cnf. If you are going to use only the MyISAM engine, these values should give the best results for your hardware configuration:
max_allowed_packet = 1M
table_open_cache = 250
sort_buffer_size = 2M
thread_stack = 128K
join_buffer_size = 1M
query_cache_limit = 400k
query_cache_size = 300M
key_buffer_size = 5G
read_buffer_size = 2M
read_rnd_buffer_size = 2M
bulk_insert_buffer_size = 8M
myisam_sort_buffer_size = 8M
myisam_max_sort_file_size = 6G
myisam_recover=FORCE,BACKUP
But depending on your DB size and how your application accesses (fetches) the data, the variables above can be adjusted.
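For example, before settling on key_buffer_size you can check how large your MyISAM indexes actually are (a rough sizing sketch only; the key buffer caches index blocks, not data):
-- Total MyISAM data and index size, in MB
SELECT ROUND(SUM(DATA_LENGTH)  / 1024 / 1024) AS data_mb,
       ROUND(SUM(INDEX_LENGTH) / 1024 / 1024) AS index_mb
FROM information_schema.TABLES
WHERE ENGINE = 'MyISAM';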
Reduce wait_timeout to 30.
Regarding this, I really don't have any clue :(
__ Connections _________________________________________________________
Max used          251 of  250      %Max: 100.40
Total         748.71k     3.5/s
SHOW PROCESSLIST\g
should show the list of processes connected to the DB in any state (Sleep, Sending data, etc.).
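If the process list is empty while mysqlreport still reports 251 of 250, the connections are probably short-lived and already gone by the time you look. These counters (standard MySQL status variables) show how close you get to max_connections over time; just a diagnostic sketch:
-- High-water mark of simultaneous connections since the last restart
SHOW GLOBAL STATUS LIKE 'Max_used_connections';
-- Threads connected and actively running right now
SHOW GLOBAL STATUS LIKE 'Threads_connected';
SHOW GLOBAL STATUS LIKE 'Threads_running';
-- Clients that failed or were dropped while connecting
SHOW GLOBAL STATUS LIKE 'Aborted_connects';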
Add the above variables to your my.cnf and restart the server.

Related

Database performance drop after upgrade to MySQL 8.0.20

After upgrading MySQL from version 5.7 to 8.0, I found that database performance dropped significantly.
Before the upgrade, CPU usage was stable at around 30%, but after the upgrade CPU usage became unstable, with frequent large spikes.
Recently I also noticed something interesting: when I keep running the same query several times, the duration gets longer and longer with each run.
I have read a lot of articles and Stack Overflow posts, but none of the solutions really helped.
So I hope someone can share some ideas or experience on tuning MySQL 8.0 with me.
I would really appreciate it.
Please let me know if any further info is needed for the investigation.
Config my.ini:
key_buffer_size = 2G
max_allowed_packet = 1M
;Added to reduce memory used (minimum is 400)
table_definition_cache = 600
sort_buffer_size = 4M
net_buffer_length = 8K
read_buffer_size = 2M
read_rnd_buffer_size = 2M
myisam_sort_buffer_size = 2G
;Path to mysql install directory
basedir="c:/wamp64/bin/mysql/mysql8.0.20"
log-error="c:/wamp64/logs/mysql.log"
;Verbosity Value 1 Errors only, 2 Errors and warnings , 3 Errors, warnings, and notes
log_error_verbosity=2
;Path to data directory
datadir="c:/wamp64/bin/mysql/mysql8.0.20/data"
;slow_query_log = ON
;slow_query_log_file = "c:/wamp64/logs/slow_query.log"
;Path to the language
;See Documentation:
; http://dev.mysql.com/doc/refman/5.7/en/error-message-language.html
lc-messages-dir="c:/wamp64/bin/mysql/mysql8.0.20/share"
lc-messages=en_US
; The default storage engine that will be used when create new tables
default-storage-engine=InnoDB
; New for MySQL 5.6 default_tmp_storage_engine if skip-innodb enable
; default_tmp_storage_engine=MYISAM
;To avoid warning messages
secure_file_priv="c:/wamp64/tmp"
skip-ssl
explicit_defaults_for_timestamp=true
; Set the SQL mode to strict
sql-mode=""
;sql-mode="STRICT_ALL_TABLES,ERROR_FOR_DIVISION_BY_ZERO,NO_ZERO_DATE,NO_ZERO_IN_DATE,NO_AUTO_CREATE_USER"
;skip-networking
; Disable Federated by default
skip-federated
; Replication Master Server (default)
; binary logging is required for replication
;log-bin=mysql-bin
; binary logging format - mixed recommended
;binlog_format=mixed
; required unique id between 1 and 2^32 - 1
; defaults to 1 if master-host is not set
; but will not function as a master if omitted
server-id = 1
; Replication Slave (comment out master section to use this)
; New for MySQL 5.6 if no slave
skip-slave-start
; The InnoDB tablespace encryption feature relies on the keyring_file
; plugin for encryption key management, and the keyring_file plugin
; must be loaded prior to storage engine initialization to facilitate
; InnoDB recovery for encrypted tables. If you do not want to load the
; keyring_file plugin at server startup, specify an empty string.
early-plugin-load=""
;innodb_data_home_dir = C:/mysql/data/
innodb_data_file_path = ibdata1:12M:autoextend
;innodb_log_group_home_dir = C:/mysql/data/
;innodb_log_arch_dir = C:/mysql/data/
; You can set .._buffer_pool_size up to 50 - 80 %
; of RAM but beware of setting memory usage too high
innodb_buffer_pool_size = 4G
; Set .._log_file_size to 25 % of buffer pool size
innodb_log_file_size = 16M
innodb_log_buffer_size = 8M
innodb_thread_concurrency = 64
innodb_flush_log_at_trx_commit = 2
log_bin_trust_function_creators = 1;
innodb_lock_wait_timeout = 120
innodb_flush_method=normal
innodb_use_native_aio = true
innodb_flush_neighbors = 2
innodb_autoinc_lock_mode = 1
[mysqldump]
quick
max_allowed_packet = 16M
[mysql]
no-auto-rehash
; Remove the next comment character if you are not familiar with SQL
;safe-updates
[isamchk]
key_buffer_size = 20M
sort_buffer_size = 20M
read_buffer_size = 2M
write_buffer_size = 2M
[myisamchk]
key_buffer_size = 256M ;20M hys
sort_buffer_size = 20M
read_buffer_size = 2M
write_buffer_size = 2M
[mysqlhotcopy]
interactive-timeout
[mysqld]
port = 3306
skip-log-bin
default_authentication_plugin= mysql_native_password
max_connections = 400
max_connect_errors = 100000
innodb_read_io_threads = 32
innodb_write_io_threads = 8
innodb_thread_concurrency = 64
Hardware:
Ram: 16GB
CPU: 4 Cores 3.0 Ghz
SHOW GLOBAL STATUS:
https://pastebin.com/FVZrgnTw
SHOW ENGINE INNODB STATUS:
https://pastebin.com/Rewp84Gi
SHOW GLOBAL VARIABLES:
https://pastebin.com/3v6cM6KZ
Rate Per Second = RPS
Suggestions to consider for your my.ini [mysqld] section
It is unusual to have more than one [mysqld] section in the my.ini configuration; the section near the end of your my.ini could be moved to just before [mysqldump] to avoid confusion.
innodb_lru_scan_depth=100 # from 1024 to conserve 90% of CPU cycles used for function
key_buffer_size=16M # from 1G to conserve RAM - you are not using MyISAM data tables
read_rnd_buffer_size=64K # from 2M to reduce handler_read_rnd_next RPS of 1,872,921
innodb_io_capacity=900 # from 200 to more of your rotating drive IOPS capacity
You should find query completion time and CPU busy reduced with these changes.
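If you want to try these before editing my.ini, all four variables are dynamic, so a sketch like the following can apply them at runtime (SET PERSIST is MySQL 8.0+; use SET GLOBAL and then edit my.ini on older versions). The values are the suggestions above, not universal recommendations:
-- Apply the suggested values at runtime and persist them across restarts (MySQL 8.0+)
SET PERSIST innodb_lru_scan_depth = 100;
SET PERSIST key_buffer_size      = 16 * 1024 * 1024;  -- 16M
SET PERSIST read_rnd_buffer_size = 64 * 1024;         -- 64K
SET PERSIST innodb_io_capacity   = 900;

-- Confirm the new values
SHOW GLOBAL VARIABLES WHERE Variable_name IN
  ('innodb_lru_scan_depth', 'key_buffer_size', 'read_rnd_buffer_size', 'innodb_io_capacity');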
select_scan averages 41 RPS; these table scans are caused by indexes not being available, and they cause delays.
I have found the root cause and posted it at https://dba.stackexchange.com/questions/271785/query-performance-become-slower-after-upgrade-to-mysql-8-0-20 .
Thanks a lot for all the replies and suggestions. I appreciate it.
[Update: solved the problem at our site]
I am currently having a very similar (maybe the same?) issue.
We have
Windows Server 2016, 4 CPUs, 32 GB RAM
MySQL 8 Community Edition
Java / Apache Tomcat based application on top
For two weeks we experienced severe application problems, with the mysqld process taking 100% CPU as soon as any application interaction happened, rendering the server completely unresponsive.
The last change to the setup before this degradation was updating MySQL from 8.0.18 to 8.0.20 for security fixes.
Query monitoring shows many occurrences of the same (simple) query
SELECT COUNT(1) FROM xxxxx;
which takes 5-10 seconds (although the table only has about 3 rows, so it should take more like 5 milliseconds!).
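Since this is 8.0.20, EXPLAIN ANALYZE (available from 8.0.18) will show where the time actually goes; xxxxx below is the obfuscated table name from the question, not a real table:
-- Executed plan with per-step timing (MySQL 8.0.18+)
EXPLAIN ANALYZE SELECT COUNT(1) FROM xxxxx;

-- A very long "History list length" here (purge lagging behind) is a common
-- reason a COUNT on a tiny table suddenly takes seconds
SHOW ENGINE INNODB STATUS\G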
One hypothesis was this MySQL issue: https://bugs.mysql.com/bug.php?id=99593
However the recommended workaround did not help me.
Solution for us:
Apparently there was an additional bug in MySQL Community Edition, introduced in 8.0.19 or 8.0.20.
After downgrading MySQL to 8.0.18 everything worked fine again!
Additional note:
Downgrading is not supported by MySQL!
In order to provide a downgraded DB on the same machine, I:
made a backup of the application schema (with the mysqldump command)
did a manual installation of the MySQL 8.0.18 binaries (no installer)
created an additional MySQL instance (different data directory, different port)
imported the backup into the new instance (with the mysql command)
created roles and permissions exactly as before
switched the application config to the new MySQL port

Configuring MySQL server for large amount of queries

I have deployed a 64-core, 300 GB+ RAM Amazon server and installed Virtualmin on it. This parent server is to be used as a database server. It stores a Laravel job queue with more than 2,000,000 jobs.
I am having trouble configuring the MySQL server on the parent server. Currently the parent server is connected to an AWS Auto Scaling Group with many child servers. The child servers in the Scaling Group read data from the parent server, process the data, and store the results back on the parent server. Each child server, on average, completes 3 jobs from the parent server.
I want to connect about 1000 child servers. The problem arises when more than 500 servers are connected: the MySQL server on the parent becomes very slow and the child servers receive data very slowly. It doesn't crash or give a connection-limit error.
I have tried various variable settings and increased limits, but so far I have been unable to solve the issue. My current /etc/my.cnf config is below:
symbolic-links=0
innodb_file_per_table = 1
myisam_sort_buffer_size = 64M
read_rnd_buffer_size = 32M
net_buffer_length = 12M
read_buffer_size = 128M
sort_buffer_size = 128M
table_open_cache = 64
max_allowed_packet = 5M
key_buffer_size = 512M
max_connections = 100000
innodb_buffer_pool_size=64G
tmp_table_size= 4095M
max_heap_table_size = 20G
The rest of the settings are defaults.
myisam_sort_buffer_size = 64M
key_buffer_size = 512M
These settings are for MyISAM only. You shouldn't be using MyISAM. It's not the '90s any more.
table_open_cache = 64
This is absurdly low. It is almost certainly why everything grinds to a halt when you have many machines connected.
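One way to confirm this is to watch how fast Opened_tables climbs while the child servers are connected; if it grows by thousands per minute, the cache is thrashing (a quick diagnostic sketch):
-- Tables opened since startup versus the size of the cache
SHOW GLOBAL STATUS LIKE 'Opened_tables';
SHOW GLOBAL VARIABLES LIKE 'table_open_cache';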
tmp_table_size= 4095M
max_heap_table_size = 20G
Are you really going to have temporary tables of anywhere near that size?
innodb_buffer_pool_size=64G
How big is your data? If you are anywhere near justifying 300GB of RAM, I presume you must have about a terabyte of data. Unless you have much less data than the server size would imply, this should probably be more like 250GB (and tmp and heap table sizes should be much smaller).
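To answer that concretely, measure how much InnoDB data and index you actually have before picking a buffer pool size (a sizing sketch, not a rule):
-- Total InnoDB data + index size, in GB
SELECT ROUND(SUM(DATA_LENGTH + INDEX_LENGTH) / 1024 / 1024 / 1024, 1) AS innodb_gb
FROM information_schema.TABLES
WHERE ENGINE = 'InnoDB';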
sort_buffer_size = 128M
This is absurdly high. sort_buffer_size is one of many settings that you almost certainly shouldn't be touching.
If you aren't 100% sure you know what each setting does, you should leave it at defaults. What you have here is guaranteed to completely cripple your immensely powerful server.

How can I reduce XAMPP mysqld memory usage

Well, I've set out to learn PHP and MySQL, bought a book, and installed XAMPP inside a virtual machine. However, I was thinking MySQL wouldn't use that much memory, but it uses 500 MB.
And I have not even created anything in it; I'm not sure if that's normal.
I chose the XAMPP Lite setup, since my only interest here is PHP and MySQL.
My goal is just to create a few simple databases with a web interface.
And I'm at the first steps of learning PHP.
I'm not new to programming; I know a long list of computer languages.
However, I am new to MySQL, PHP, and Apache.
Can someone tell me what to do to reduce MySQL's memory usage?
Currently I run into problems, as the host running the virtual machine is not that powerful.
I think this is a bug in the (Windows) installer of MySQL, which may also be used by XAMPP:
http://bugs.mysql.com/bug.php?id=68287
Try looking for table_definition_cache in my.cnf and lowering it to ~200.
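table_definition_cache is dynamic, so you can try the lower value without a restart and only make it permanent in the config file once it helps. A small sketch; note that newer MySQL versions clamp this variable to a documented minimum (400 on 5.6+), so the effective value may end up higher than 200:
-- Check the current value, then lower it at runtime
SHOW GLOBAL VARIABLES LIKE 'table_definition_cache';
SET GLOBAL table_definition_cache = 200;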
If you're not storing anything in it, as you said, then the 500 MB isn't just the MySQL process; it's the whole XAMPP stack.
Long story short, this is the MySQL configuration file path:
/etc/mysql/my.cnf
In this file you can find:
key_buffer = 8M
max_connections = 30
query_cache_size = 8M
query_cache_limit = 512K
thread_stack = 128K
These settings determine the maximum memory MySQL will use.
Hope this helps.
Regarding MySQL in XAMPP using 400 MB and high CPU:
Add these two lines at the end of your my.ini:
[mysqld]
table_definition_cache = 400
I'm sure this will help you :)

An operation on a socket could not be performed because the system lacked sufficient buffer space or because a queue was full - wamp server

I am creating a browser-based game using PHP, JavaScript, and HTML5, and I am testing it on a local WAMP server. However, sometimes I get the following error:
Warning: mysqli::mysqli(): (HY000/2002): An operation on a socket could not be performed because the system lacked sufficient buffer space or because a queue was full.
It flashes on the screen (like a SCREAM error) and then disappears. It doesn't affect the functionality of the application, but sometimes it stays on the screen for up to 5-10 seconds. I should mention that I use a lot of AJAX requests to apply modifications to the database, the most notable one being a timer that reads the SVG map every 2 seconds. I found the following lines in my.ini (the MySQL configuration file). Should I try to modify some of these values?
# The MySQL server
[wampmysqld]
port = 3306
socket = /tmp/mysql.sock
key_buffer = 16M
max_allowed_packet = 1M
table_cache = 64
sort_buffer_size = 512K
net_buffer_length = 8K
read_buffer_size = 256K
read_rnd_buffer_size = 512K
myisam_sort_buffer_size = 8M
basedir=c:/wamp/bin/mysql/mysql5.5.24
log-error=c:/wamp/logs/mysql.log
datadir=c:/wamp/bin/mysql/mysql5.5.24/data
Or perhaps I should take a look at the php.ini file? Also, I am running the WAMP server on Windows 7.
Please check the steps here:
http://blogs.msdn.com/b/sql_protocols/archive/2009/03/09/understanding-the-error-an-operation-on-a-socket-could-not-be-performed-because-the-system-lacked-sufficient-buffer-space-or-because-a-queue-was-full.aspx
My problem was the same, and the first solution listed there helped!

Periodic MySQL lockup when Wordpress is under heavy load

I have a MySQL 5.1.61 database running behind two load-balanced Apache webservers hosting a fairly busy (100K uniques per day) WordPress site. I'm caching with Cloudflare, W3TC, and Varnish. Most of the time, the database server handles the traffic very well; "show full processlist" shows 20-40 queries at any given time, with most in the Sleep state.
Periodically, though (particularly when traffic spikes or when a large number of comments are cleared), MySQL stops responding. I'll find 1000-1500 queries running, many in "Sending data", etc. No particular query seems to be straining the database (they're all standard WordPress queries); it just seems like the sheer volume of simultaneous requests causes every query to hang. I'm (usually) still able to log in and run "show full processlist" or other queries, but the 1000+ queries already there just sit. The only solution seems to be restarting MySQL (sometimes violently via kill -9 if I can't connect).
All tables are InnoDB, the server has 8 cores, 24 GB RAM, and plenty of disk space, and the following is my my.cnf:
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
port=3306
skip-external-locking
skip-name-resolve
user=mysql
query_cache_type=1
query_cache_limit=16M
wait_timeout = 300
query_cache_size=128M
key_buffer_size=400M
thread_cache_size=50
table_cache=8192
skip-name-resolve
max_heap_table_size = 256M
tmp_table_size = 256M
innodb_file_per_table
innodb_buffer_pool_size = 5G
innodb_log_file_size=1G
#innodb_commit_concurrency = 32
#innodb_thread_concurrency = 32
innodb_flush_log_at_trx_commit = 0
thread_concurrency = 8
join_buffer_size = 256k
innodb_log_file_size = 256M
#innodb_concurrency_tickets = 220
thread_stack = 256K
max_allowed_packet=512M
max_connections=2500
# Default to using old password format for compatibility with mysql 3.x
# clients (those using the mysqlclient10 compatibility package).
old_passwords=1
#2012-11-03
#attempting a ram disk for tmp tables
tmpdir = /db/tmpfs01
[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
Any suggestions on how I can improve the MySQL config, or other steps to maintain database stability under heavy load?
As has been said, think outside the box and do some rooting around into why these queries are slow or somehow hung. An oldie but still a good source of problems, even for (supposedly ;)) intelligent system engineers, is load balancing causing issues across webserver or database sessions. With all that caching and load balancing going on, are you sure everything is always connecting end-to-end as intended?
I agree with alditis and Bjoern.
I'm pretty noobish with MySQL, but running mysqltuner can reveal some config optimisations based on recent queries against the DB: https://github.com/rackerhacker/MySQLTuner-perl
And if possible, store the DB files on a physically separate partition from the OS; the OS can consume IO, which slows the DB, as with Bjoern's logrotate issue.
First, have a look at basic system behavior at the moment of the problem. Use both vmstat and iostat to see if you can find any issues. Check whether the system starts swapping (the si/so columns in vmstat) and whether lots of IO is happening. This is the first step in debugging your problem.
Another source of useful information is SHOW INNODB STATUS. See http://www.mysqlperformanceblog.com/2006/07/17/show-innodb-status-walk-through/ for how to interpret the output.
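Alongside vmstat and iostat, the statements below are a reasonable first pass on the MySQL side while the pile-up is happening (SHOW ENGINE INNODB STATUS is the non-deprecated spelling on 5.1); this is a diagnostic sketch, not a fix:
-- Full InnoDB monitor output: check the SEMAPHORES, TRANSACTIONS and ROW OPERATIONS sections
SHOW ENGINE INNODB STATUS\G

-- Row-lock contention counters
SHOW GLOBAL STATUS LIKE 'Innodb_row_lock%';

-- What the stuck threads are doing, longest-running first
SELECT ID, USER, HOST, DB, COMMAND, TIME, STATE, LEFT(INFO, 80) AS query_start
FROM information_schema.PROCESSLIST
WHERE COMMAND <> 'Sleep'
ORDER BY TIME DESC
LIMIT 20;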
It might be that at a certain point in time your writes are killing read performance because they invalidate the query cache.
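You can check whether query cache churn is part of the problem by comparing the Qcache counters before and during a spike; a fast-growing Qcache_lowmem_prunes relative to Qcache_hits suggests the 128M cache is being invalidated faster than it helps (a diagnostic sketch, thresholds depend on your workload):
-- Query cache health: compare hits vs. inserts and watch lowmem prunes grow
SHOW GLOBAL STATUS LIKE 'Qcache%';
SHOW GLOBAL VARIABLES LIKE 'query_cache%';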