Configuring MySQL server for a large number of queries - mysql

I have deployed a 64-core, 300 GB+ RAM Amazon server and installed Virtualmin on it. This parent server is to be used as a database server. It stores a Laravel job queue with more than 2,000,000 jobs.
I am having trouble configuring the MySQL server on the parent server. Currently the parent server is connected to an AWS Auto Scaling group with many child servers. The child servers in the scaling group read data from the parent server, process it, and store the results back on the parent server. Each server, on average, completes 3 jobs from the parent server.
I want to connect about 1,000 child servers. The problem arises when there are more than 500 servers connected: at that point the MySQL server on the parent becomes very slow and the child servers receive data very slowly. It doesn't crash or give a connection-limit error.
I have tried various variable settings and increased limits, but so far I am unable to solve the issue. My current /etc/my.cnf config is as below:
symbolic-links=0
innodb_file_per_table = 1
myisam_sort_buffer_size = 64M
read_rnd_buffer_size = 32M
net_buffer_length = 12M
read_buffer_size = 128M
sort_buffer_size = 128M
table_open_cache = 64
max_allowed_packet = 5M
key_buffer_size = 512M
max_connections = 100000
innodb_buffer_pool_size=64G
tmp_table_size= 4095M
max_heap_table_size = 20G
The rest of the settings are at their defaults.

myisam_sort_buffer_size = 64M
key_buffer_size = 512M
These settings are for MyISAM only. You shouldn't be using MyISAM. It's not the '90s any more.
table_open_cache = 64
This is absurdly low. It is almost certainly why everything grinds to a halt when you have many machines connected.
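A stock diagnostic (not from the original answer) confirms table-cache thrashing: if Opened_tables climbs rapidly while the server is busy, the cache is far too small for the working set of tables.
SHOW GLOBAL STATUS LIKE 'Opened_tables';
SHOW GLOBAL VARIABLES LIKE 'table_open_cache';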
tmp_table_size= 4095M
max_heap_table_size = 20G
Are you really going to have temporary tables of anywhere near that size?
innodb_buffer_pool_size=64G
How big is your data? If you are anywhere near justifying 300GB of RAM, I presume you must have about a terabyte of data. Unless you have much less data than the server size would imply, this should probably be more like 250GB (and tmp and heap table sizes should be much smaller).
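To answer the "how big is your data" question concretely, a standard information_schema query (not part of the original exchange) sums data and index sizes per storage engine:
SELECT engine,
       ROUND(SUM(data_length + index_length) / 1024 / 1024 / 1024, 1) AS size_gb
FROM information_schema.tables
GROUP BY engine;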
sort_buffer_size = 128M
This is absurdly high. sort_buffer_size is one of many settings that you almost certainly shouldn't be touching.
If you aren't 100% sure you know what each setting does, you should leave it at defaults. What you have here is guaranteed to completely cripple your immensely powerful server.
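Purely as a sketch of what "mostly defaults" looks like under this advice (the numbers are illustrative assumptions, not a measured tuning):
[mysqld]
innodb_file_per_table   = 1
innodb_buffer_pool_size = 250G   # per the answer, assuming data on the order of 1TB
table_open_cache        = 4000   # hypothetical value; the point is it must not be 64
# MyISAM-specific and per-session buffer settings left at their defaults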

Related

Improve MySQL performance with O_DIRECT

I want to increase the performance of MySQL, so I have made configuration-level changes. I used innodb_flush_method = O_DIRECT, but the insert rate is not increasing much; normally the insertion rate is 650 inserts/sec. How do I know whether O_DIRECT is working properly?
I am using Ubuntu 14.04.1 server and MySQL v5.6. CPU, memory and disk I/O rates are normal (I use RAID, 16 GB RAM, 8 CPU cores). I use WSO2 CEP for insertion; I have implemented that part and measured using MySQL Workbench. But I couldn't get much more performance even though I increased the insertion rate through WSO2 CEP.
I have used the following my.cnf:
[mysqld]
innodb_buffer_pool_size = 9G
query_cache_size = 128M
innodb_log_file_size = 1768M
innodb_flush_log_at_trx_commit = 0
innodb_io_capacity = 1000
innodb_flush_method = O_DIRECT
max_heap_table_size = 536870912
innodb_lock_wait_timeout = 1
max_connections = 400
sort_buffer_size = 128M
sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES
skip-host-cache
skip-name-resolve
event_scheduler=on
In this case, if you are using event tables: older CEP/Siddhi versions do not perform batch insertions, and that could be the cause of the above. In the latest SNAPSHOT source (of Siddhi) we have fixed this, and you should see considerably better numbers in the next release.
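As an aside on the original "is O_DIRECT working" question: the standard check (not specific to CEP) is to ask the running server, and since the answer points at batching, a hypothetical multi-row insert shows what the fix amounts to (the events table is made up for illustration):
-- Confirm the flush method actually in effect:
SHOW GLOBAL VARIABLES LIKE 'innodb_flush_method';
-- Batched insert: many rows per statement (and per commit) usually moves
-- the insert rate far more than flush-method tuning does.
INSERT INTO events (ts, payload) VALUES (NOW(), 'a'), (NOW(), 'b'), (NOW(), 'c');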

MariaDB Galera cluster servers running at 100% CPU and load rising

I have a Drupal application which has been running on a single MySQL database server for 12 months, and has been performing relatively well (apart from peak load events). We needed to be able to support much higher spikes than the current DB server allowed, and at 32GB there was not much gain to be had from simply vertically scaling the single DB server.
We decided to set up a new MariaDB Galera cluster with 2x 32 GB instances. We matched the configuration as closely as possible with the soon-to-be-obsolete DB server.
After migrating to the new database servers, we noticed that the CPU usage on those instances was constantly at 100%, and load was steadily increasing. Over the course of 1 hour, load average went from 0.1 to 150.
Initially we thought it might have something to do with the synchronisation between servers, but even with one server turned off and no sync occurring, it was still maxing out the CPU as long as the web application was making requests to it.
After a lot of experimentation I found that reducing a few of the configuration options had a profound effect on the CPU usage and load. After making the below changes, the load average has stabilised between 4 and 6 on both instances.
The questions
What are some possible reasons for such a dramatic difference in CPU usage between the old and new servers, despite essentially migrating the configuration from the old server?
Load is currently hovering between 4 and 6 (and this is a low-traffic period for our website). What should I be looking at to try and reduce this value, and to ensure that the site won't fall over when it gets hit with some real traffic?
Config changes
innodb_buffer_pool_instances
Original value: 500 (there are 498 tables total in all databases)
New value: 92
table_cache
Original value: 8
New value: 4
max_connections
Original value: 1000
New value: 400
Current configuration
Here is the full configuration file from one of the servers /etc/mysql/my.cnf
[client]
port = 3306
socket = /var/run/mysqld/mysqld.sock
[mysqld_safe]
socket = /var/run/mysqld/mysqld.sock
nice = 0
[mysqld]
binlog_format=ROW
default-storage-engine=innodb
innodb_autoinc_lock_mode=2
query_cache_type=1
bind-address=0.0.0.0
max_connections = 400
wait_timeout = 600
key_buffer_size = 16M
max_allowed_packet = 16777216
max_heap_table_size = 512M
table_cache = 92
thread_stack = 196608
thread_cache_size = 8
myisam-recover = BACKUP
query_cache_limit = 1048576
query_cache_size = 128M
expire_logs_days = 10
general_log = 0
max_binlog_size = 10485760
server-id = 0
innodb_file_per_table
innodb_buffer_pool_size = 25G
innodb_buffer_pool_instances = 4
innodb_log_buffer_size = 8388608
innodb_additional_mem_pool_size = 8388608
innodb_thread_concurrency = 16
net_buffer_length = 16384
sort_buffer_size = 2097152
myisam_sort_buffer_size = 8388608
read_buffer_size = 131072
join_buffer_size = 131072
read_rnd_buffer_size = 262144
tmp_table_size = 512M
long_query_time = 1
slow_query_log = 1
slow_query_log_file = /var/log/mysql/mysql-slow.log
# Galera Provider Configuration
wsrep_provider=/usr/lib/galera/libgalera_smm.so
#wsrep_provider_options="gcache.size=32G"
# Galera Cluster Configuration
wsrep_cluster_name="xxx"
wsrep_cluster_address="gcomm://xxx.xxx.xxx.107,xxx.xxx.xxx.108"
# Galera Synchronization Configuration
wsrep_sst_method=rsync
#wsrep_sst_auth=user:pass
# Galera Node Configuration
wsrep_node_address="xxx.xxx.xxx.107"
wsrep_node_name="xxx01"
[mysqldump]
quick
quote-names
max_allowed_packet = 16777216
[isamchk]
key_buffer_size = 16777216
We ended up getting a Percona consultant to assist with this problem. The main issue they identified was that a large number of EXPLAIN queries were being executed. It turned out this was some debugging code that had been left enabled (devel.module query logging, for Drupal devs). Disabling it saw CPU usage fall off a cliff.
There were a number of additional fixes which they recommended we implement (a sketch of the corresponding my.cnf changes follows the list):
Add a third node to the cluster to act as an observer and maintain the integrity of the cluster.
Add primary keys to tables that do not have one.
Change MyISAM tables to InnoDB.
Change wsrep_sst_method from rsync to xtrabackup-v2.
Set innodb_log_file_size to 512M.
Set innodb_flush_log_at_trx_commit to 2 as the cluster maintains the integrity of the data.
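A hypothetical my.cnf fragment covering the config-level items from that list (the wsrep_sst_auth credentials are placeholders, and xtrabackup-v2 requires Percona XtraBackup installed on every node):
[mysqld]
default-storage-engine         = InnoDB
wsrep_sst_method               = xtrabackup-v2
wsrep_sst_auth                 = sstuser:sstpassword   # placeholder credentials
innodb_log_file_size           = 512M
innodb_flush_log_at_trx_commit = 2   # the cluster maintains data integrity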
I hope this information helps anyone who runs into similar issues.
innodb_buffer_pool_instances should not be a function of the number of tables. The manual advocates that each instance be no smaller than 1GB. So, I suggest that even 92 is much too high. But my.cnf says only innodb_buffer_pool_instances = 4??
table_cache = 92
Maybe your comments are messed up? 500 would be more reasonable for table_open_cache. (table_cache is the old name.)
This may be the problem:
query_cache_size = 128M
Whenever a write occurs, all entries in the QC for the table(s) involved are purged from the QC. Recommend no more than 50M. Or, better yet, turn the QC off completely.
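For reference, turning the QC off completely is two standard settings:
query_cache_type = 0
query_cache_size = 0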
You have the slowlog turned on. What does pt-query-digest say are the top couple of queries? (This may be your best way to get a handle on the problem.)
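Since the config above already logs to /var/log/mysql/mysql-slow.log, a typical invocation would be:
pt-query-digest /var/log/mysql/mysql-slow.log
which prints the logged queries ranked by total time, worst first.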

An operation on a socket could not be performed because the system lacked sufficient buffer space or because a queue was full - wamp server

I am creating a browser-based game using PHP, JavaScript and HTML5, and I am testing it on a local WAMP server. However, sometimes I get the following error:
Warning: mysqli::mysqli(): (HY000/2002): An operation on a socket could not be performed because the system lacked sufficient buffer space or because a queue was full.
It flashes on the screen (like a SCREAM error) and then disappears. It doesn't affect the functionality of the application, but sometimes it stays on the screen for 5-10 s. I should mention that I am using a lot of AJAX requests to make modifications in the database, the most notable being a timer that re-reads the SVG map every 2 s. I found the following lines in my.ini (the MySQL configuration file). Should I try to modify some of these values?
# The MySQL server
[wampmysqld]
port = 3306
socket = /tmp/mysql.sock
key_buffer = 16M
max_allowed_packet = 1M
table_cache = 64
sort_buffer_size = 512K
net_buffer_length = 8K
read_buffer_size = 256K
read_rnd_buffer_size = 512K
myisam_sort_buffer_size = 8M
basedir=c:/wamp/bin/mysql/mysql5.5.24
log-error=c:/wamp/logs/mysql.log
datadir=c:/wamp/bin/mysql/mysql5.5.24/data
Or perhaps I should take a look at the php.ini file? Also, I am running the WAMP server on Windows 7.
Please check the steps here:
http://blogs.msdn.com/b/sql_protocols/archive/2009/03/09/understanding-the-error-an-operation-on-a-socket-could-not-be-performed-because-the-system-lacked-sufficient-buffer-space-or-because-a-queue-was-full.aspx
My problem was the same; the first solution listed there helped!
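For anyone who can't follow the link: this error on Windows is typically ephemeral-port exhaustion, and the commonly suggested registry tweak looks like the following (a sketch; the exact values are a judgment call, and a reboot is required afterwards):
reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v MaxUserPort /t REG_DWORD /d 65534
reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v TcpTimedWaitDelay /t REG_DWORD /d 30
Reducing the AJAX polling frequency, or reusing a persistent connection, attacks the same problem from the application side.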

my.cnf MySQL performance for a 5 GB MyISAM table

I have a dedicated server - an Intel Xeon L5320 with 8 GB of RAM and 2 x 500 GB 7200 RPM HDDs.
I need to optimize MySQL to cope with a large 5 GB MyISAM table plus around 25-30 smaller databases. Currently it looks like this:
key_buffer = 3G
thread_cache_size = 16
table_cache = 8192
query_cache_size = 512M
As it is, the server really struggles and I get continuous /tmp disk full warnings. Could you please help me out / suggest the best my.cnf configuration for my server, and/or any other settings changes that would improve performance?
Thanks in advance
I recommend you use mytop and mysqltuner to analyze how MySQL is using resources (RAM and CPU).
Also, enable the option to log slow queries:
log_slow_queries = /var/log/mysql/mysql-slow.log
long_query_time = 3
And check out this post about ntpd service:
MySQL high CPU usage
Finally, I'll leave you with the settings I use on a dedicated server with a high rate of transactions:
max_allowed_packet = 32M
key_buffer_size = 16M
innodb_additional_mem_pool_size = 10M
innodb_buffer_pool_size = 512M
join_buffer_size = 40M
table_open_cache = 1024
query_cache_size = 40M
table_definition_cache = 256
max_connections = 300
query_cache_limit = 10M
log_slow_queries = /var/log/mysql/mysql-slow.log
long_query_time = 3
Regards.
If /tmp is filling up, you are running some large, inefficient queries somewhere which are falling back to FILESORT. Well-written, efficient queries should typically not need this -- turn on slow query logging (if it isn't already) and check the log to see what needs optimizing.
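Two stock checks confirm this (your_table and some_column below are placeholders): watch whether Created_tmp_disk_tables grows quickly, and EXPLAIN the suspects.
SHOW GLOBAL STATUS LIKE 'Created_tmp%';
EXPLAIN SELECT * FROM your_table ORDER BY some_column;
-- "Using temporary; Using filesort" in the Extra column marks the queries worth optimizing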

mysql always using maximum connection

I have a LAMP server with a 4-core CPU and 32 GB RAM. We are running a large website on it, and I now have the following issues with the server.
When I use the mysqlreport tool to monitor the MySQL server, I always see connection usage as below, and users are reporting connection issues on the website.
__ Connections _________________________________________________________
Max used          251 of  250      %Max: 100.40
Total         748.71k     3.5/s
But when I run the SHOW PROCESSLIST command, it outputs nothing. We are using the MyISAM engine for all our DBs.
My MySQL config file is pasted below:
######################
[mysqld]
max_connections = 250
set-variable=local-infile=0
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
user=mysql
skip-name-resolve
skip-bdb
wait_timeout = 60
thread_cache_size = 100
table_cache = 1024
key_buffer = 384M
log_slow_queries=/mysql-log/mysql-slow.log
query-cache-size=512M
query-cache-type=1
query_cache_limit=20M
[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
#
Who is using my MySQL connection pool? How can I find out?
And I have another issue.
Sometimes the Load average goes beyond 4-8 range. See below:
13:40:02 up 2 days, 10:39, 0 users, load average: 5.03, 1.68, 0.93
At that time I can see that mysql is the top consumer of the CPU. Is there any optimization needed in the MySQL server?
Please reply to my above two queries.
Thanks in advance,
Aruns
I noticed that you are already using MySQL Query Cache.
Have you tried using MySQL Workbench to connect to your MySQL database? It offers a graphical way of inspecting your MySQL database, including the process list.
If you are behind a firewall, try using
show full processlist
However, I think this will not really help.
I would assume that you are using PHP with MySQL to serve your web pages, which means most of the MySQL connections are made from PHP. To see how many Apache processes are running at one time, you can try:
ps aux |grep httpd |wc -l
If you have many more Apache threads, each connecting to MySQL, then you know you have a problem.
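You can also ask MySQL directly how many connections are live and how close you have come to the limit (standard status counters):
SHOW GLOBAL STATUS LIKE 'Threads_connected';
SHOW GLOBAL STATUS LIKE 'Max_used_connections';
SHOW FULL PROCESSLIST;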
You mentioned that you have a busy site, so the real answer to your problem is to cache your content, probably using memcached. The idea is to reduce the hits on your MySQL server; your server has plenty of RAM and is perfect for memcached.
The approach is to reuse generated content for a certain amount of time, depending on how fresh it needs to be:
<?php
// Connect to a local memcached instance (host and port assumed).
$memcache = new Memcache;
$memcache->connect('127.0.0.1', 11211);

$cachedContent = $memcache->get("cacheKey");
if (!$cachedContent) {
    // Retrieve from MySQL and build the HTML here.
    // Output buffering (ob_start) lets you reuse your existing code unchanged.
    ob_start();
    // ... your previous code here, echoing as usual ...
?>
<div>
    previously generated content from MySQL
</div>
<?php
    // The buffer now holds the generated HTML.
    $cachedContent = ob_get_contents();
    // Store it in memcache for 30 minutes (1800 seconds).
    $memcache->set("cacheKey", $cachedContent, false, 1800);
    // Discard the buffer; the cached copy is echoed below instead.
    ob_end_clean();
}
echo $cachedContent;
?>
You need to decide which content to cache first. Good places to start:
Inefficient bits on the index.php page (I assume this will be one of the most-hit pages).
Check your GA for the most-hit pages.
Check your MySQL slow queries and cache that content.
Add the variables below to your my.cnf. If you are going to use only the MyISAM engine, these values should give the best results for your hardware configuration:
max_allowed_packet = 1M
table_open_cache = 250
sort_buffer_size = 2M
thread_stack = 128K
join_buffer_size = 1M
query_cache_limit = 400k
query_cache_size = 300M
key_buffer_size = 5G
read_buffer_size = 2M
read_rnd_buffer_size = 2M
bulk_insert_buffer_size = 8M
myisam_sort_buffer_size = 8M
myisam_max_sort_file_size = 6G
myisam_recover=FORCE,BACKUP
But depending on your DB size and how your application accesses (fetches) the data, the above variables can be adjusted.
Also reduce wait_timeout to 30.
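One way to sanity-check whether a key_buffer_size as large as 5G is justified (a standard diagnostic, not from the original answer): if the ratio of Key_reads to Key_read_requests stays close to zero, the key buffer is big enough.
SHOW GLOBAL STATUS LIKE 'Key_read%';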
Regarding this, I really don't have any clue :( ..
__ Connections _________________________________________________________
Max used          251 of  250      %Max: 100.40
Total         748.71k     3.5/s
SHOW PROCESSLIST\g
should provide the list of processes connected to the DB in any state (Sleep, Reading, etc.).
Add the above variables to your my.cnf and restart the server.