I have a server with an Intel Xeon quad-core E3-1230v2 and 8 GB of DDR3 RAM. Round the clock I see that this server is running out of CPU; it looks badly overloaded. After observing the "Daily Process Log" I realized that the process below is eating 25% of the CPU, and there were three such processes (which I took to be errors). Here is the process (error):
/usr/sbin/mysqld --basedir=/ --datadir=/var/lib/mysql --user=mysql --log-error=/var/lib/mysql/server.yacart.com.err --open-files-limit=16384 --pid-file=/var/lib/mysql/server.yacart.com.pid
As visible above, it appears something is wrong with --open-files-limit=16384. I tried increasing open_files_limit in my.cnf to 16384, but in vain. This is what my my.cnf looks like now:
[mysqld]
innodb_file_per_table=1
local-infile=0
open_files_limit=9978
Can anyone advise me on a good configuration for my my.cnf, one that would help me get rid of the CPU overload?
There is a GoogleBot-like robot script I am running on slave servers to mine data from the internet; it is crawling the entire web. When I shut down this script, everything returns to normal. I wonder if there is a fix I could apply to this script?
This robot program has about 40 databases, each 50 - 800 MB in size, for a total DB size of about 14 GB so far, and I expect this to grow to 500 GB in the future. At any one time (for a whole day) only ONE DB is used; the next day I use the next DB, and so on. I was thinking of increasing RAM once the biggest DB reaches 2 GB. Currently RAM does not seem to be an issue at all.
Thanks in advance for any help you guys can offer.
regards,
Sam
If you have WHM, look for this under Server Configuration >> Tweak Settings >> SQL
** Let cPanel determine the best value for your MySQL open_files_limit configuration?
cPanel will adjust the open_files_limit value during each MySQL restart depending on your total number of tables.
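Whichever value ends up applied, you can confirm it from the MySQL side with a quick check (not specific to cPanel):
SHOW VARIABLES LIKE 'open_files_limit';
SHOW GLOBAL STATUS LIKE 'Opened_files';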
I have a MySQL 5.6.36 database whose size is ~35G, running on CentOS 7.3 with 48G of RAM.
[UPDATE 17-08-06] I will update relevant information here.
I am seeing that my server runs out of memory and crashes even with ~48G of RAM. I could not keep it running on 24G, for example. A DB this size should be able to run on much less. Clearly, I am missing something fundamental.
[UPDATE: 17-08-05] By crashes, I mean mysqld stops and restarts with no useful information in the log, other than restarting from a crash. Also, with all this memory, I got this error during recovery:
[ERROR] InnoDB: space header page consists of zero bytes in tablespace ./ca_uim/t_qos_snapshot.ibd (table ca_uim/t_qos_snapshot)
The relevant portion of my config file looks like this [EDITED 17-08-05 to add missing lines]:
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
lower_case_table_names = 1
symbolic-links=0
sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES
max_allowed_packet = 32M
max_connections = 300
table_definition_cache=2000
innodb_buffer_pool_size = 18G
innodb_buffer_pool_instances = 9
innodb_log_file_size = 1G
innodb_file_per_table=1
[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
It was an oversight to use file per table, and I need to change that (I have 6000 tables, and most of those are partitioned).
After running for a short while (one hour), mytop shows this:
MySQL on 10.238.40.209 (5.6.36) load 0.95 1.08 1.01 1/1003 8525 up 0+01:31:01 [17:44:39]
Queries: 1.5M qps: 283 Slow: 22.0 Se/In/Up/De(%): 50/07/09/01
Sorts: 27 qps now: 706 Slow qps: 0.0 Threads: 118 ( 3/ 2) 43/28/01/00
Key Efficiency: 100.0% Bps in/out: 76.7k/176.8k Now in/out: 144.3k/292.1k
And free shows this:
# free -h
total used free shared buff/cache available
Mem: 47G 40G 1.5G 8.1M 5.1G 6.1G
Swap: 3.9G 508K 3.9G
Top shows this:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
2010 mysql 20 0 45.624g 0.039t 9008 S 95.0 84.4 62:31.93 mysqld
How can this be? Is this related to file-per-table? The entire DB could fit in memory. What am I doing wrong?
In your my.cnf (MySQL configuration) file:
Add a setting in [mysqld] block
[mysqld]
performance_schema = 0
For MySQL 5.7.8 onwards, you will have to add extra settings as below:
[mysqld]
performance_schema = 0
show_compatibility_56 = 1
NOTE: This can cut your memory usage by 50%-60%. "show_compatibility_56" is optional; it works in some cases, so it is best to check once it has been added to the config file.
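As a sanity check (not part of the original suggestion), you can verify the setting took effect after the restart:
SHOW VARIABLES LIKE 'performance_schema';
-- expect the value to be OFF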
Well, I resolved the issue. I appreciate all the insight from those who responded. The solution is very strange, and I cannot explain why this solves the problem, but it does. What I did was add the following line to my.cnf:
log_bin
You may, in addition, need to add the following:
expire_logs_days = <some number>
We have seen at least one instance where the logs accumulated and filled up a disk. The default is 0 (no auto removal). https://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html#sysvar_expire_logs_days
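Put together, a minimal sketch of the relevant my.cnf lines might look like this (7 days is only an example retention value):
[mysqld]
log_bin
expire_logs_days = 7   # example only; the default of 0 never removes old binary logs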
Results are stored and fed from memory, and given that you're running 283 queries per second, there's probably a lot of data being dished out at any given moment.
I would think that you are doing a good job squeezing a lot out of that server. Consider that the tables are one thing, then there is the schema involved for 6000 tables, plus the fact that you're pulling 283 queries per second against a 35 GB database and that those results are held in memory while they are being served. The rest of us might as well learn from you.
Regarding the stopping and restarting of MySQL
[ERROR] InnoDB: space header page consists of zero bytes in tablespace ./ca_uim/t_qos_snapshot.ibd (table ca_uim/t_qos_snapshot)
You might consider trying
innodb_flush_method=normal, which is recommended here and here, but I can't promise it will work.
I would check table_open_cache. You have a lot of tables and it is clearly reflected in avg opened files per second: about 48 when a normal value is between 1 and 5.
That is confirmed by the values of Table_open_cache_misses and Table_open_cache_overflows;
ideally those values should be zero. They represent failed attempts to use the cache and, in consequence, wasted memory.
You should try increasing it to at least 3000 and see the results.
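As a rough sketch (values are illustrative, not tuned for your workload), that would mean something like:
[mysqld]
table_open_cache = 3000    # starting point suggested above
open_files_limit = 20000   # leave headroom; each open table can use several file descriptors
Afterwards, watch Table_open_cache_misses and Table_open_cache_overflows again to see whether the misses stop growing.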
Since you are on CentOS:
I would double-check that ulimit is unlimited, or at least around 20000 for your 6000 tables.
Consider setting swappiness to 1. I think it is better to have some swapping (while observing) than crashes.
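A minimal sketch of that sysctl change (the file name is just an example):
# /etc/sysctl.d/99-mysql.conf  (hypothetical file name)
vm.swappiness = 1
# apply without a reboot:
# sysctl --system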
Hoping you are a believer in making ONLY one change at a time, so you can attribute progress to a specific configuration change. On 2017-08-07 at about 17:00, SHOW GLOBAL VARIABLES indicated innodb_buffer_pool_size is 128M. Change it in my.cnf to 24G, then shutdown/restart when permitted, please.
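In other words, the change being asked for is a single my.cnf line (24G fits a 48G box while leaving room for per-connection buffers), followed by a clean restart; on 5.6 this variable cannot be changed at runtime:
[mysqld]
innodb_buffer_pool_size = 24G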
A) max_allowed_packet_size at 1G is likely what you meant in your configuration, considering that on 8/7/2017 your remote agents are sending 1G packets for processing on this equipment. How are remote agents managed in terms of scheduling their sending of data, to prevent exhausting all 48G on this host for this single use of memory? Status indicates bytes_received on 8/6/2017 was 885,485,832 from max_used_connections of 86 in the first 1520 seconds of uptime.
B) innodb_io_capacity at 200 is likely a significant throttle on your possible IOPS; we run here at 700. The sqlio.exe utility was used to guide us in this direction.
C) innodb_io_capacity_max should likely be adjusted as well.
D) thread_cache_size of 11, consider going to 128.
E) thread_concurrency of 10, consider going to 30.
F) I understand the length of process-list.txt, in terms of the number of Sleep IDs, is likely caused by the use of persistent connections. The connection is just waiting for some additional activity from the client for an extended period of time. 8/8/2017
G) STATUS Com_begin count is usually very close to Com_commit count, not in your case. 8/8/2017 Com_begin was 2 and Com_commit was 709,910 for 11 hours of uptime.
H) It would probably be helpful to see just 3 minutes of a General Log, if possible.
Keep me posted on your progress.
Please enable the MySQL error log in your usual configuration.
When MySQL crashes, protect the error log before restarting, and please add the last available error log to your Question. It should have a clue as to WHY MySQL is failing.
Running the 'small' configuration will run like a dog when supporting the volume of activity reported by SHOW GLOBAL STATUS.
Please get back to your usual production configuration.
I am looking at your provided details and will have some tuning suggestions in the next 24 hours. It appears most of the process-list activities are related to replication. Would that be true?
Use of www.mysqlcalculator.com would be a quick way to get a brain check on about a dozen memory consumption factors in less than 2 minutes.
118 active threads may be reasonable but would seem to be causing extreme context switching trying to answer 118 questions simultaneously.
Would love to see your SHOW GLOBAL STATUS and SHOW GLOBAL VARIABLES, if you could get them posted.
I have a setup with Linux, Debian Jessie, and MySQL 5.7.13 installed.
I have set the following settings in my.cnf: default_storage_engine = innodb, innodb_buffer_pool_size = 44G
When I start MySQL I manually set max_connections with SET GLOBAL max_connections = 1000;
Then I trigger my loadtest that sends a lot of traffic to the DB server which mostly consists of slow/bad queries.
The result I expected was that I would get close to 1000 connections, but somehow MySQL limits it to 462 connections and I cannot find the setting responsible for this limit. We are not even close to maxing out the CPU or memory.
If you have any idea, or could point me in a direction where you think the error might be, that would be really helpful.
What load test did you use? Are you sure that it can utilize thousands of connections?
You may be maxing out your server resources in the disk I/O area, especially if you're talking about a lot of slow/bad queries. Did you check disk utilization on your server?
Even if your InnoDB pool size is large, your DB still needs to be read into the cache first, and if your entire DB is large that will not help you.
I recommend you perform such a test one more time and track your disk performance during the load test using the iostat or iotop utility.
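For example, a simple way to watch the disks during the load test (assuming the sysstat package is installed):
iostat -x 5
# -x shows extended per-device stats (await, %util); 5 = refresh every 5 seconds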
Look here for more examples of the server performance troubleshooting.
I found the issue. It was due to a limitation of the Apache server: there is a "hidden" setting inside /etc/apache2/mods-enabled/mpm_prefork.conf which overrides the setting inside /etc/apache2/apache2.conf.
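For reference, the kind of directives involved look like the sketch below; the numbers are only illustrative, and the effective process cap in mods-enabled/mpm_prefork.conf wins over anything set in apache2.conf:
# /etc/apache2/mods-enabled/mpm_prefork.conf
<IfModule mpm_prefork_module>
    StartServers              5
    MinSpareServers           5
    MaxSpareServers          10
    ServerLimit             500
    MaxRequestWorkers       500   # caps concurrent Apache processes, and therefore DB connections
    MaxConnectionsPerChild    0
</IfModule>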
Thank you!
I don't know what's going on. My server has been fine for probably a year. Now I'm having a severe problem with MariaDB/MySQL. The DB server keeps crashing, and when it does and I bring it back online I get errors: several tables are marked crashed and I have to repair them. Here are the server specs...
CloudLinux Server release 6.6 installed on Centos 6.5 (x64)
WHM/Cpanel 11.50.1 Build 1 (Current)
MariaDB 10.0.21
RAM: 3,820MB (3750MB+ in use)
Swap: 1,023MB (1,023MB in use)
4 Cores (Low idle load)
Available Disk Space: 26GB
I suspect it has to do with memory. Here's a memory alert I get in WHM:
Here's what I get when I try to visit a web site on my server that uses MySQL (As expected):
Warning: mysql_connect(): Connection refused in /home/mysite/public_html/index.php on line 19
Unable to connect to server.
Here's a link to the main error log of my database server (Too much to post here): http://wikisend.com/download/182056/proton.myserver.com.err.txt
This is what happens when I restart my database server from WHM. Each time I restart the db server, random tables are marked as crashed. Sometimes a lot of tables, sometimes just a few and then I have to repair them:
Here is the contents of the /etc/my.cnf file:
root#proton [~]# cat /etc/my.cnf
[mysqld]
default-storage-engine=MyISAM
innodb_file_per_table=1
max_allowed_packet=268435456
open_files_limit=10000
innodb_buffer_pool_size=123731968
The only thing I've tried to fix this is setting this option in WHM:
I only have a handful of sites on the server. Any help is greatly appreciated.
SHOW VARIABLES LIKE '%buffer%';
Do you have other products running in the same VM/server? How much of the 3750MB are they using? Consider increasing RAM as a quick fix. Otherwise, let's look for what is chewing up RAM.
You are probably not using any InnoDB tables? If not, then change this to 0:
innodb_buffer_pool_size=123731968
For MyISAM, the most important factor is key_buffer_size; it should be no more than about 500M for your case.
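A minimal sketch for a mostly-MyISAM server of this size (the value is an example, not a measured recommendation):
[mysqld]
default-storage-engine=MyISAM
key_buffer_size=400M   # main MyISAM index cache; keep well under total RAM (~4GB here)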
What is WHM?
Abrupt stops of MySQL (for any reason) lead to the need to REPAIR MyISAM tables ("marked crashed"). (Consider moving to InnoDB to avoid this recurring nuisance.)
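Conversion is done per table; a minimal sketch (database and table names are hypothetical) is:
ALTER TABLE mydb.mytable ENGINE=InnoDB;  -- repeat for each MyISAM table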
We have a web application (racktables) that's giving us grief on our production box. Whenever users try to run a search, it gives the following error:
Pdo exception: PDOException
SQLSTATE[HY000]: General error: 5 Out of memory (Needed 2057328 bytes) (HY000)
I cannot recreate the issue on our backup server. The servers match except for the fact that in production we have 16GB RAM and on our backup we have 8GB. It's a moot point, though, because both are running 32-bit OSes and so are only using 4GB of RAM. We have also set up a swap partition...
Here's what i get back from the "free -m" command in production:
prod:/etc# free -m
total used free shared buffers
Mem: 3294 1958 1335 0 118
-/+ buffers: 1839 1454
Swap: 3817 109 3707
prod:/etc#
I've checked to make sure that my.cnf on both boxes match. The database from production was replicated onto the backup server... so the data matches as well.
I guess our options are to:
A) convert the o/s to 64 bit so we can use more RAM.
B) start tweaking some of the innodb settings in my.cnf.
But before I try either A or B, I wanted to know if there's anything else I should compare between the two servers... seeing how the backup is working just fine. There must be a difference somewhere that we are not accounting for.
Any suggestions would be appreciated.
I created a script to simulate load on the backup server and was then able to recreate the out-of-memory error message.
In the end, I added the "join_buffer_size" setting to my.cnf and set it to 3 MB. That has resolved the issue.
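For reference, the line added was of this form, under the [mysqld] section:
[mysqld]
join_buffer_size = 3M   # per-join buffer used for joins that cannot use indexes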
P.S. I downloaded and ran tuning-primer.sh as well as mysqltuner.pl to narrow down the issues.
Recently we changed the app server of our Rails website from Mongrel to Passenger [with REE and Rails 2.3.8]. The production setup has 6 machines pointing to a single MySQL server and a memcache server. Before, each machine had 5 Mongrel instances; now we have 45 Passenger instances per machine, as the RAM in each machine is 16GB with two 4-core CPUs. Once we deployed this Passenger setup in production, the website became very slow, all the requests started to queue up, and eventually we had to roll back.
Now we suspect that the cause is the increased load on the MySQL server. Before there were only 30 MySQL connections and now we have 275 connections. The MySQL server has a similar setup to our website machines, but all the configs were left at the default limits. The buffer_pool_size is only 8 MB even though we have 16GB of RAM, and the number of concurrent threads is 8.
Would this increase in simultaneous connections to MySQL have caused it to respond more slowly than when we had only 30 connections? If so, how can we make MySQL perform better with 275 simultaneous connections in place?
Any advice greatly appreciated.
UPDATE:
More information on the mysql server:
RAM: 16GB. CPU: two processors, each with 4 cores.
Tables are InnoDB, with only the default InnoDB config values.
Thanks
An idle MySQL connection uses up a stack and a network buffer on the server. That is worth about 200 KB of memory and zero CPU.
In a database using InnoDB only, you should edit /etc/sysctl.conf to include vm.swappiness = 0 to delay swapping out processes as long as possible. You should then increase innodb_buffer_pool_size to about 80% of the system's memory, assuming a dedicated database server machine. Make sure the box does not swap, that is, VSIZE should not exceed system RAM.
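On a dedicated 16GB database box, that advice translates to roughly the following (values are illustrative only):
# /etc/sysctl.conf
vm.swappiness = 0
# my.cnf, [mysqld] section
innodb_buffer_pool_size = 12G   # ~80% of 16GB, leaving room for per-connection buffers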
innodb_thread_concurrency can be set to 0 (unlimited), or 32 to 64 if you are a bit paranoid, assuming MySQL 5.5. The limit is lower in 5.1, and around 4-8 in MySQL 5.0. It is not recommended to use such outdated versions of MySQL on a machine with 8 or 16 cores; there are huge improvements with regard to concurrency in MySQL 5.5 with InnoDB 1.1.
The variable thread_concurrency has no meaning inside a current Linux. It is used to call pthread_setconcurrency() in Linux, which does nothing. It used to have a function in older Solaris/SunOS.
Without further information, the cause of your performance problems cannot be determined with any certainty, but the above general advice may help. More general advice geared at my limited experience with Ruby can be found at http://mysqldump.azundris.com/archives/72-Rubyisms.html. That article is the summary of a consulting job I once did for an early version of a very popular Facebook application.
UPDATE:
According to http://pastebin.com/pT3r6A9q , you are running 5.0.45-community-log, which is awfully old and does not perform well under concurrent load. Use a current 5.5 build; it should perform way better than what you have there.
Also, fix the innodb_buffer_pool_size. You are going nowhere with only 8M of pool here.
While you are at it, innodb_file_per_table should be ON.
Do not switch on innodb_flush_log_at_trx_commit = 2 without understanding what that means, but it may help you temporarily, depending on your persistence requirements. It is not a permanent solution to your problems in any way, though.
If you have any substantial kind of writes going on, you need to review the innodb_log_file_size and innodb_log_buffer_size as well.
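As a hedged starting point only (size these to your actual write volume and durability requirements; they are not values from the original answer):
[mysqld]
innodb_flush_log_at_trx_commit = 2   # may lose up to ~1s of transactions on a crash
innodb_log_file_size = 256M
innodb_log_buffer_size = 16M
innodb_file_per_table = 1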
If that installation is earning money, you dearly need professional help. I am no longer doing this as a profession, but I can recommend people. Contact me outside of Stack Overflow if you want.
UPDATE:
According to your processlist, you have very many queries in state Sending data. MySQL is in this state when a query is being executed, that is, the main interior Join Loop/Query Execution loop is busy. SHOW ENGINE INNODB STATUS\G will show you something like
...
--------------
ROW OPERATIONS
--------------
3 queries inside InnoDB, 0 queries in queue
...
If that number is larger than say 4-8 (inside InnoDB), 5.0.x is going to have trouble. 5.5.x will perform a lot better here.
Regarding the my.cnf: See my previous comments on your InnoDB. See also my comments on thread_concurrency (without innodb_ prefix):
# On Linux, this does exactly nothing.
thread_concurrency = 8
You are missing all InnoDB configuration entirely. Assuming that you ARE using InnoDB tables, you are not performing well, no matter what you do.
As far as I know, it's unlikely that merely maintaining/opening the connections would be the problem. Are you seeing this issue even when the site is idle?
I'd try http://www.quest.com/spotlight-on-mysql/ or similar to see if it's really your database that's the bottleneck here.
In the past, I've seen basic networking craziness lead to behaviour similar to what you describe - someone had set up the new machines with an incorrect subnet mask.
Have you looked at any of the machine statistics on the database server? Memory/CPU/disk IO stats? Is the database server struggling?