I am running a WordPress website on a Vultr VPS with 1GB RAM and an SSD.
My website has 20,000+ posts, and it is now slow even on a 4GB RAM VPS. I think this is just MySQL hitting its limits, right? I'm a noob at programming, so please help me figure this out: how can I make my website load faster with these 20,000+ posts, and what should I configure on the server?
You provided very little info, so it's impossible to diagnose the problem from here.
First, monitor the system: CPU, memory, and I/O, and check whether any of these are close to their limits.
Second, monitor the database: do you have access to the DB server? Do you have access to any monitoring facilities?
If performance decreased as the number of posts increased, it's possible that the problem is the DB, but you must understand exactly what: a missing index? Outdated statistics?
In any case, nothing can be said without proper monitoring.
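A quick, low-effort first step, if you can reach MySQL at all, is to turn on the slow query log and then EXPLAIN whatever shows up in it. This is only a minimal sketch; the 1-second threshold and the query against the standard WordPress wp_posts table are illustrative, not taken from your setup:

-- log every statement slower than 1 second
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 1;

-- later, take a slow statement from the log and inspect its plan;
-- "type: ALL" with a large "rows" estimate usually means a missing index
EXPLAIN SELECT * FROM wp_posts
WHERE post_status = 'publish'
ORDER BY post_date DESC LIMIT 10;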
1GB RAM -- that is tiny by today's standards.
Check for swapping. That is a killer for MySQL.
Which "Engine" are your tables? (Do SHOW CREATE TABLE for a typical table.)
If ENGINE=MyISAM, look in my.cnf for key_buffer_size; it should be something like 50M. (400M for 4GB of RAM)
If ENGINE=InnoDB, look in my.cnf for innodb_buffer_pool_size; it should be something like 150M (1200M for 4GB of RAM) and key_buffer_size should be about 10M.
If your settings are significantly smaller than those, that is likely to be the problem. To double check the settings, do (from phpmyadmin, mysql commandline tool, or wherever):
SHOW VARIABLES LIKE '%buffer%';
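If the values turn out to be too small, the fix goes in my.cnf, followed by a restart of mysqld. A sketch for the 1GB InnoDB case, using the illustrative numbers above:

[mysqld]
innodb_buffer_pool_size = 150M   # roughly 1200M on a 4GB box
key_buffer_size = 10M            # keep small when InnoDB is the main engine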
I currently have a cloud-based server with the following config.
CentOS 7 64-Bit
CPU:8 vCore
RAM:16 GB
MariaDB/MySQL 5.5.5
Unfortunately, I've inherited a MyISAM database with tables that I have no authority to convert to InnoDB, even though the application performs many writes from many connections. The data is WordPress posts, with the typical large text and photos.
I'm experimenting with my.cnf config changes and was wondering whether the config I've developed here makes use of the resources in the most efficient way. Is there anything glaring I could increase or decrease to squeeze out more performance?
key_buffer_size=4G
thread_cache_size = 128
bulk_insert_buffer_size=256M
join_buffer_size=64M
max_allowed_packet=128M
query_cache_limit=128M
read_buffer_size=16M
read_rnd_buffer_size=16M
sort_buffer_size=16M
table_cache=128
tmp_table_size=128M
This will depend entirely on the type of data you are storing, the structure and size of your tables and the type of usage your database has. Not to mention the amount of available RAM and the type of disks your server has.
The best recommendation, if you have shell access to the server (which I assume you must, otherwise you couldn't change my.cnf), is to download the mysqltuner script from major.io.
Run this script as a user with privileges to access your database, preferably with root privileges on MySQL too (ideally run it under sudo or as root). It will analyse your database activity since MySQL's last restart and then give you recommendations for changing the options in my.cnf.
It isn't perfect, but it'll get you much further, and more quickly, than anyone on here trying to guess what values would be appropriate for your use case.
And, while not trying to pre-empt the results, I wouldn't be surprised if mysqltuner recommends that you drastically increase the size of your join buffer, table_cache and query_cache_limit.
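Whether or not you run mysqltuner, a few standard status counters give a quick sanity check of those same areas (the interpretations are rough rules of thumb, not hard thresholds):

SHOW GLOBAL STATUS LIKE 'Opened_tables';           -- climbing fast suggests the table cache is too small
SHOW GLOBAL STATUS LIKE 'Created_tmp_disk_tables'; -- high versus Created_tmp_tables suggests raising tmp_table_size
SHOW GLOBAL STATUS LIKE 'Qcache_lowmem_prunes';    -- high suggests the query cache is too small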
I have a setup with Linux (Debian Jessie) and MySQL 5.7.13 installed.
I have set the following in my.cnf:
default_storage_engine = innodb
innodb_buffer_pool_size = 44G
When I start MySQL, I manually set max_connections with SET GLOBAL max_connections = 1000;
Then I trigger my load test, which sends a lot of traffic to the DB server, mostly consisting of slow/bad queries.
The result I expected was to get close to 1000 connections, but somehow MySQL limits it to 462 connections, and I cannot find the setting responsible for this limit. We are not even close to maxing out the CPU or memory.
If you have any idea, or could point me in a direction where you think the error might be, it would be really helpful.
Which load test did you use? Are you sure it can actually open thousands of connections?
You may be maxing out your server resources in the disk I/O area, especially if you're talking about a lot of slow/bad queries. Did you check disk utilization on your server?
Even if your InnoDB buffer pool size is large, the server still needs to read your data into the cache first, and if your entire dataset is large that will not help you.
I recommend performing the test once more and tracking your disk performance during the load test using the iostat or iotop utility.
Look here for more examples of server performance troubleshooting.
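You can also get a cache-versus-disk picture from inside MySQL itself. A rough sketch using standard InnoDB counters; the ratio is only an approximation of the buffer pool hit rate:

SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read_requests'; -- logical reads
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_reads';         -- reads that had to go to disk
-- hit rate is roughly 1 - (Innodb_buffer_pool_reads / Innodb_buffer_pool_read_requests)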
I found the issue: it was due to a limitation of the Apache server. There is a "hidden" setting inside /etc/apache2/mods-enabled/mpm_prefork.conf which overrides the setting inside /etc/apache2/apache2.conf.
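For reference, the prefork limits live in a block like the one below; presumably it was ServerLimit/MaxRequestWorkers capping the number of simultaneous Apache processes, and with them the DB connections. The values shown are illustrative only, assuming Apache 2.4:

# /etc/apache2/mods-enabled/mpm_prefork.conf
<IfModule mpm_prefork_module>
    StartServers             5
    MinSpareServers          5
    MaxSpareServers         10
    ServerLimit           1000
    MaxRequestWorkers     1000
    MaxConnectionsPerChild   0
</IfModule>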
Thank you!
I am currently using MySQL. I have noticed several times that my front-end application runs slowly after some usage. When I checked the server status in MySQL Workbench, I noticed that InnoDB buffer usage was reaching 100%, so I increased the innodb_buffer_pool_size parameter to 1G in the my.ini file of XAMPP. But InnoDB is not flushing the buffer, and the application still runs slowly after some time. Are there any other parameters to change as well?
Consider using a size for innodb_buffer_pool_size of 70%-80% of available RAM. Depending on how big your dataset is, you should increase the size.
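Note that a buffer pool at 100% usage is not by itself a sign of trouble; the pool is meant to fill up and stay full. To verify the new setting took effect and to see how the pool is being used, you can check these standard variables and counters:

SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_free';
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_data';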
I have MySQL installed on a VPS running Windows 2008 Web Server with 1GB of memory.
Is there a way of setting a maximum memory usage limit for MySQL?
I have Workbench installed, if it can be done through that.
Many Thanks
John
If you really want to impose a hard limit, you could do so, but you'd have to do it at the OS level, as there is no built-in setting. On Linux you could use ulimit, but you'd likely have to modify the way MySQL starts in order to impose it.
The better solution is to tune your server down so that a combination of the usual MySQL memory settings results in generally lower memory usage by your MySQL installation. This will of course have a negative impact on the performance of your database, but some of the settings you can tweak in my.ini are listed below, followed by a scaled-down sketch:
key_buffer_size
query_cache_size
query_cache_limit
table_cache
max_connections
tmp_table_size
innodb_buffer_pool_size
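As a sketch only, a scaled-down my.ini for a 1GB box might look like the following; every value here is an illustrative guess to be adjusted against your actual workload, not a recommendation:

[mysqld]
key_buffer_size         = 32M
query_cache_size        = 16M
query_cache_limit       = 1M
table_cache             = 64
max_connections         = 50
tmp_table_size          = 16M
innodb_buffer_pool_size = 128M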
To plot memory usage on Linux you can easily use a script:
while true
do
    # timestamp each sample
    date >> ps.log
    # the [m]ysqld pattern stops grep from matching its own process entry
    ps aux | grep '[m]ysqld' >> ps.log
    # sample once a minute
    sleep 60
done
If you are trying to check for table-cache-related allocations, run FLUSH TABLES and see whether memory usage goes down. Note, though, that because of how memory is allocated from the OS, you might not see "VSZ" going down. What you might see instead is that flushing tables regularly, or reducing the table cache, keeps memory consumption within reason.
It is often helpful to check how much memory InnoDB has allocated. In fact, this is often one of the first things I do, as it is the least intrusive. Run SHOW ENGINE INNODB STATUS and look for the memory information block.
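The relevant section of the output looks roughly like this (the numbers are illustrative):

SHOW ENGINE INNODB STATUS\G
...
----------------------
BUFFER POOL AND MEMORY
----------------------
Total memory allocated 137363456; in additional pool allocated 2097152
Buffer pool size   8192
Free buffers       1024
...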
Recently we changed the app server of our Rails website from Mongrel to Passenger [with REE and Rails 2.3.8]. The production setup has 6 machines pointing to a single MySQL server and a memcached server. Before, each machine had 5 Mongrel instances. Now we have 45 Passenger instances per machine, as each machine has 16GB RAM and two 4-core CPUs. Once we deployed this Passenger setup to production, the website became very slow, all the requests started to queue up, and eventually we had to roll back.
We now suspect that the cause is the increased load on the MySQL server. Before, there were only 30 MySQL connections; now we have 275 connections. The MySQL server has a similar setup to our website machines, but all the configs were left at their default limits: the buffer pool size is only 8MB even though we have 16GB of RAM, and the number of concurrent threads is 8.
Would this increase in simultaneous connections to MySQL have caused it to respond more slowly than when we had only 30 connections? If so, how can we make MySQL perform better with 275 simultaneous connections in place?
Any advice greatly appreciated.
UPDATE:
More information on the MySQL server:
RAM: 16GB. CPU: two processors, each with 4 cores.
Tables are InnoDB, with only default InnoDB config values.
Thanks
An idle MySQL connection uses up a stack and a network buffer on the server. That is worth about 200 KB of memory and zero CPU.
On a database server using InnoDB only, you should edit /etc/sysctl.conf to include vm.swappiness = 0 to delay swapping out processes as long as possible. You should then increase innodb_buffer_pool_size to about 80% of the system's memory, assuming a dedicated database server machine. Make sure the box does not swap, that is, VSIZE should not exceed system RAM.
innodb_thread_concurrency can be set to 0 (unlimited), or to 32-64 if you are a bit paranoid, assuming MySQL 5.5. The limit is lower in 5.1, and around 4-8 in MySQL 5.0. It is not recommended to use such outdated versions of MySQL on a machine with 8 or 16 cores; there are huge improvements wrt concurrency in MySQL 5.5 with InnoDB 1.1.
The variable thread_concurrency has no meaning on a current Linux. It is used to call pthread_setconcurrency(), which does nothing on Linux. It used to have a function in older Solaris/SunOS.
Without further information, the cause of your performance problems cannot be determined with any certainty, but the above general advice may help. More general advice geared at my limited experience with Ruby can be found at http://mysqldump.azundris.com/archives/72-Rubyisms.html. That article is the summary of a consulting job I once did for an early version of a very popular Facebook application.
UPDATE:
According to http://pastebin.com/pT3r6A9q , you are running 5.0.45-community-log, which is awfully old and does not perform well under concurrent load. Use a current 5.5 build; it should perform way better than what you have there.
Also, fix the innodb_buffer_pool_size. You are going nowhere with only 8M of pool here.
While you are at it, innodb_file_per_table should be ON.
Do not switch on innodb_flush_log_at_trx_commit = 2 without understanding what that means, but it may help you temporarily, depending on your persistence requirements. It is not a permanent solution to your problems in any way, though.
If you have any substantial kind of writes going on, you need to review the innodb_log_file_size and innodb_log_buffer_size as well.
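Pulling those points together, a my.cnf sketch for a dedicated 16GB InnoDB box might look like the following. The values are illustrative starting points rather than tuned settings, and the flush setting is commented out because of the durability tradeoff discussed above:

[mysqld]
innodb_buffer_pool_size = 12G     # about 80% of RAM on a dedicated box
innodb_file_per_table   = 1
innodb_log_file_size    = 256M    # on 5.0/5.1, changing this needs a clean shutdown and removal of the old ib_logfile*
innodb_log_buffer_size  = 8M
# innodb_flush_log_at_trx_commit = 2  # only with an understood durability tradeoff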
If that installation is earning money, you dearly need professional help. I am no longer doing this as a profession, but I can recommend people. Contact me outside of Stack Overflow if you want.
UPDATE:
According to your processlist, you have very many queries in the state Sending data. MySQL is in this state while a query is being executed, that is, the main interior join loop/query execution loop is busy. SHOW ENGINE INNODB STATUS\G will show you something like
...
--------------
ROW OPERATIONS
--------------
3 queries inside InnoDB, 0 queries in queue
...
If that number is larger than say 4-8 (inside InnoDB), 5.0.x is going to have trouble. 5.5.x will perform a lot better here.
Regarding the my.cnf: See my previous comments on your InnoDB. See also my comments on thread_concurrency (without innodb_ prefix):
# On Linux, this does exactly nothing.
thread_concurrency = 8
You are missing any InnoDB configuration at all. Assuming that you ARE using InnoDB tables, you are not performing well, no matter what you do.
As far as I know, it's unlikely that merely maintaining/opening the connections would be the problem. Are you seeing this issue even when the site is idle?
I'd try http://www.quest.com/spotlight-on-mysql/ or similar to see if it's really your database that's the bottleneck here.
In the past, I've seen basic networking craziness lead to behaviour similar to what you describe - someone had set up the new machines with an incorrect subnet mask.
Have you looked at any of the machine statistics on the database server? Memory/CPU/disk IO stats? Is the database server struggling?
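For a quick look from the database side while the slowness is happening, two standard checks (nothing here is specific to this setup):

SHOW FULL PROCESSLIST;                      -- what each of the 275 connections is doing right now
SHOW GLOBAL STATUS LIKE 'Threads_running';  -- how many connections are actively executing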