I noticed that our server's swap usage is at 98.66% (1,973,240 of 2,000,000). Any tips for reducing this? Just for background, I have a
CentOS 6.6 rack server
1.92 (24-core) processors
48 GB RAM
We do some very heavy database (MySQL) work with it; the database resides on a 240 GB SSD. We also do a lot of file writes; for example, I had to fix a few things today because we were using 99% of the 2 TB main drives. We also have a 160 GB SSD for writing report files. The server is typically at 73% RAM usage and 300% CPU usage. So any help would be wonderful. Like I said, we do a ton of work with it, for example around 5.2 GB of database traffic an hour.
Oops, I thought I put this on Server Fault; I don't see a way to move it?
Thanks,
MySQL performs terribly when swapping.
48 GB -- what is taking that? Is it mostly MySQL? If so, let's look at how to decrease the caches in MySQL to avoid swapping.
If you are using InnoDB, set innodb_buffer_pool_size to about 70% of available RAM, and key_buffer_size to 20M.
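On a 48 GB box that is mostly MySQL, those two settings might look like this in my.cnf (the 33G figure is just 70% of 48 GB and is illustrative; adjust for whatever else the machine runs):

```ini
[mysqld]
# ~70% of 48 GB, assuming MySQL/InnoDB is the main consumer of RAM
innodb_buffer_pool_size = 33G
# small key buffer, since MyISAM should only be handling temp/system tables
key_buffer_size = 20M
```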
If you are using MyISAM; well, don't. (I will elaborate if needed.)
73% RAM sounds like you are not really swapping.
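Before cutting caches, it is worth confirming what the swap numbers actually mean. A quick check with standard Linux tools (nothing MySQL-specific; the swappiness tweak at the end is a common suggestion, not a hard rule):

```shell
# Is the box actually swapping right now?
free -m

# How eager is the kernel to swap? (60 is a common default;
# lower values keep pages in RAM longer)
cat /proc/sys/vm/swappiness

# On a dedicated database server a lower value is often suggested,
# e.g. as root:  sysctl -w vm.swappiness=10
# (persist it in /etc/sysctl.conf to survive reboots)
```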
300% CPU sounds like you have some non-MySQL applications that are CPU-bound, or you have some slow queries. If the latter, let's see them; we may be able to improve them.
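If slow queries are a suspect, the slow query log will surface them. A minimal my.cnf sketch (the log path is an assumption; put it wherever suits your box):

```ini
[mysqld]
slow_query_log      = 1
slow_query_log_file = /var/log/mysql/slow.log
long_query_time     = 1   # seconds; queries slower than this get logged
```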
Related
I have updated several MySQL tables to InnoDB. After doing so, MySQL has become sluggish, and the hard drive the database is on is constantly writing, even though my changes have been completed. Periodically the CPU gets heavy use, 100% on two cores, but whatever is using them is not showing up in System Monitor (Debian). Reading the database is possible, but slow. I have not tried writing, as it is obviously busy doing something - but I do not know what.
Digging deeper, I have found that I have a very large ibdata1 file, almost 62 GB - I have some large tables in InnoDB, including 16, 10, 9, 1.5 and 1.1 GB, and many smaller ones.
Does anyone have any idea what may be happening here? Or logs I can look at that might shed some light? I have restarted, but when MySQL comes online the same thing happens, and it has been going on for over an hour. Also, would it be a good idea for me to change some InnoDB tables back to MyISAM? Of the large ones, none require InnoDB for transactions, but some of my smaller ones do (under 50 MB).
The two most important options to start with are innodb_buffer_pool_size and innodb_log_file_size. Set the former to be as big as your database, but leave at least 4-6 GB for the OS.
The optimal log file size depends on how write-heavy your workload is, but something around 256 MB works for most workloads.
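As a sketch, on a machine with, say, 16 GB of RAM and a ~10 GB database, that advice might translate to (illustrative numbers only):

```ini
[mysqld]
innodb_buffer_pool_size = 10G   # roughly the size of the data set
innodb_log_file_size    = 256M  # note: changing this on MySQL 5.5 and
                                # earlier requires a clean shutdown and
                                # removal of the old ib_logfile* files first
```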
My application's typical DB usage is read/update on one large table. I wonder: does MySQL scale read operations on a single multi-processor machine? What about write operations: are they able to utilize multiple processors?
By the way - unfortunately I am not able to optimize the table schema.
Thank you.
Setup details:
x64, quad-core
Single hard disk (no RAID)
Plenty of memory (4 GB+)
Linux 2.6
MySQL 5.5
If you're using conventional hard disks, you'll often find you run out of I/O bandwidth before you run out of CPU cores. The only way to pin a four-core machine is with a very high-performance striped SSD RAID array.
If you're not able to optimize the schema, you have very limited options. This is like asking to tune a car without lifting the hood. Maybe you can change the tires or use better gasoline, but fundamental performance gains come from a few things, most notably additional indexes and strategically denormalizing data.
In database land, 4 GB of memory is almost nothing; 8 GB is the absolute minimum for a system under any real load, and a single disk is a very bad idea. At the very least you should have some form of mirroring for data-integrity reasons.
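Even when the schema is off-limits, you can at least see whether your hot queries are using the indexes that already exist. EXPLAIN is read-only and needs no schema changes (the table and column names here are hypothetical):

```sql
-- A full table scan shows up as type=ALL with no key chosen;
-- an indexed lookup shows type=ref or type=range with a key name.
EXPLAIN SELECT *
FROM   orders              -- hypothetical large table
WHERE  customer_id = 42;
```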
Dumb question: I have 4 GB of RAM and my dataset is around 500 MB. How can I make sure MySQL/InnoDB is keeping my dataset in RAM?
MySQL Tuning Primer gives you lots of info and recommendations regarding your MySQL performance. Keep in mind (it will warn you about this) that the instance should have been running for a period of time to give you accurate feedback.
Set innodb_buffer_pool_size to 3G - InnoDB will load as much data as it can into the buffer pool.
Darhazer is right (I'd vote him up but don't have the rep points). It's generally recommended to set innodb_buffer_pool_size to 70-80% of memory, although it's really more complicated than that, since you need to account for how much RAM the other parts of your system are actually using.
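For the 4 GB / 500 MB case above, a sketch of the my.cnf line (numbers are illustrative; a 500 MB dataset fits many times over, so even 1G would hold it entirely):

```ini
[mysqld]
innodb_buffer_pool_size = 3G  # leaves ~1 GB for the OS and everything else
```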
I had a look at this:
http://www.mysqlperformanceblog.com/2009/01/12/should-you-move-from-myisam-to-innodb/
and:
http://www.mysqlperformanceblog.com/2007/11/01/innodb-performance-optimization-basics/
These answer a lot of my questions regarding InnoDB vs. MyISAM. There is no doubt in my mind that InnoDB is the way I should go. However, I am working on my own, and for development I have created a LAMP (Ubuntu 10.10 x64) VM server. At present the server has 2 GB memory and a single 20 GB SATA drive. I can increase both without too much trouble, to about 3-3.5 GB memory and a 200 GB drive.
The reasons I hesitate to switch over to InnoDB are:
A) The above articles mention that InnoDB will vastly increase the size of the tables, and the author recommends much larger amounts of RAM and drive space. While in a production environment I don't mind this increase, I fear I cannot accommodate it in a development environment.
B) I don't really see any point in fine-tuning the InnoDB engine on my VM. This is likely something I will not even be allowed to do in my production environment. The articles make it sound like InnoDB is doomed to fail without fine-tuning.
My question is this: at what point is InnoDB viable? How much RAM would I need to run InnoDB on my server (with just my data, for testing; this server is not open to anyone but me)? And is it safe for me to assume that a production environment that will not allow me to fine-tune the DB has likely already been fine-tuned?
Also, am I overthinking/over-worrying about things?
IMHO, it becomes a requirement when you have tens of thousands of rows, or when you can forecast the rate of growth for data.
You need to focus on tuning the InnoDB buffer pool and the log file size. Also, make sure you have innodb_file_per_table enabled.
To get an idea of how big to make the innodb buffer pool in KB, run this query:
SELECT SUM(data_length+index_length)/power(1024,1) IBPSize_KB
FROM information_schema.tables WHERE engine='InnoDB';
Here it is in MB
SELECT SUM(data_length+index_length)/power(1024,2) IBPSize_MB
FROM information_schema.tables WHERE engine='InnoDB';
Here it is in GB
SELECT SUM(data_length+index_length)/power(1024,3) IBPSize_GB
FROM information_schema.tables WHERE engine='InnoDB';
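Once the query gives you a size, round up a bit for growth and put it in my.cnf along with file-per-table (the example values assume the query returned roughly 2 GB; substitute your own result):

```ini
[mysqld]
innodb_file_per_table   = 1
innodb_buffer_pool_size = 3G   # query result (~2 GB) plus headroom
```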
I wrote articles about this kind of tuning
First Article
Second Article
Third Article
Fourth Article
If you are limited by the amount of RAM on your server, do not let the buffer pool take more than 75% of installed RAM; leave the remaining 25% for the sake of the OS.
I think you may be overthinking things. It's true that InnoDB loves RAM, but if your database is small I don't think you'll have many problems. The only issue I have had with MySQL, or any other database, is that as the data grows, so do the requirements for accessing it quickly. You can also use compression on the tables to keep them smaller, but InnoDB is vastly better than MyISAM at data integrity.
I also wouldn't worry about tuning your application until you run into a bottleneck. Writing efficient queries and good database design seem to be more important than memory unless you're working with very large data sets.
I have a pretty much default installation of MySQL on Windows 2003. I am rebuilding some indexes and the process only seems to use 3-20% of the CPU.
Is there a way to allow it to use more and speed up the process?
This applies to every application/process, not only MySQL. If your database is using 3-20% CPU and the final performance is still unacceptable, it means you don't lack processor power, since the CPU is idle most of the time. Most probably your bottleneck is at the HDD or HDD-controller level. Have you tested the I/O bandwidth and access time of your hard disk?
Can you mount a ramdisk, and move your database tables to that instead? You'll need lots of RAM, but if your DB is only a few hundred MB, then you'd be skipping the heavy disk IO. Obviously, you'd want to be working from backups in case the power went out...
Also, along the lines of what Fernando mentioned, try to figure out where your bottleneck is. It's probably the hard disk. Open up Perfmon and add counters for PhysicalDisk to see if that's where your bottleneck is. Given the activity you are doing, it's probably the writes to the actual disk that are causing the slowdown.