How can I make sure MySQL is using all available memory?

Dumb question: I have 4 GB of RAM and my dataset is around 500 MB. How can I make sure MySQL/InnoDB is keeping my dataset in RAM?

MySQL Tuning Primer gives you lots of info and recommendations regarding your MySQL performance. Keep in mind (and it will warn you) that the instance should have been running for a while to give you accurate feedback.

Set innodb_buffer_pool_size to 3G; InnoDB will load as much data as it can into the buffer pool.

Darhazer is right (I'd vote him up but don't have the rep points). It's generally recommended to set innodb_buffer_pool_size to 70-80% of memory, although it's really more complicated than that, since you need to account for how much RAM other parts of your system are actually using.
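A minimal sketch of the advice above, assuming a 4 GB box that is mostly dedicated to MySQL (the 3G value and the idea of checking the buffer-pool counters are illustrations, not part of the original answers). In my.cnf:
[mysqld]
innodb_buffer_pool_size = 3G
After a restart, you can confirm the setting and watch how often InnoDB has to go to disk:
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';
If Innodb_buffer_pool_reads stays near zero while Innodb_buffer_pool_read_requests keeps climbing, the working set is being served from RAM.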

Related

Linux server swappiness

I noticed that our server swap is at Swap Used 98.66% (1,973,240 of 2,000,000). Any tips on reducing this? Just for background, I have a
CentOS 6.6 rack server
1.92 (24 core) processors
48 GB RAM
We do some very heavy database (MySQL) work with it, which resides on a 240 GB SSD. We also do a lot of file writes; for example, I had to fix a few things today because we were using 99% of the 2 TB main drives, and we also have a 160 GB SSD for writing report files. It's typically at 73% RAM usage and 300% CPU usage. So any help would be wonderful. Like I said, we do a ton of work with it, for example around 5.2 GB of database traffic an hour.
Oops, I thought I put this on Server Fault; I don't see a way to move it?
Thanks,
MySQL performs terribly when swapping.
48GB -- what is taking that? Is it mostly MySQL? If so, let's look at how to decrease the caches in MySQL to avoid swapping.
If you are using InnoDB, set innodb_buffer_pool_size to about 70% of available RAM, and key_buffer_size to 20M (a config sketch follows below).
If you are using MyISAM; well, don't. (I will elaborate if needed.)
73% RAM sounds like you are not really swapping.
300% CPU sounds like you have some non-MySQL applications that are CPU-bound, or you have some slow queries. If the latter, let's see them; we may be able to improve them.
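A minimal my.cnf sketch of the suggestions above for a 48 GB box (the 32G figure is simply ~70% of installed RAM; it and the vm.swappiness tweak are assumptions for illustration, not figures from this answer):
[mysqld]
innodb_buffer_pool_size = 32G
key_buffer_size = 20M
On the OS side, Linux can also be told to avoid swapping aggressively:
sysctl -w vm.swappiness=1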

At what point does MySQL InnoDB fine-tuning become a requirement?

I had a look at this:
http://www.mysqlperformanceblog.com/2009/01/12/should-you-move-from-myisam-to-innodb/
and:
http://www.mysqlperformanceblog.com/2007/11/01/innodb-performance-optimization-basics/
These answer a lot of my questions regarding InnoDB vs MyISAM. There is no doubt in my mind that InnoDB is the way I should go. However, I am working on my own, and for development I have created a LAMP (Ubuntu 10.10 x64) VM server. At present the server has 2 GB memory and a single 20 GB SATA drive. I can increase both of these amounts without too much trouble, to about 3-3.5 GB memory and a 200 GB drive.
The reasons I hesitate to switch over to InnoDB are:
A) The above articles mention that InnoDB will vastly increase the size of the tables, and he recommends much larger amounts of RAM and drive space. While in a production environment I don't mind this increase, in a development environment I fear I cannot accommodate it.
B) I don't really see any point in fine-tuning the InnoDB engine on my VM. This is likely something I will not even be allowed to do in my production environment. The articles make it sound like InnoDB is doomed to fail without fine-tuning.
My question is this: at what point is InnoDB viable? How much RAM would I need to run InnoDB on my server (with just my data for testing; this server is not open to anyone but me)? And is it safe for me to assume that a production environment that will not allow me to fine-tune the DB has likely already fine-tuned it themselves?
Also, am I overthinking/overworrying about things?
IMHO, it becomes a requirement when you have tens of thousands of rows, or when you can forecast the rate of growth for data.
You need to focus on tuning the InnoDB buffer pool and the log file size. Also, make sure you have innodb_file_per_table enabled (a my.cnf sketch follows at the end of this answer).
To get an idea of how big to make the innodb buffer pool in KB, run this query:
SELECT SUM(data_length+index_length)/power(1024,1) IBPSize_KB
FROM information_schema.tables WHERE engine='InnoDB';
Here it is in MB:
SELECT SUM(data_length+index_length)/power(1024,2) IBPSize_MB
FROM information_schema.tables WHERE engine='InnoDB';
Here it is in GB:
SELECT SUM(data_length+index_length)/power(1024,3) IBPSize_GB
FROM information_schema.tables WHERE engine='InnoDB';
I wrote articles about this kind of tuning
First Article
Second Article
Third Article
Fourth Article
If you are limited by the amount of RAM on your server, do not allocate more than 75% of it to MySQL; leave the remaining 25% for the sake of the OS.
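Pulling the answer's settings together, a my.cnf sketch for the 2 GB development VM described in the question might look like this (the specific values are assumptions for illustration, sized from the queries above and the leave-RAM-for-the-OS caveat, not figures given in the answer):
[mysqld]
innodb_file_per_table = 1
innodb_buffer_pool_size = 512M
innodb_log_file_size = 128M
Note that on older MySQL versions, changing innodb_log_file_size requires a clean shutdown and removing the old ib_logfile* files before restarting.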
I think you may be overthinking things. It's true that InnoDB loves RAM, but if your database is small I don't think you'll have many problems. The only issue I have had with MySQL or any other database is that as the data grows, so do the requirements for accessing it quickly. You can also use compression on the tables to keep them smaller, but InnoDB is vastly better than MyISAM at data integrity.
I also wouldn't worry about tuning your application until you run into a bottleneck. Writing efficient queries and database design seem to be more important than memory unless you're working with very large data sets.
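If the tables are InnoDB, the compression mentioned above looks roughly like this (the table name is hypothetical, and on MySQL 5.5/5.6 it also needs innodb_file_per_table=1 and innodb_file_format=Barracuda):
ALTER TABLE my_big_table ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;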

How do I determine maximum transaction size in MySQL?

Saw the same question posited for PostgreSQL here; wondering if anyone knows (a) the MySQL flavour of the response and (b) which MySQL options I would examine to determine/influence the answer.
I don't need an absolute answer, btw, but if I were to propose inserting, say, 200,000 rows of ~2 KB each, would you consider that very straightforward, or pushing the limit a bit?
Assume MySQL is running on a well-specced Linux box with 4 GB of RAM, shedloads of disk space, and an instance tuned by someone who generally knows what they're doing!
Cheers
Brian
For InnoDB, the transaction size will be limited by the size of the redo log (ib_logfile*), so if you plan to commit very large transactions, make sure you set innodb_log_file_size=256M or more. The drawback is that it will take longer to recover in case of a crash.
But for the record, Innobase employees recommend keeping your transactions short.
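A rough back-of-the-envelope check for the question's example (ignoring row overhead and indexes, so treat it as a sketch): 200,000 rows x ~2 KB ≈ 390 MB of row data in one transaction. With the default two redo log files, the suggested setting gives 2 x 256 MB = 512 MB of redo log capacity, which is in the right ballpark:
[mysqld]
innodb_log_file_size = 256M
innodb_log_files_in_group = 2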
There are no transaction limits built into SQL servers. The limit is the hardware running it: physical RAM and free space on the hard disk.
We successfully run imports of millions of rows.

Best storage engine for constantly changing data

I currently have an application that is using 130 MySQL tables, all with the MyISAM storage engine. Every table has multiple queries every second, including select/insert/update/delete queries, so the data and the indexes are constantly changing.
The problem I am facing is that the hard drive is unable to cope, with waiting times of 6+ seconds for I/O access given so many reads/writes being done by MySQL.
I was thinking of changing to just 1 table and making it memory based. I've never used a memory table for something with so many queries though, so I am wondering if anyone can give me any feedback on whether it would be the right thing to do?
One possibility is that there may be other issues causing performance problems - 6 seconds seems excessive for CRUD operations, even on a complex database. Bear in mind that (back in the day) ArsDigita could handle 30 hits per second on a two-way Sun Ultra 2 (IIRC) with fairly modest disk configuration. A modern low-mid range server with a sensible disk layout and appropriate tuning should be able to cope with quite a substantial workload.
Are you missing an index? Check the query plans of the slow queries for table scans where they shouldn't be (see the EXPLAIN sketch after this answer).
What is the disk layout on the server? - do you need to upgrade your hardware or fix some disk configuration issues (e.g. not enough disks, logs on the same volume as data).
As the other poster suggests, you might want to use InnoDB on the heavily written tables.
Check the setup for memory usage on the database server. You may want to configure more cache.
Edit: Database logs should live on quiet disks of their own. They use a sequential access pattern with many small sequential writes. Where they share disks with a random-access workload like data files, the random disk access creates a big system performance bottleneck on the logs. Note that this is write traffic that needs to be completed (i.e. written to physical disk), so caching does not help with this.
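For the missing-index check suggested above, a quick sketch (the table and column names are hypothetical):
EXPLAIN SELECT * FROM orders WHERE customer_id = 42;
-- a "type" of ALL with a large "rows" estimate indicates a full table scan
CREATE INDEX idx_orders_customer ON orders (customer_id);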
I've now changed to a MEMORY table and everything is much better. In fact I now have extra spare resources on the server allowing for further expansion of operations.
Is there a specific reason you aren't using InnoDB? It may yield better performance due to caching and a different concurrency model. It will likely require more tuning, but may yield much better results.
should-you-move-from-myisam-to-innodb
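A sketch of the suggested conversion for one heavily written table (the table name is hypothetical; the rebuild locks the table for its duration on older MySQL versions):
ALTER TABLE hot_table ENGINE=InnoDB;
SHOW TABLE STATUS LIKE 'hot_table';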
I think that your database structure is very wrong and needs to be optimised; it has nothing to do with the storage engine.

How to increase mysqld-nt CPU usage

I have a pretty much default installation of MySQL on Windows 2003. I am rebuilding some indexes and the process only seems to use 3-20% of the CPU.
Is there a way to allow it to use more and speed up the process?
This applies to every application/process, not only MySQL. If your database is using 3-20% CPU and the final performance is still unacceptable, it means that you don't lack processor power, since the CPU is idle most of the time. What is most probable is that your bottleneck is at the HDD or HDD-controller level. Have you tested the I/O bandwidth and access time of your HDD?
Can you mount a ramdisk, and move your database tables to that instead? You'll need lots of RAM, but if your DB is only a few hundred MB, then you'd be skipping the heavy disk IO. Obviously, you'd want to be working from backups in case the power went out...
Also, along the lines of what Fernando mentioned, try to figure out where your bottleneck is. It's probably the hard disk. Open up Perfmon and add counters for PhysicalDisk to see if that's where your bottleneck is. From the activity you are doing, it's probably writing to the actual disk that is causing the slowdown.