When configuring the innodb_buffer_pool_size should the value be different if you have a lot of small websites vs. a single one? - mysql

The general rule of thumb I've observed is configuring this property to use 70% of available RAM on dedicated SQL servers with over 4GB of RAM. However, I'm working on what basically amounts to a shared hosting environment that has been experiencing a ton of traffic lately, and I want to optimize it. These are dedicated MySQL servers, but they hold databases for 200-1000 different sites. Should I still configure using this rule?

You may have many tables and schemas in your MySQL instance, but a single buffer pool is used for all of them. It makes no difference what they're used for — one website or many websites or some database that is not for a website at all. Basically everything that is stored in a page of an InnoDB tablespace on that MySQL instance must be loaded into the buffer pool before it can be read or updated.
The recommendation of 70% of available RAM is not a magic number.
For example, it assumes you have a lot more data on storage than can fit in RAM. If you had 100GB of RAM and 2GB of data on storage, it would be unnecessary overkill to make a 70GB buffer pool. The pages from storage will only be copied into the buffer pool once, therefore for 2GB of data, your 70GB buffer pool would be mostly empty.
It also assumes that the remaining 30% of RAM is enough to support your operating system and other processes besides MySQL.
70% is just a starting suggestion. You need to understand your memory needs to size it properly.
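One quick sanity check is to compare the total size of your InnoDB data against the buffer pool. A sketch using standard status variables and information_schema (the 16KB page size is the default; yours may differ):

```sql
-- How much of the buffer pool is populated (each page is 16KB by default)
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_total';
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_free';

-- Total InnoDB data + index footprint on this instance, in GB
SELECT ROUND(SUM(data_length + index_length) / POWER(1024, 3), 2) AS innodb_gb
FROM information_schema.tables
WHERE engine = 'InnoDB';
```

If the total footprint is far smaller than the buffer pool, the 70% rule is oversized for your instance; if it's far larger, watch the miss rate and consider more RAM.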

Related

Could MySQL innoDB pages get fragmented inside hard disk? Or InnoDB prevents that from happening to speed up queries?

By pages I mean:
https://dev.mysql.com/doc/internals/en/innodb-page-structure.html
Could these 16KB MySQL pages get fragmented in memory or on disk? That is, if we take a disk image or memory image, is there a chance that some of these 16KB pages are fragmented rather than stored in contiguous blocks?
Or does MySQL implement them in a way that prevents fragmentation? If so, how?
This is more of a filesystem or operating system question than MySQL or InnoDB. InnoDB is writing to a file, and it is indeed possible that the filesystem fragments the writes to a single 16 KiB InnoDB page so that it is not contiguous on the physical disk. However, with modern servers and SSD storage, this basically happens 100% of the time anyway on the underlying media.
It is generally not a concern for me, at least.
(I already discussed disk at length in your similar question. I'll address RAM here.)
By perhaps two decades ago, more than half of all CPUs had moved to "virtual" addresses being distinct from "physical" addresses. To achieve this, the hardware guys implemented a "translation" mechanism that maps the 32-bit (or 64-bit) virtual address a program uses into a physical address for accessing RAM. This is done at a relatively low level in the hardware. Along with that hardware came handling of the exception raised when the physical page has been swapped to disk or has not yet been allocated for the user program.
The manufacturers mostly settled on a 4KB page size as the granularity of the translation. That is, the bottom 12 bits (2^12 = 4K) are fed through the translation untouched; the top bits are mapped (via an on-chip lookup table, backed by an array in RAM) from virtual to physical. This is done for every memory access (except maybe during booting).
With that mechanism, user programs can be totally oblivious of where the 4KB pages are in RAM, and also whether they are scattered or not.
Bottom line: Forget about the memory image. Fragmentation is really a non-issue.
On the other hand, swapping can be an issue. I think that MySQL and InnoDB are designed with the assumption that everything in RAM stays in RAM. Notice how they go to the effort to cache data blocks, index blocks, table definitions, etc. So, do not tune MySQL such that the system needs to swap.
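One way to tell whether the buffer pool is keeping the hot data in RAM is to compare logical read requests against reads that had to go to storage; both counters are standard status variables:

```sql
-- If Innodb_buffer_pool_reads grows much more slowly than
-- Innodb_buffer_pool_read_requests, nearly all reads are served from RAM.
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read_requests';
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_reads';
```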

Is it possible to set up the buffer pool extension in SQL Server 2014 when I have 4 clustered columnstore indexes?

I created a clustered columnstore index on a table, and I want to improve memory performance using the buffer pool. Is it possible to set up the buffer pool extension in SQL Server 2014 on a table that has a clustered columnstore index? What will the performance gain be after enabling it?
This is one of those "it depends" questions. The answer depends on a lot of things, such as how much RAM you already have in your system and your workload.
Real RAM is always going to give you better performance than the buffer pool extension. If you're running Standard Edition, you can now use 128GB of RAM with SQL Server, and filling your server with more RAM will give you much better performance improvements than using the buffer pool extension.
You may also find that, for your workload, using the SSDs for your data files rather than for the buffer pool extension gives better results (especially for read-heavy workloads with low cache-hit ratios, where a lot must be read from disk).
Here is a link to a test done on buffer pool extensions by the Brent Ozar team:
SQL Server 2014 Buffer Pool Extensions
You may get different results with your workload and hardware, so you really need to test it yourself to get an accurate idea of how much it will help.
It is pretty much a given though that it won't help as much as adding more RAM.
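For reference, the buffer pool extension is enabled with a single server-level command; it applies to the whole buffer pool, not to individual tables or indexes, so a clustered columnstore index neither requires nor blocks it. The file path and size below are placeholders:

```sql
-- Point FILENAME at a fast SSD volume; path and size are hypothetical
ALTER SERVER CONFIGURATION
SET BUFFER POOL EXTENSION ON
    (FILENAME = 'F:\SSDCACHE\ExtensionFile.BPE', SIZE = 32 GB);

-- To disable it again:
-- ALTER SERVER CONFIGURATION SET BUFFER POOL EXTENSION OFF;
```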

WordPress database performance: Percona server vs MySQL w/o InnoDB

I don't want to ask a subjective "which DBMS is best?" or "which DBMS of these two is better?". This doesn't have to be a fanboy debate.
Rather, I welcome any benchmark test results or specific experiences, when it comes to one specific criterion - performance - especially with respect to one particular application: WordPress.
I understand that WordPress doesn't use InnoDB, and so disabling InnoDB in MySQL can speed things up. On the other hand, Percona Server is a MySQL fork that replaces InnoDB with XtraDB and claims to be highly efficient and high-performance.
How does each stack up on performance when it comes to running WordPress? (no need for competition...both might come out looking very well, for all I know)
I have tried searching generally on Google, but haven't come across so much as an intelligent discussion, let alone performance benchmark tests.
Would greatly appreciate if any of the experts here could share their experiences. Many thanks!
And please keep any smug, snide comments like "why don't YOU try" to yourself. If I could, I would. The purpose of Stack Overflow is to share expertise and learn from each other, not to do everything yourself.
This question is less "MySQL vs. Percona Server" than "MyISAM vs. InnoDB/XtraDB". They each have their own performance characteristics, and which storage engine is right for you largely depends on your workload. Most WordPress sites are low-traffic and read-mostly, so as long as your data fits into your buffer pool (for InnoDB/XtraDB) or key cache (for MyISAM), I would expect not-too-dissimilar performance.
Having done a lot of work on WordPress database optimization, I can tell you that the performance of your WordPress site depends more upon the class of your hardware and your chosen plugins.
You should use a Caching plugin so that you can just avoid a ton of database read requests
You should avoid plugins that issue expensive queries (sadly, this covers most plugins)
You should prune your comments (usually comments are 99+% SPAM so the ones that are marked as spam are just sitting in your database taking up space)
Your host should have enough RAM for the hot dataset to fit in memory
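Pruning flagged spam is a one-liner against the stock WordPress schema (the wp_ table prefix is the default; yours may differ, so back up before deleting):

```sql
-- Remove comments WordPress has already marked as spam
DELETE FROM wp_comments WHERE comment_approved = 'spam';
```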
If you really want to go into detail about MyISAM vs. InnoDB/XtraDB, you can check out the following links:
http://www.mysqlperformanceblog.com/2009/01/12/should-you-move-from-myisam-to-innodb/
http://www.rackspace.com/knowledge_center/article/mysql-engines-myisam-vs-innodb
So, to make a long answer even longer, you'll need to profile your MySQL instance after you can generate production traffic. I know you said you couldn't, but ... this question is kind of like me asking "What haircut would look best on me", without including a picture.
Wordpress can use InnoDB (or XtraDB) just fine. I have done consulting and training for sites that host WordPress at scale, using any of MyISAM, InnoDB, and XtraDB.
WordPress 3.5.1 creates tables without specifying the storage engine. So it honors the default storage engine on whatever instance of MySQL you're using. As of MySQL 5.5 (ca. December 2010), the default storage engine is InnoDB. I tested installing WordPress on a virtual host running MySQL 5.6.10, and it created tables using the InnoDB storage engine.
I don't have any benchmarks to share, but those would be of limited use anyway, because performance depends so much on the given hardware, the traffic load, and other factors.
A CMS like WordPress tends to be heavily weighted toward read-only queries. This is where InnoDB should give good benefit, because it caches data pages as well as indexes. MyISAM only caches indexes, and relies on the filesystem cache to hold data.
So the key to making WordPress perform well is to allocate enough innodb_buffer_pool_size to hold the data and indexes for all your tables. The data size of a WordPress site (even one with hundreds of articles) isn't typically very large, so you probably only need a few GB of buffer pool to hold all frequently-requested data in the buffer. Once the data and index pages have populated the InnoDB buffer pool, 99.9% of your queries will be served out of RAM, and the site will have great performance.
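To see how much buffer pool a given site actually needs, you can total the data and index size of its schema; the schema name 'wordpress' below is an assumption, so substitute your own:

```sql
-- Data + index footprint of one WordPress schema, in MB
SELECT ROUND(SUM(data_length + index_length) / POWER(1024, 2), 1) AS size_mb
FROM information_schema.tables
WHERE table_schema = 'wordpress';
```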
As with any caching system, the real killer to performance is when your "hot" data is larger than the cache, forcing queries to incur disk I/O. A single disk I/O is worth a few thousand RAM accesses, so you want to serve content completely out of RAM as much as possible.
The improvements in XtraDB are designed to help as the number of Threads_running gets higher, or the buffer pool gets larger (e.g. dozens of GB). It's unlikely that a single WP site will exercise either MySQL or Percona Server so heavily that these improvements offer more than a slight advantage, unless you're going to host hundreds of WP sites on a given server, as a hosting company would.
You may even find that the bottleneck ceases to be the database, and then you need to focus on front-end optimizations.

At what point does MySQL INNODB fine tuning become a requirement?

I had a look at this:
http://www.mysqlperformanceblog.com/2009/01/12/should-you-move-from-myisam-to-innodb/
and:
http://www.mysqlperformanceblog.com/2007/11/01/innodb-performance-optimization-basics/
These answer a lot of my questions regarding INNODB vs MyISAM. There is no doubt in my mind that INNODB is the way I should go. However, I am working on my own and for development I have created a LAMP (ubuntu 10.10 x64) VM server. At present the server has 2 GB memory and a single SATA 20GB drive. I can increase both of these amounts without too much trouble to about 3-3.5 GB memory and a 200GB drive.
The reasons I hesitate to switch over to INNODB is:
A) The above articles mention that INNODB will vastly increase the size of the tables, and the author recommends much larger amounts of RAM and drive space. While in a production environment I don't mind this increase, in a development environment I fear I cannot accommodate it.
B) I don't really see any point in fine tuning the INNODB engine on my VM. This is likely something I will not even be allowed to do in my production environment. The articles make it sound like INNODB is doomed to fail without fine tuning.
My question is this: at what point is INNODB viable? How much RAM would I need to run INNODB on my server (with just my data for testing; this server is not open to anyone but me)? And is it safe for me to assume that a production environment that will not allow me to fine-tune the DB has likely already fine-tuned it themselves?
Also, am I overthinking/overworrying about things?
IMHO, it becomes a requirement when you have tens of thousands of rows, or when you can forecast the rate of growth for data.
You need to focus on tuning the innodb buffer pool and the log file size. Also, make sure you have innodb_file_per_table enabled.
To get an idea of how big to make the innodb buffer pool in KB, run this query:
SELECT SUM(data_length+index_length)/power(1024,1) IBPSize_KB
FROM information_schema.tables WHERE engine='InnoDB';
Here it is in MB
SELECT SUM(data_length+index_length)/power(1024,2) IBPSize_MB
FROM information_schema.tables WHERE engine='InnoDB';
Here it is in GB
SELECT SUM(data_length+index_length)/power(1024,3) IBPSize_GB
FROM information_schema.tables WHERE engine='InnoDB';
I wrote articles about this kind of tuning
First Article
Second Article
Third Article
Fourth Article
If you are limited by the amount of RAM on your server, do not give the buffer pool more than about 75% of the installed RAM; leave the rest for the sake of the OS.
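Once you've picked a number, the setting itself looks like this; the 512M value below is only a placeholder, and note that innodb_buffer_pool_size became dynamically resizable in MySQL 5.7, while older versions need it set in my.cnf followed by a restart:

```sql
-- MySQL 5.7+ only; value is in bytes
SET GLOBAL innodb_buffer_pool_size = 512 * 1024 * 1024;
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
```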
I think you may be overthinking things. It's true that InnoDB loves RAM, but if your database is small I don't think you'll have many problems. The only issue I have had with MySQL or any other database is that as the data grows, so do the requirements for accessing it quickly. You can also use compression on the tables to keep them smaller. InnoDB is vastly better than MyISAM at data integrity.
I also wouldn't worry about tuning your application until you run into a bottleneck. Writing efficient queries and good database design matter more than memory unless you're working with very large data sets.

Is it possible to load a database in the RAM?

I want to load a MYSQL-database into the RAM of my computer, is there a way to do this? I am running this database under Linux. Also, if this is possible, is there a good way to make backups, because if the computer is unexpectedly shut down, I would lose all my data.
If your buffer pool is big enough, your data is, effectively, an in-memory database with a disk backup copy. Don't fool around with RAM databases; simply make the buffer pool size as large as you can.
Read this:
http://dev.mysql.com/doc/refman/5.1/en/innodb-parameters.html#sysvar_innodb_buffer_pool_size
Yes, you can use the MEMORY engine. As far as backups go, it's your call. You never said, for example, how often you want to persist to disk. But you can use traditional MySQL replication or your own solution.
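For completeness, a MEMORY table is declared like any other table apart from the engine clause; its contents are lost on restart, and the engine doesn't support TEXT or BLOB columns (the table and column names here are made up):

```sql
CREATE TABLE session_cache (
    session_id VARCHAR(64) NOT NULL PRIMARY KEY,
    payload    VARBINARY(255)
) ENGINE = MEMORY;
```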
Absolutely; for example, under Linux you can put your database on a tmpfs mount.
If you're using InnoDB tables then I recommend adjusting the buffer pool size as S.Lott suggested above. Make it 110% or so of your database size if you have the RAM.
If your database is > 50MB you'll also want to look at increasing innodb_log_file_size. See http://dev.mysql.com/doc/refman/5.1/en/innodb-parameters.html#sysvar_innodb_log_file_size. Perhaps set it to around 25-50% of your buffer pool size, but 1GB max.
The innodb_log_file_size is a bit tricky to adjust. You need to shut the DB down, move the current log files to a backup location, and let MySQL recreate them when it's restarted (i.e. after you've changed the values in my.cnf). Google it and you'll find the details.
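Before resizing, it's worth recording the current settings and getting a feel for your redo write volume; these variables and counters are standard:

```sql
-- Current redo log configuration
SHOW VARIABLES LIKE 'innodb_log_file_size';
SHOW VARIABLES LIKE 'innodb_log_files_in_group';

-- Bytes written to the redo log since startup; sample twice, an hour
-- apart, to estimate write volume per hour
SHOW GLOBAL STATUS LIKE 'Innodb_os_log_written';
```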