Speed and tuning for MySQL (1 billion rows) - mysql

My company has a MySQL server used by a team of analysts (usually 3-4 at a time). Lately the queries have slowed down; some of them now take days to complete, on a database with tables of up to 1 billion rows (10^9 records).
Server main specs: Linux OS, 64 GB of RAM, 3 TB of hard drive.
We know nothing about fine tuning, so any tool or rule of thumb to find out what is causing the trouble, or at least to narrow it down, would be welcome.
Going to MySQL Workbench > Table Inspector, I found these key values for the DB we use the most:
DB size: ~500 GB
Largest table size: ~80 GB
Index length (for the largest table): ~230 GB. The index is built on 6 fields.
Almost no MyISAM tables, all InnoDB
Ideally I would like to fine-tune the server (preferred), the DB (less preferred), or both (eventually), in the simplest possible way, to speed things up.
My questions:
Are these values (500, 80, 230 GB) normal and manageable for a medium-size server?
Is it normal to have indexes of this size (230 GB), far larger than the table itself?
What parameters/strategy can be tweaked to fix this? I'm thinking of memory or log settings, or buying more RAM for the server, but I'm happy to investigate any sensible answers.
Many thanks.

If you're managing a MySQL instance of this scale, it would be worth your time to read High Performance MySQL which is the best book on MySQL tuning. I strongly recommend you get this book and read it.
Your InnoDB buffer pool is probably still at its default size, not taking advantage of the RAM on your Linux system. It doesn't matter how much RAM you have if you haven't configured MySQL to use it!
There are other important tuning parameters too. MySQL 5.7 Performance Tuning Immediately After Installation is a great introduction to the most important tuning options.
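For example, here is a minimal sketch of checking and raising the buffer pool on a dedicated 64 GB box; the 48G figure is only an illustration, size it to your own workload:

    -- Check the current buffer pool size in GB; the out-of-the-box default is only 128 MB.
    SELECT @@innodb_buffer_pool_size / 1024 / 1024 / 1024 AS buffer_pool_gb;

    -- On MySQL 5.7+ the buffer pool can be resized online (value in bytes):
    SET GLOBAL innodb_buffer_pool_size = 48 * 1024 * 1024 * 1024;

To make the change survive a restart, also put innodb_buffer_pool_size = 48G under [mysqld] in my.cnf.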
Indexes can be larger than the table itself. A ratio of nearly 3 to 1 is unusual, but not necessarily bad. It depends on what indexes you need, and there's no way to know that unless you consider the queries you need to run against this data.
I did a presentation How to Design Indexes, Really a few years ago (it's just as relevant to current versions of MySQL). Here's the video: https://www.youtube.com/watch?v=ELR7-RdU9XU

Here's the order in which you want to check things:
1) Tune your indexes. Pick a commonly-used slow query and analyze it. Learn about EXPLAIN ANALYZE so that you can tell whether your query is using indexes properly (see the sketch after this list). It is entirely possible that your tables are not indexed correctly, and your days-long queries might run in minutes. Literally. Without proper indexes, your queries will be doing full table scans in order to do joins, and with billions of rows that is going to be very, very slow.
A good introduction to indexes is at http://use-the-index-luke.com/ but there are zillions of books and articles on the topic.
1a) Repeat #1 with other slow queries. See if you can improve them. If you've worked on a number of slow queries and you're not able to speed them up, then proceed to server tuning.
2) Tune your server. Bill Karwin's links will be helpful there.
3) Look at increasing hardware/RAM. This should only be a last resort.
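As a rough sketch of what step 1 looks like in practice (the orders table and its columns are made-up examples):

    -- EXPLAIN shows the chosen plan; EXPLAIN ANALYZE (MySQL 8.0.18+) also runs the
    -- query and reports actual row counts and timings for each step.
    EXPLAIN ANALYZE
    SELECT customer_id, SUM(total)
    FROM orders
    WHERE created_at >= '2023-01-01'
    GROUP BY customer_id;

    -- If the plan shows a full table scan on orders, an index matching the WHERE
    -- column is the usual first fix:
    ALTER TABLE orders ADD INDEX idx_orders_created_at (created_at);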
Spend time with #1. It is likely to return the best bang for the buck. There is much you can do to improve things without spending a dime. You'll also learn how to write better queries and create better indexes and prevent these problems in the future.
Also: Listen to Bill Karwin and his knowledge. He is an Expert with a capital E.

In a survey of 600 rather random tables (a few were much bigger than yours), your 230GB:80GB ratio would be at about the 99th percentile. Please provide SHOW CREATE TABLE so we can discuss whether you are "doing something wrong", or it is simply an extreme situation. (Rarely is a 6-column index advisable. And if it is a single index adding up to 230GB, something is 'wrong'.)
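One way to see whether that 230 GB is a single index or the sum of several (a sketch; the database and table names are placeholders) is to look at InnoDB's per-index statistics:

    -- Approximate size of each index on one table (MySQL 5.6+, persistent stats enabled).
    -- The PRIMARY row is the clustered index, i.e. the table data itself.
    SELECT index_name,
           ROUND(stat_value * @@innodb_page_size / 1024 / 1024 / 1024, 1) AS size_gb
    FROM mysql.innodb_index_stats
    WHERE database_name = 'your_db'
      AND table_name = 'your_largest_table'
      AND stat_name = 'size';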
I've seen bigger tables run fine in smaller machines. If you are doing mostly "point queries", there is virtually no size limitation. If you are using UUIDs, you are screwed. That is, it really depends on the data, the queries, the schema, the phase of the moon, your karma, etc.
A cross-join can easily get to a trillion things to do. A join with eq_ref is often not much slower than a query with no joins.
"You can't tune your way out of a performance problem." "Throwing hardware at a performance problem either wastes money, or delays the inevitable." Instead, let's see the "queries that are slowing down", together with EXPLAIN SELECT ... and SHOW CREATE TABLE.
Is this a Data Warehouse application? Do you have Summary Tables?
Here is my Cookbook on creating indexes. But it might be faster if you show us your code.
And I can provide another Tuning Analysis.
EXPLAIN SELECT ... is a critical piece of the information needed to investigate your request for assistance.
SHOW CREATE TABLE for each table involved would also be helpful.
At this point in time, neither is visible in the data available from user......

I will try to answer your question but keep in mind that I am no MySQL expert.
1) It is quite a large DB with a large table, but nothing a fairly sized server couldn't handle. It really depends on your workload, though.
2) An index size greater than the table itself is interesting, but it is probably the combined size of all indexes on that table. In that case it is completely normal.
3) With 64 GB of RAM for a ~500 GB database there will probably be a lot of disk operations going on, and that will definitely slow you down, so adding some memory will surely help. Check how the server behaves while a query is running with iotop, and compare that with the output of top to see whether the server is waiting on disk.

Related

Do the results of a SQL query explain depend on the size of the database?

My application is using JPA with Hibernate, and I see in my log files that Hibernate generates some interesting SQL queries with a lot of joins. The application does not have a lot of users right now, and I am worried that some of the queries being generated by Hibernate are going to cause problems when the database grows in size.
I have run some of the SQL queries generated by Hibernate through the EXPLAIN command to look at the query plans that are generated.
Is the output of EXPLAIN dependent on the size of the database? When my database grows in size will the query planner generate different plans for the same SQL queries?
At what point in the development/deployment cycle should I be looking at query plans for the SQL queries generated by Hibernate? When is the right time to use EXPLAIN?
How can the output of EXPLAIN be used to determine whether a query will become a problem, when the database is so small that every query, no matter how complex-looking, runs in under 0.5 seconds?
I am using Postgres 9.1 as the database for my application but I am interested in the general answer to the above questions.
Actually, @ams, you are right in your comment - it is generally pointless to use EXPLAIN with tiny amounts of data.
If a table only has 10 rows then it's quite likely all in one page and it costs (roughly) the same to read one row as all 10. Going to an index first and then fetching the page will be more expensive than just reading the lot and ignoring what you don't want. PostgreSQL's planner has configured costs for things like index reads, table reads, disk accesses vs cache accesses, sorting etc. It sizes these according to the (approximate) size of the tables and distribution of values within them. What it doesn't do (as of the pending 9.2 release) is account for cross-column or cross-table correlations. It also doesn't offer manual hints that let you override the planner's choices (unlike MS-SQL or Oracle).
Each RDBMS' planner has different strengths and weaknesses but I think it's fair to say that MySQL's is the weakest (particularly in older releases).
So - if you want to know how your system will perform with 100 concurrent users and billions of rows you'll want to generate test data and load for a sizeable fraction of that. Worse, you'll want to have roughly the same distribution of values too. If most clients have about 10 invoices but a few have 1000 then that's something your test data will need to reflect. If you need to maintain performance across multiple RDBMS then repeat testing across all of them.
This is all separate from the overall performance of the system of course, which depends on the size and capabilities of your server vs its required load. A system can cope with a steady increase in load and then suddenly you will see performance drop rapidly as cache sizes are exceeded etc.
HTH
1. Is the output of EXPLAIN dependent on the size of the database? When my database grows in size will the query planner generate different plans for the same SQL queries?
It all depends on your data and the statistics about the data. Many performance problems occur because of a lack of statistics, when somebody forgot to ANALYZE or turned autovacuum (incl. analyze) off.
2. At what point in the development/deployment cycle should I be looking at SQL query plans for SQL queries generated by Hibernate? When is the right time to use EXPLAIN?
Hibernate has a habit of sending lots and lots of queries to the database, even for simple joins. Turn your query log on and keep an eye on it. Later on, you could run auto_explain against all queries from your log.
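A sketch of the auto_explain approach in PostgreSQL (the thresholds are arbitrary examples; for all sessions you would instead add the module to shared_preload_libraries in postgresql.conf):

    -- Log the plan of every query in this session that runs longer than the threshold.
    LOAD 'auto_explain';
    SET auto_explain.log_min_duration = '250ms';
    SET auto_explain.log_analyze = true;   -- include actual row counts and timings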
3. How can the output of EXPLAIN be used to determine if a query will become a problem, when the database is so small that every query, no matter how complex-looking, runs in under 0.5 seconds?
No, because it all depends on the data. When 95% of your users are male, an index on gender won't be used when searching for a man. When you're looking for a woman, the index makes sense and will be used. A partial index covering only the records where gender = female is even better: it's pointless to index something that will never benefit from an index, and the partial index will be much smaller.
The only thing you can do to predict index usage is to test with SET enable_seqscan = off; that will show that it is possible to use some index, but that's all.
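A sketch of both ideas in PostgreSQL (the table and column names are invented):

    -- Partial index: only the rare value is indexed, so the index stays small.
    CREATE INDEX idx_users_female ON users (signup_date) WHERE gender = 'F';

    -- A query whose predicate implies the index condition can use it:
    SELECT * FROM users WHERE gender = 'F' AND signup_date > '2024-01-01';

    -- To check whether the planner *could* use an index at all, discourage
    -- sequential scans for the session and look at the plan:
    SET enable_seqscan = off;
    EXPLAIN SELECT * FROM users WHERE gender = 'F' AND signup_date > '2024-01-01';
    SET enable_seqscan = on;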

How to increase performance of a MySQL query when we have more than 1 million records?

In the User table I have more than 1 million records. How can I manage this using MySQL and Symfony 1.4 to make performance better, so that queries return quickly?
To significantly improve the performance of a well designed system, all you can do is increase the resources. Typically, these days, the cheapest way to do this is to distribute the task.
For example, a slow part of an RDBMS is reading from and writing to storage (such systems typically start out I/O bound, that is, they mostly wait for data to be read from or written to storage).
So, to offset this, RDBMSs very commonly allow you to split a table across multiple disks, effectively multiplying I/O performance (an approach similar to RAID 0).
Adding more hard disks increases performance, up to the maximum I/O your system can support (either because the system simply cannot push more data through its buses, or because it needs to crunch the numbers a bit as it fetches them and becomes CPU bound; optimally you would be utilising both).
After that you have to start multiplying the systems, distributing the data across database nodes. For this to work, either the RDBMS must support it or there must be an application layer that coordinates distributing the tasks and merging the results, but normally things will still scale.
I would say that with 512 systems you could have all trillion (10^12) records effectively cached and achieve relatively nice performance. But really you should specify what kind of performance you are looking for: there is a difference between full-text searches over tera-records and mostly simple fetches and updates. Also, for certain work 500 ms (or even more) is considered good performance, while for other work it would be horrible.
First off: there's a big difference between 1 trillion and 1 million.
As for your performance problems: show us the query that's running slow; without seeing it, it's hard to tell what's wrong with it. What you could try:
use EXPLAIN to get more information about your slow queries and see whether they're using your indexes (and if not, why not?)
use correct and reasonable indexes (see the sketch below)
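A sketch under assumptions (the user table and its columns are invented) of what those two steps look like:

    -- 1) Check the plan; a NULL in the "key" column means no index was used.
    EXPLAIN SELECT id, email
    FROM user
    WHERE status = 'active' AND created_at > '2024-01-01';

    -- 2) Add a composite index matching the WHERE clause: equality column first,
    --    then the range column.
    ALTER TABLE user ADD INDEX idx_user_status_created (status, created_at);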

What techniques are most effective for dealing with millions of records?

I once had a MySQL database table containing 25 million records, which made even a simple COUNT(*) query take minutes to execute. I ended up making partitions, separating the data into a couple of tables. What I'm asking is: are there any patterns or design techniques for handling this kind of problem (a huge number of records)? Is MSSQL or Oracle better at handling lots of records?
P.S.
The COUNT(*) problem stated above is just an example case; in reality the app does CRUD functionality and some aggregate queries (for reporting), but nothing really complicated. It's just that some of these queries take quite a while (minutes) to execute because of the table volume.
See Why MySQL could be slow with large tables and COUNT(*) vs COUNT(col)
Make sure you have an index on the column you're counting. If your server has plenty of RAM, consider increasing MySQL's buffer size. Make sure your disks are configured correctly -- DMA enabled, not sharing a drive or cable with the swap partition, etc.
What you're asking with "SELECT COUNT(*)" is not easy.
In MySQL, the MyISAM non-transactional engine optimises this by keeping a record count, so SELECT COUNT(*) will be very quick.
However, if you're using a transactional engine, SELECT COUNT(*) is basically saying:
Exactly how many records exist in this table in my transaction ?
To do this, the engine needs to scan the entire table; it probably knows roughly how many records exist in the table already, but to get an exact answer for a particular transaction, it needs a scan. This isn't going to be fast with MySQL InnoDB, and it's not going to be fast in Oracle or anything else. The whole table MUST be read (excluding things stored separately by the engine, such as BLOBs).
Having the whole table in RAM will make it a bit faster, but it's still not going to be fast.
If your application relies on frequent, accurate counts, you may want to make a summary table which is updated by a trigger or some other means.
If your application relies on frequent, less accurate counts, you could maintain summary data with a scheduled task (which may impact performance of other operations less).
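A minimal sketch of the trigger-maintained counter, assuming a hypothetical big_table:

    -- One row per counted table; COUNT(*) on the big table is replaced by a point read.
    CREATE TABLE row_counts (
      table_name VARCHAR(64) PRIMARY KEY,
      row_count  BIGINT NOT NULL
    );
    INSERT INTO row_counts SELECT 'big_table', COUNT(*) FROM big_table;

    CREATE TRIGGER big_table_count_ins AFTER INSERT ON big_table FOR EACH ROW
      UPDATE row_counts SET row_count = row_count + 1 WHERE table_name = 'big_table';

    CREATE TRIGGER big_table_count_del AFTER DELETE ON big_table FOR EACH ROW
      UPDATE row_counts SET row_count = row_count - 1 WHERE table_name = 'big_table';

Note that under heavy concurrent writes every insert and delete now contends on that single counter row, which is exactly why the scheduled-task variant mentioned above can be the better trade-off.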
Many performance issues around large tables relate to indexing problems, or a lack of indexing altogether. I'd definitely make sure you are familiar with indexing techniques and the specifics of the database you plan to use.
With regard to your slow COUNT(*) on the huge table, I would assume you were using the InnoDB table type in MySQL. I have some tables with over 100 million records using MyISAM under MySQL, and COUNT(*) is very quick.
With regard to MySQL in particular, there are even slight indexing differences between InnoDB and MyISAM tables, which are the two most commonly used table types. It's worth understanding the pros and cons of each and how to use them.
What kind of access to the data do you need? I've used HBase (based on Google's BigTable) loaded with a vast amount of data (~30 million rows) as the backend for an application which could return results within a matter of seconds. However, it's not really appropriate if you need "real time" access - i.e. to power a website. Its column-oriented nature is also a fairly radical change if you're used to row-oriented DBMS.
Is count(*) on the whole table actually something you do a lot?
InnoDB will have to do a full table scan to count the rows, which is obviously a major performance issue if counting all of them is something you actually want to do. But that doesn't mean that other operations on the table will be slow.
With the right indexes, MySQL will be very fast at retrieving data from tables much bigger than that. The problem with indexes is that they can hurt insert speeds, particularly for large tables as insert performance drops dramatically once the space required for the index reaches a certain threshold - presumably the size it will keep in memory. But if you only need modest insert speeds, MySQL should do everything you need.
Any other database will have similar tradeoffs between retrieve speed and insert speed; they may or may not be better for your application. But I would look first at getting the indexes right, and maybe rewriting your queries, before you try other databases. For what it's worth, we picked MySQL originally because we found it performed best.
Note that MyISAM tables in MySQL store the total row count of the table. They maintain this because it's useful to the optimiser in some cases, but a side effect is that COUNT(*) on the whole table is really fast. That doesn't necessarily mean they're faster than InnoDB at anything else.
I answered a similar question in this Stack Overflow post in some detail, describing the merits of the architectures of both systems. To some extent it was done from a data warehousing point of view, but many of the differences also matter on transactional systems.
However, 25 million rows is not a VLDB and if you are having performance problems you should look to indexing and tuning. You don't need to go to Oracle to support a 25 million row database - you've got about 3 orders of magnitude to go before you're truly in VLDB territory.
You are asking for a book's worth of answers, so I propose you get a good book on databases. There are many.
To get you started, here are some database basics:
First, you need a great data model based not just on what data you need to store but on usage patterns. Good database performance starts with good schema design.
Second, place indices on columns based upon expected lookup AND update needs, as update performance is often overlooked.
Third, don't put functions in WHERE clauses if at all possible.
Fourth, use an -ahem- RDBMS engine that is of quality design. I would respectfully submit that while it has improved greatly in the recent past, MySQL does not qualify. (Apologies to those who wish to argue it has finally made the grade in recent times.) There is no longer any need to choose between high price and quality; Postgres (aka PostgreSQL) is available open source and is truly fantastic - and has all the plug-ins available to meet your needs.
Finally, learn what you are asking a database engine to do - gain some insight into internals - so you can better judge what kinds of things are expensive and why.
I'm going to second @Mark Baker and say that you need to build indices on your tables.
For queries other than the one you selected, you should also be aware that using constructs such as IN() is faster than a series of OR clauses in the query. There are lots of little steps you can take to speed up individual queries.
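For illustration (the orders table is invented), the rewrite being referred to:

    -- A chain of ORs...
    SELECT * FROM orders WHERE status = 'new' OR status = 'paid' OR status = 'shipped';

    -- ...written with IN(), which is easier to read and easier on the optimizer:
    SELECT * FROM orders WHERE status IN ('new', 'paid', 'shipped');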
Indexing is key to performance with this number of records, but how you write the queries can make a big difference as well. Specific performance tuning methods vary by database, but in general: avoid returning more records or fields than you actually need, make sure all join fields are indexed (as well as common WHERE clause fields), and avoid cursors (although I think this is less of an issue in Oracle than in SQL Server; I don't know about MySQL).
Hardware can also be a bottleneck especially if you are running things besides the database server on the same machine.
Performance tuning is a very technical subject and can't really be answered well in a format like this. I suggest you get a performance tuning book and read it. Here is a link to one for MySQL:
http://www.amazon.com/High-Performance-MySQL-Optimization-Replication/dp/0596101716

MySQL: Advisable number of rows

Consider an indexed MySQL table with 7 columns, being constantly queried and written to. What is the advisable number of rows that this table should be allowed to contain before the performance would be improved by splitting the data off into other tables?
Whether or not you would get a performance gain by partitioning the data depends on the data and the queries you will run on it. You can store many millions of rows in a table, and with good indexes and well-designed queries it will still be super-fast. Only consider partitioning if you are already confident that your indexes and queries are as good as they can be, as it can be more trouble than it's worth.
There's no magic number, but there's a few things that affect performance in particular:
Index cardinality: don't bother indexing a column that has only 2 or 3 distinct values (like an ENUM). On a large table, the query optimizer will ignore such indexes.
There's a trade off between writes and indexes. The more indexes you have, the longer writes take. Don't just index every column. Analyze your queries and see which columns need to be indexed for your app.
Disk I/O and memory play an important role. If you can fit your whole table into memory, you take disk I/O out of the equation (once the table is cached, anyway). My guess is that you'll see a big performance change when your table is too big to buffer in memory.
Consider partitioning your servers based on use. If your transactional system is reading/writing single rows, you can probably buy yourself some time by replicating the data to a read only server for aggregate reporting.
As you probably know, table performance changes based on the data size. Keep an eye on your table/queries. You'll know when it's time for a change.
MySQL 5 has partitioning built in, and it is very nice. What's nice is that you can define how your table should be split up. For instance, if you query mostly by user id you can partition your tables on user id, or if you're querying by dates, do it by date. What's nice about this is that MySQL will know exactly which partition to search to find your values. The downside is that if you search on a field that doesn't define your partitioning, it's going to scan every partition, which can decrease performance.
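A sketch of date-based partitioning (the table is invented; note that MySQL requires the partitioning column to be part of every unique key):

    CREATE TABLE access_log (
      id         BIGINT NOT NULL,
      user_id    INT NOT NULL,
      created_at DATE NOT NULL,
      PRIMARY KEY (id, created_at)          -- must include the partitioning column
    )
    PARTITION BY RANGE (YEAR(created_at)) (
      PARTITION p2022 VALUES LESS THAN (2023),
      PARTITION p2023 VALUES LESS THAN (2024),
      PARTITION pmax  VALUES LESS THAN MAXVALUE
    );

    -- A query bounded on created_at only touches the matching partitions
    -- (partition pruning); EXPLAIN shows which partitions are scanned.
    SELECT COUNT(*) FROM access_log
    WHERE created_at >= '2023-01-01' AND created_at < '2024-01-01';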
While after the fact you could point to the table size at which performance became a problem, I don't think you can predict it, and certainly not from the information given on a web site such as this!
Some questions you might usefully ask yourself:
Is performance currently acceptable?
How is performance measured - is there a metric?
How do we recognise unacceptable performance?
Do we measure performance in any way that might allow us to forecast a problem?
Are all our queries using an efficient index?
Have we simulated extreme loads and volumes on the system?
Using the MyISAM engine, you'll run into a 2GB hard limit on table size unless you change the default.
Don't ever apply an optimisation if you don't think it's needed. Ideally this should be determined by testing (as others have alluded).
Horizontal or vertical partitioning can improve performance but can also complicate your application. Don't do it unless you're sure that you need it AND it will definitely help.
The 2 GB MyISAM data file size is only a default and can be changed at table creation time (or later with an ALTER TABLE, but that rebuilds the table). It doesn't apply to other engines (e.g. InnoDB).
Actually this is a good question for performance. Have you read Jay Pipes? There isn't a specific number of rows but there is a specific page size for reads and there can be good reasons for vertical partitioning.
Check out his kung fu presentation and have a look through his posts. I'm sure you'll find that he's written some useful advice on this.
Are you using MyISAM? Are you planning to store more than a couple of gigabytes? Watch out for MAX_ROWS and AVG_ROW_LENGTH.
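A sketch of what that looks like (the table and the numbers are arbitrary examples):

    -- Hint the expected size at creation time so MyISAM uses large enough row pointers:
    CREATE TABLE big_archive (
      id      BIGINT NOT NULL,
      payload VARCHAR(255)
    ) ENGINE=MyISAM
      MAX_ROWS = 1000000000
      AVG_ROW_LENGTH = 100;

    -- Or raise the limit on an existing table (this rebuilds the table):
    ALTER TABLE big_archive MAX_ROWS = 1000000000, AVG_ROW_LENGTH = 100;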
Jeremy Zawodny has an excellent write-up on how to solve this problem.

How big can a MySQL database get before performance starts to degrade

At what point does a MySQL database start to lose performance?
Does physical database size matter?
Do number of records matter?
Is any performance degradation linear or exponential?
I have what I believe to be a large database, with roughly 15M records which take up almost 2GB. Based on these numbers, is there any incentive for me to clean the data out, or am I safe to allow it to continue scaling for a few more years?
The physical database size doesn't matter. The number of records doesn't matter.
In my experience the biggest problem that you are going to run into is not size, but the number of queries you can handle at a time. Most likely you are going to have to move to a master/slave configuration so that the read queries can run against the slaves and the write queries run against the master. However, if you are not ready for this yet, you can always tweak your indexes for the queries you are running to speed up the response times. There is also a lot of tweaking you can do to the network stack and kernel in Linux that will help.
I have had mine get up to 10GB, with only a moderate number of connections and it handled the requests just fine.
I would focus first on your indexes, then have a server admin look at your OS, and if all that doesn't help it might be time to implement a master/slave configuration.
In general this is a very subtle issue and not trivial whatsoever. I encourage you to read mysqlperformanceblog.com and High Performance MySQL. I really think there is no general answer for this.
I'm working on a project which has a MySQL database with almost 1 TB of data. The most important scalability factor is RAM. If the indexes of your tables fit into memory and your queries are highly optimized, you can serve a reasonable number of requests with an average machine.
The number of records does matter, depending on what your tables look like. It makes a difference whether you have a lot of varchar fields or only a couple of ints or longs.
The physical size of the database matters as well: think of backups, for instance. Depending on your engine, your physical DB files grow but don't shrink (with InnoDB, for instance), so deleting a lot of rows doesn't help to shrink your physical files.
There's a lot to these issues, and as in many cases, the devil is in the details.
The database size does matter. If you have more than one table with more than a million records, then performance does indeed start to degrade. The number of records of course affects the performance: MySQL can be slow with large tables. If you hit one million records you will get performance problems if the indices are not set right (for example, no indices for fields in WHERE clauses or ON conditions in joins). If you hit 10 million records, you will start to get performance problems even if you have all your indices right. Hardware upgrades - adding more memory and more processor power, especially memory - often help to reduce the most severe problems by increasing performance again, at least to a certain degree. For example, 37signals went from 32 GB to 128 GB of RAM for the Basecamp database server.
I'm currently managing a MySQL database on Amazon's cloud infrastructure that has grown to 160 GB. Query performance is fine. What has become a nightmare is backups, restores, adding slaves, or anything else that deals with the whole dataset, or even DDL on large tables. Getting a clean import of a dump file has become problematic. In order to make the process stable enough to automate, various choices needed to be made to prioritize stability over performance. If we ever had to recover from a disaster using a SQL backup, we'd be down for days.
Horizontally scaling SQL is also pretty painful, and in most cases leads to using it in ways you probably did not intend when you chose to put your data in SQL in the first place. Shards, read slaves, multi-master, et al, they are all really shitty solutions that add complexity to everything you ever do with the DB, and not one of them solves the problem; only mitigates it in some ways. I would strongly suggest looking at moving some of your data out of MySQL (or really any SQL) when you start approaching a dataset of a size where these types of things become an issue.
Update: a few years later, and our dataset has grown to about 800 GiB. In addition, we have a single table which is 200+ GiB and a few others in the 50-100 GiB range. Everything I said before holds. It still performs just fine, but the problems of running full dataset operations have become worse.
I would focus first on your indexes, then have a server admin look at your OS, and if all that doesn't help it might be time for a master/slave configuration.
That's true. Another thing that usually works is to just reduce the quantity of data that's repeatedly worked with. If you have "old data" and "new data" and 99% of your queries work with new data, just move all the old data to another table - and don't look at it ;)
-> Have a look at partitioning.
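A sketch of the "move old data aside" idea (the table names and the cutoff date are invented):

    -- Create an archive table with the same structure, copy the old rows over,
    -- then remove them from the live table. In production you would batch the
    -- DELETE (e.g. with LIMIT in a loop) to avoid long-running locks.
    CREATE TABLE orders_archive LIKE orders;

    INSERT INTO orders_archive
      SELECT * FROM orders WHERE created_at < '2023-01-01';

    DELETE FROM orders WHERE created_at < '2023-01-01';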
2 GB and about 15M records is a very small database - I've run much bigger ones on a Pentium III (!) and everything still ran pretty fast. If yours is slow, it is a database/application design problem, not a MySQL one.
It's kind of pointless to talk about "database performance", "query performance" is a better term here. And the answer is: it depends on the query, data that it operates on, indexes, hardware, etc. You can get an idea of how many rows are going to be scanned and what indexes are going to be used with EXPLAIN syntax.
2GB does not really count as a "large" database - it's more of a medium size.
I was once called upon to look at a MySQL server that had "stopped working". I discovered that the DB files were residing on a Network Appliance filer mounted over NFS2, with a maximum file size of 2 GB. And sure enough, the table that had stopped accepting transactions was exactly 2 GB on disk. But with regard to the performance curve, I'm told that it was working like a champ right up until it didn't work at all! This experience always serves as a nice reminder for me that there are always dimensions above and below the one you naturally suspect.
Also watch out for complex joins. Transaction complexity can be a big factor in addition to transaction volume.
Refactoring heavy queries sometimes offers a big performance boost.
Another point to consider is the purpose of the system and how the data is used day to day.
For example, in a system that does GPS monitoring of cars, querying the car positions from previous months is usually not relevant.
That data can therefore be moved to historical tables kept for occasional lookups, reducing the execution times of the day-to-day queries.
Performance can degrade in a matter of few thousand rows if database is not designed properly.
If you have proper indexes, use proper engines (don't use MyISAM where multiple DMLs are expected), use partitioning, allocate the right amount of memory for the use case, and of course have a good server configuration, MySQL can handle data even in terabytes!
There are always ways to improve the database performance.
It depends on your query and validation.
For example, I worked with a table of 100,000 drugs which has a generic-name column with more than 15 characters for each drug in that table. I wrote a query to compare the generic names of drugs between two tables; that query took many minutes to run. By contrast, if you compare the drugs using an indexed id column (as said above), it takes only a few seconds.
Database size DOES matter, both in bytes and in the number of table rows. You will notice a huge performance difference between a light database and a BLOB-filled one. My application once got stuck because I put binary images inside fields instead of keeping the images in files on disk and storing only the file names in the database. Iterating over a large number of rows, on the other hand, is not free either.
No, it doesn't really matter. MySQL's speed is about 7 million rows per second, so you can scale it quite a bit.
Query performance mainly depends on the number of records it needs to scan; indexes play a big role in that, and index data size is proportional to the number of rows and the number of indexes.
Queries with conditions on indexed fields that match a full value will generally return in about 1 ms, but starts-with, IN, BETWEEN, and especially 'contains' conditions can take more time as more records have to be scanned.
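For example, with an index on a name column (a hypothetical drugs table):

    SELECT * FROM drugs WHERE name = 'Paracetamol';   -- equality: direct index lookup
    SELECT * FROM drugs WHERE name LIKE 'Para%';       -- prefix (starts-with): index range scan
    SELECT * FROM drugs WHERE name LIKE '%amol';       -- contains: index unusable, full scan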
You will also face a lot of maintenance issues with DDL: ALTER and DROP will be slow and difficult with more live traffic, even for adding an index or new columns.
Generally it's advisable to split the database into as many clusters as required (500 GB is a rough benchmark; as said by others, it depends on many factors and can vary based on use cases). That way you get better isolation and the independence to scale specific clusters (more suited to B2B cases).