I am wondering which is more efficient and faster in terms of performance:
Having an index on one big table or multiple smaller tables without indexes?
Since this is a pretty abstract problem let me make it more practical:
I have one table with statistics about users (20,000 users and about 30 million rows overall). The table has about 10 columns including the user_id, actions, timestamps, etc.
Most common applications are: Inserting data by user_id and retrieving data by user_id (SELECT statements never include multiple user_id's).
So far I have an INDEX on the user_id, and the query looks something like this:
SELECT * FROM statistics WHERE user_id = 1
Now, with more and more rows the table gets slower and slower. INSERT statements slow down because the INDEX gets bigger and bigger; SELECT statements slow down, well, because there are more rows to search through.
Now I was wondering why not have one statistics table for each user and change the query syntax to something like this instead:
SELECT * FROM statistics_1
where 1 represents the user_id obviously.
This way, no INDEX is needed and there is far less data in each table, so INSERT and SELECT statements should be much faster.
Now my questions again:
Are there any real-world disadvantages to handling so many tables (in my case 20,000) instead of using one table with an INDEX?
Would my approach actually speed things up, or might the table lookup eventually slow things down more than the index would?
Creating 20,000 tables is a bad idea. You'll need 40,000 tables before long, and then more.
I called this syndrome Metadata Tribbles in my book SQL Antipatterns Volume 1. You see this happen every time you plan to create a "table per X" or a "column per X".
This does cause real performance problems when you have tens of thousands of tables. Each table requires MySQL to maintain internal data structures, file descriptors, a data dictionary, etc.
There are also practical operational consequences. Do you really want to create a system that requires you to create a new table every time a new user signs up?
Instead, I'd recommend you use MySQL Partitioning.
Here's an example of partitioning the table:
CREATE TABLE statistics (
id INT AUTO_INCREMENT NOT NULL,
user_id INT NOT NULL,
PRIMARY KEY (id, user_id)
) PARTITION BY HASH(user_id) PARTITIONS 101;
This gives you the benefit of defining one logical table, while also dividing the table into many physical tables for faster access when you query for a specific value of the partition key.
For example, when you run a query like your example, MySQL accesses only the partition containing the specific user_id:
mysql> EXPLAIN PARTITIONS SELECT * FROM statistics WHERE user_id = 1\G
*************************** 1. row ***************************
id: 1
select_type: SIMPLE
table: statistics
partitions: p1 <--- this shows it touches only one partition
type: index
possible_keys: NULL
key: PRIMARY
key_len: 8
ref: NULL
rows: 2
Extra: Using where; Using index
The HASH method of partitioning means that the rows are placed in a partition by a modulus of the integer partition key. This does mean that many user_id's map to the same partition, but each partition would have only 1/Nth as many rows on average (where N is the number of partitions). And you define the table with a constant number of partitions, so you don't have to expand it every time you get a new user.
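As a small, hypothetical illustration of that mapping: with 101 partitions, a user_id of 12345 would land in partition MOD(12345, 101) = 23, i.e. p23.

SELECT MOD(12345, 101) AS partition_number;   -- returns 23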
You can choose any number of partitions up to 1024 (or 8192 in MySQL 5.6), but some people have reported performance problems when they go that high.
It is recommended to use a prime number of partitions. If your user_id values follow a pattern (like using only even numbers), a prime number of partitions helps distribute the data more evenly.
Re your questions in comment:
How could I determine a reasonable number of partitions?
For HASH partitioning, if you use 101 partitions like I show in the example above, then any given partition has about 1% of your rows on average. You said your statistics table has 30 million rows, so if you use this partitioning, you would have only 300k rows per partition. That is much easier for MySQL to read through. You can (and should) use indexes as well -- each partition will have its own index, and it will be only 1% as large as the index on the whole unpartitioned table would be.
So the answer to how can you determine a reasonable number of partitions is: how big is your whole table, and how big do you want the partitions to be on average?
Shouldn't the amount of partitions grow over time? If so: How can I automate that?
The number of partitions doesn't necessarily need to grow if you use HASH partitioning. Eventually you may have 30 billion rows total, but I have found that when your data volume grows by orders of magnitude, that demands a new architecture anyway. If your data grows that large, you probably need sharding over multiple servers as well as partitioning into multiple tables.
That said, you can re-partition a table with ALTER TABLE:
ALTER TABLE statistics PARTITION BY HASH(user_id) PARTITIONS 401;
This has to restructure the table (like most ALTER TABLE changes), so expect it to take a while.
You may want to monitor the size of data and indexes in partitions:
SELECT table_schema, table_name, table_rows, data_length, index_length
FROM INFORMATION_SCHEMA.PARTITIONS
WHERE partition_method IS NOT NULL;
Like with any table, you want the total size of active indexes to fit in your buffer pool, because if MySQL has to swap parts of indexes in and out of the buffer pool during SELECT queries, performance suffers.
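If you want a rough check, you can compare the total index size for your schema against the buffer pool setting; 'mydb' below is just a placeholder for your schema name:

SELECT SUM(index_length) AS total_index_bytes
FROM INFORMATION_SCHEMA.TABLES
WHERE table_schema = 'mydb';   -- placeholder schema name

SELECT @@innodb_buffer_pool_size AS buffer_pool_bytes;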
If you use RANGE or LIST partitioning, then adding, dropping, merging, and splitting partitions is much more common. See http://dev.mysql.com/doc/refman/5.6/en/partitioning-management-range-list.html
I encourage you to read the manual section on partitioning, and also check out this nice presentation: Boost Performance With MySQL 5.1 Partitions.
It probably depends on the type of queries you plan on making often, and the best way to know for sure is to just implement a prototype of both and do some performance tests.
With that said, I would expect that a single (large) table with an index will do better overall, because most DBMSs are heavily optimized to deal with exactly this situation of finding and inserting data into large tables. If you try to make many little tables in hopes of improving performance, you're kind of fighting the optimizer (which is usually better).
Also, keep in mind that one table is probably more practical for the future. What if you want to get some aggregate statistics over all users? Having 20,000 tables would make this very hard and inefficient to execute. It's worth considering the flexibility of these schemas as well: if you split your tables up like that, you might be designing yourself into a corner for the future.
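For example, a report like the following (a sketch using the columns mentioned in the question) is a single query against one table, but would need a UNION over 20,000 tables in the table-per-user design:

-- Top 10 most active users, trivial with one table:
SELECT user_id, COUNT(*) AS action_count
FROM statistics
GROUP BY user_id
ORDER BY action_count DESC
LIMIT 10;

-- With 20,000 per-user tables this becomes
-- SELECT 1, COUNT(*) FROM statistics_1
-- UNION ALL SELECT 2, COUNT(*) FROM statistics_2
-- UNION ALL ... (20,000 times)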
Concrete example:
I have one table with statistics about users (20,000 users and about 30 million rows overall). The table has about 10 columns including the user_id, actions, timestamps, etc.
Most common applications are: Inserting data by user_id and retrieving data by user_id (SELECT statements never include multiple user_id's).
Do this:
id INT UNSIGNED NOT NULL AUTO_INCREMENT,
...
PRIMARY KEY(user_id, id),
INDEX(id)
Having user_id at the start of the PK gives you "locality of reference". That is, all the rows for one user are clustered together thereby minimizing I/O.
The id on the end of the PK is because the PK must be unique.
The strange-looking INDEX(id) is to keep AUTO_INCREMENT happy.
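Putting that together, a minimal sketch of the table might look like this (everything except id and user_id is a placeholder based on the question's description):

CREATE TABLE statistics (
    id INT UNSIGNED NOT NULL AUTO_INCREMENT,
    user_id INT UNSIGNED NOT NULL,
    action VARCHAR(50) NOT NULL,                               -- placeholder for the "actions" column
    created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,   -- placeholder timestamp
    -- ... other columns ...
    PRIMARY KEY (user_id, id),    -- clusters each user's rows together
    INDEX (id)                    -- keeps AUTO_INCREMENT happy
) ENGINE=InnoDB;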
Abstract question:
Never have multiple identical tables.
Use PARTITIONing only if it meets one of the use-cases listed in http://mysql.rjweb.org/doc.php/partitionmaint
A PARTITIONed table needs a different set of indexes than the non-partitioned equivalent table.
In most cases a single, non-partitioned, table is optimal.
Use the queries to design indexes.
There is little to add to Bill Karwin's answer, but one hint: check whether all of a user's data is really needed in complete detail for all time.
If you present usage statistics, visit counts, and the like, you usually won't need single-action, per-second granularity for, say, the year 2009 when viewed from today. So you could build aggregation tables and an archive table (not the ARCHIVE engine, of course) that keep the recent data at the per-action level and only an overview of the older actions.
Old actions don't change, I think.
And you can still drill down from the aggregation into the details, for example via a week_id in the archive table.
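A minimal sketch of such an aggregation table (the names and the weekly granularity are my assumptions, not part of the original schema):

CREATE TABLE statistics_weekly (
    user_id INT UNSIGNED NOT NULL,
    week_id INT UNSIGNED NOT NULL,        -- e.g. YEARWEEK() of the action timestamp
    action_count INT UNSIGNED NOT NULL,
    PRIMARY KEY (user_id, week_id)
) ENGINE=InnoDB;

-- Refresh it periodically from the detail table ("action_time" is a placeholder column):
INSERT INTO statistics_weekly (user_id, week_id, action_count)
SELECT user_id, YEARWEEK(action_time), COUNT(*)
FROM statistics
GROUP BY user_id, YEARWEEK(action_time)
ON DUPLICATE KEY UPDATE action_count = VALUES(action_count);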
Instead of going from one table to one table per user, you can use partitioning to hit a table-count/table-size ratio somewhere in the middle.
You can also keep stats on users to try to move 'active' users into 1 table to reduce the number of tables that you have to access over time.
The bottom line is that there is a lot you can do, but largely you have to build prototypes and tests and just evaluate the performance impacts of various changes you are making.
Related
I'm using a MySQL database and have to perform some select queries on large/huge tables (e.g. 267,736 rows and 30 columns).
Query details:
Only select queries (the data in the table is fixed, never an update, insert or delete)
Select query on all the columns (business requirement)
Mostly limit the number of rows (LIMIT 10 to all rows -> user can choose)
Could be ordered by one or multiple columns (creation of indexes here will not help since the user can order by any column he likes)
Could be filtered by a value the user chooses (where filter on one or more columns)
Currently the queries take up to 2 seconds, which is too long.
Is there a way to speed them up?
Which storage engine should I use: InnoDB/MyISAM/...
Should I have a primary key, even if I will never use it?
...?
You should (must actually) use indexes.
Create indexes on all columns with which WHERE or ORDER BY is going to be used. Also study and use EXPLAIN to see the impact of the indexes and to optimize your queries.
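For example, if users often filter on a status column and order by a date column (both hypothetical names here), something like this would be a starting point:

-- Hypothetical column names; adjust to the columns actually used in WHERE / ORDER BY.
ALTER TABLE my_table
    ADD INDEX idx_status (status),
    ADD INDEX idx_created (created_at);

-- Verify the index is used:
EXPLAIN SELECT * FROM my_table
WHERE status = 'active'
ORDER BY created_at DESC
LIMIT 10;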
You don't have to create a primary key if there is no column with unique data in your table, but it is very likely that you do have such a column (id, time...). In this case you should use the primary key to filter your queries.
Number of columns in the query has close to no impact on SELECT speed.
As long as you run only SELECT queries, the storage engine does not matter much either. MyISAM might be a bit faster, but InnoDB has many features you will need when you decide that your "only SELECT queries" rule must be broken.
I've got a three-column table. It has a unique index, and another two (on two different columns) for faster queries.
+-------------+-------------+----------+
| category_id | related_id | position |
+-------------+-------------+----------+
Sometimes the query is
SELECT * FROM table WHERE category_id = foo
and sometimes it's
SELECT * FROM table WHERE related_id = foo
So I decided to make both category_id and related_id an index for better performance. Is this bad practice? What are the downsides of this approach?
In the case where I already have 100,000 rows in that table and am inserting another 100,000, will it be overkill having to update the index with every new insert? Would that operation then take too long? Thanks
There are no downsides if it's doing exactly what you want: you query on a specific column a lot, so you index that column; that's the whole point. If, on the other hand, you have a 60-column table and you're adding indexes to columns you never query on, then you are wasting resources, because those indexes need to be maintained on INSERT/UPDATE/DELETE operations.
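If those indexes aren't in place yet, adding them is a one-liner (the table name is a placeholder, since the question just calls it "table"):

ALTER TABLE my_table
    ADD INDEX idx_category (category_id),
    ADD INDEX idx_related (related_id);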
If you have created an index for each column, you will definitely benefit from it.
Don't go for composite indexes (multiple-column indexes).
You can see the advantage of an index in your query for yourself by using EXPLAIN (the statement provides information about how MySQL executes statements).
EXAMPLE:
EXPLAIN SELECT * FROM table WHERE category_id = foo;
Hope this will help.
~K
It's good to have indexes. Just understand that indexes take more disk space but make searches faster.
It is in your best interest to index those fields which have fewer repeated values. For example, indexing a field that contains a Boolean flag might not be a good idea.
Since in your case you have an id, I don't think you will have any problem keeping the indexes that you have created.
Also, the inserts will be slower, but since you are storing ids there won't be much of a difference in the time required to insert. Go ahead and do the insert.
My personal advice:
When you are inserting a large number of rows into a table in one go, don't insert them all with a single query unless you have to. Splitting the work up prevents your table from getting locked and being inaccessible for a long time.
I have a MySQL 5.5 server with a big (~150M records) InnoDB table that I want to partition.
How can I determine the optimal number of partitions for the table?
The table is a "many-to-many" connection table that consists of only two int columns (aId, bId),
aId receives values roughly on the range of 1..1,000,000
bId receives values roughly on the range of 1..10,000,000
Most queries lookup aId first.
the table has two indexes:
primary(aId, bId)
index(bId)
And, again, the question is How can I determine the optimal number of partitions for the table?
Thanks
If you do not want to structure your data logically, then the optimal number of partitions is the same as the number of cores on the server. This will aid multiple concurrent queries. Individual queries will run in about the same time, since they use the primary index.
I would suggest changing your second index to INDEX(bId, aId), so that bId-to-aId lookups can be served from the index alone, without having to read the table data for the aId value.
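A minimal sketch of that change (the table name and the existing index name are assumptions here; check SHOW INDEX for the real index name before running it):

-- "my_table" is a placeholder for your many-to-many table,
-- and "bId" is assumed to be the name of the current secondary index.
ALTER TABLE my_table
    DROP INDEX bId,
    ADD INDEX idx_b_a (bId, aId);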
I have a query of the following form:
SELECT * FROM MyTable WHERE Timestamp > [SomeTime] AND Timestamp < [SomeOtherTime]
I would like to optimize this query, and I am thinking about putting an index on timestamp, but am not sure if this would help. Ideally I would like to make timestamp a clustered index, but MySQL does not support clustered indexes, except for primary keys.
MyTable has 4 million+ rows.
Timestamp is actually of type INT.
Once a row has been inserted, it is never changed.
The number of rows with any given Timestamp is on average about 20, but could be as high as 200.
Newly inserted rows have a Timestamp that is greater than most of the existing rows, but could be less than some of the more recent rows.
Would an index on Timestamp help me to optimize this query?
No question about it. Without the index, your query has to look at every row in the table. With the index, the query will be pretty much instantaneous as far as locating the right rows goes. The price you'll pay is a slight performance decrease in inserts, but that really will be slight.
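A minimal sketch of adding that index (table and column names taken from the question):

ALTER TABLE MyTable ADD INDEX idx_timestamp (`Timestamp`);
-- EXPLAIN on the range query should then show a range scan using idx_timestamp.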
You should definitely use an index. MySQL has no clue what order those timestamps are in, and in order to find a record for a given timestamp (or timestamp range) it needs to look through every single record. And with 4 million of them, that's quite a bit of time! Indexes are your way of telling MySQL about your data -- "I'm going to look at this field quite often, so keep a list of where I can find the records for each value."
Indexes in general are a good idea for regularly queried fields. The only downside to defining indexes is that they use extra storage space, so unless you're really tight on space, you should try to use them. If they don't apply, MySQL will just ignore them anyway.
I don't disagree with the importance of indexing to improve SELECT query times, but if you can index on other keys (and form your queries with these indexes), an index on timestamp may not be needed.
For example, if you have a table with timestamp, category, and userId, it may be better to create an index on userId instead. In a table with many different users this will reduce considerably the remaining set on which to search the timestamp.
...and if I'm not mistaken, the advantage of this would be to avoid the overhead of maintaining the timestamp index on each insertion -- in a table with high insertion rates and highly unique timestamps this could be an important consideration.
I'm struggling with the same problems of indexing based on timestamps and other keys. I still have testing to do so I can put proof behind what I say here. I'll try to post back with my results.
A scenario for better explanation:
timestamp 99% unique
userId 80% unique
category 25% unique
Indexing on timestamp will quickly reduce query results to 1% of the table size
Indexing on userId will quickly reduce query results to 20% of the table size
Indexing on category will quickly reduce query results to 75% of the table size
Insertion with indexes on timestamp will have high overhead **
Despite knowing that our insertions will always have incrementing timestamps, I don't see any discussion of MySQL optimisation based on incremental keys.
Insertion with indexes on userId will have reasonably high overhead.
Insertion with indexes on category will have reasonably low overhead.
** I'm sorry, I don't know the calculated overhead of insertion with indexing.
If your queries are mainly using this timestamp, you could test this design (enlarging the Primary Key with the timestamp as first part):
CREATE TABLE perf (
      ts INT NOT NULL
    , oldPK                   -- your existing PK column(s), keeping their original types
    , ...                     -- other columns
    , PRIMARY KEY (ts, oldPK)
    , UNIQUE KEY (oldPK)
) ENGINE = InnoDB ;
This will ensure that the queries like the one you posted will be using the clustered (primary) key.
The disadvantage is that your inserts will be a bit slower. Also, if you have other indexes on the table, they will use a bit more space (as they will include the 4-byte-wider primary key).
The biggest advantage of such a clustered index is for queries with big range scans, e.g. queries that have to read large parts of the table or the whole table: they will find the related rows sequentially and in the desired order (by timestamp). This is also useful if you want to group by day, week, month, or year.
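For example, a rollup like this (assuming ts holds Unix timestamps, which the question does not state explicitly) would read the matching rows sequentially from the clustered index:

-- Hypothetical daily rollup over a time range; assumes ts is a Unix timestamp.
SELECT DATE(FROM_UNIXTIME(ts)) AS day, COUNT(*) AS rows_per_day
FROM perf
WHERE ts >= UNIX_TIMESTAMP('2013-01-01')
  AND ts <  UNIX_TIMESTAMP('2013-02-01')
GROUP BY DATE(FROM_UNIXTIME(ts))
ORDER BY day;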
The old PK can still be used to identify rows by keeping a UNIQUE constraint on it.
You may also want to have a look at TokuDB, a MySQL (and open source) variant that allows multiple clustered indices.
Say I have a large table, about 2 million rows and 50 columns. Using MySQL, how efficient would it be to search an entire column for one particular value, and then return the row number of said value? (Assume random distribution of values throughout the entire column)
If an operation like this takes an extended amount of time, what can I do to speed it up?
If the column in question is indexed, then it's pretty fast.
Don't be cavalier with indexes, though. The more indexes you have, the more expensive your writes will be (inserts/updates/deletes). Also, they take up disk space and RAM (and can easily be larger than the table itself). Indexes are good for querying, bad for writing. Choose wisely.
Exactly how fast are we talking here? That depends on the configuration of your DB machine. If it doesn't have enough RAM to hold the indexes and data, the operation may become disk-bound and performance will suffer; the same operation without an index would be degraded just as much. Assuming the machine is fine, it further depends on how selective your index is. If you have a table with 10M rows and you index a column with boolean values, you will get only a slight increase in performance. If, on the other hand, you index a column with many different values (user emails), the query will be orders of magnitude faster.
Also, by modern standards, table with 2M rows is rather small :-)
The structure of the data makes a big difference here, because it will affect your ability to index. Have a look at mysql indexing options (fulltext, etc).
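For example, if the value you're searching for lives in a free-text column, a FULLTEXT index might fit better than a plain B-tree index (table and column names below are placeholders; InnoDB supports FULLTEXT from MySQL 5.6, earlier versions need MyISAM):

ALTER TABLE my_table ADD FULLTEXT INDEX ft_description (description);

SELECT * FROM my_table
WHERE MATCH(description) AGAINST('some search phrase');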
There is no easy answer to that question; it depends on several characteristics of your data. As many others have already advised, creating an index on the column you have to search (for an exact match, or a prefix match on a string) will be quite efficient.
As an example, I have a MyISAM table with 27,000,000 records (6.7 GB in size) which holds an index on a VARCHAR(128) field.
Here are two sample queries (real data) to give you an idea:
mysql> SELECT COUNT(*) FROM Books WHERE Publisher = "Hachette";
+----------+
| COUNT(*) |
+----------+
|    15072 |
+----------+
1 row in set (0.12 sec)
mysql> SELECT Name FROM Books WHERE Publisher = "Scholastic" LIMIT 100;
...
100 rows in set (0.17 sec)
So yes, I think MySQL is definitely fast enough to do what you're planning to do :)
Create an index on that column.
Create an index on the column in question and performance should not be a problem.
In general - add an index on the column