Duplicate table fields vs indexing only - mysql

I have a huge and very busy table (a few thousand INSERTs per second). The table stores login logs; it has a BIGINT id that is not generated by MySQL but by a pseudorandom generator on the MySQL client.
Simply put, the table has loginlog_id, client_id, and tons of other columns with details about the session.
I have a few indexes on this table, such as PRIMARY KEY (loginlog_id) and INDEX (client_id).
In some other part of our system I need to fetch client_id based on loginlog_id. This does not happen that often (just a few hundred SELECT client_id FROM loginlogs WHERE loginlog_id=XXXXXX per second). The loginlogs table is also read by various other scripts now and then, and each of them needs different columns, but by far the most frequent read is the get-client_id-by-loginlog_id query above.
My question is: should I create another table loginlogs_clientids and duplicate loginlog_id, client_id in there (which means another few thousand INSERTs, since for every loginlogs INSERT I get this new one)? Or should I be happy with InnoDB handling my lookups by PRIMARY KEY efficiently?
We have tons of RAM (128 GB, most of which is used by MySQL). MySQL load is between 40% and 350% CPU (we have a 12-core CPU). When I tried the new table, I did not see any difference. But I am asking for the future: if our usage grows even more, what is the suggested approach? Duplicate or index?
Thanks!

No.
Looking up table data for a single row using the primary key is extremely efficient, and will take the same time for both tables.
Exceptions to that might be very large row sizes (e.g. 8 KB+), or a client_id that is e.g. a VARCHAR stored off-page; in those cases you might need to read an additional data block, which at least theoretically could cost you some milliseconds.
Even if this strategy would have an advantage, you would not actually do it by creating a new table, but by adding an index (loginlog_id, client_id) to your original table. InnoDB stores everything, including the actual data, in an index structure, so that adding an index is basically the same as adding a new table with the same columns, but without (you) having the problem of synchronizing those two "tables".
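If this strategy did turn out to help, a minimal sketch of that index (using the table and column names from the question; the index name is made up) would look like this:
-- Hypothetical covering index; the primary key already serves this lookup,
-- so this only matters if the base rows are very wide or stored off-page.
ALTER TABLE loginlogs ADD INDEX idx_loginlog_client (loginlog_id, client_id);
-- The frequent lookup can then be answered entirely from the index:
SELECT client_id FROM loginlogs WHERE loginlog_id = 123456789;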
Having a structure with a smaller row size can have some advantages for range scans; e.g. MySQL will evaluate SELECT COUNT(*) FROM tablename using the smallest index of the table, as it has to read fewer bytes. You already have such a small index (on client_id), so even in that regard, adding such an additional table/index shouldn't have an effect. If you have any range scans on the primary key (which is probably unlikely for pseudorandom data), you may want to consider this, or keep it in mind for cases where you do.

Related

Does it make sense to split a large table into smaller ones to reduce the number of rows (not columns)? [duplicate]

In my Rails app I have a table whose data already runs to hundreds of millions of records. I'm going to split the table into multiple tables so reads and writes speed up.
I found the gem octopus, but it does master/slave replication; I just want to split the big table.
Or what else can I do when the table gets too big?
Theoretically, a properly designed table with just the right indexes will be able to handle very large row counts quite easily. As the table grows, the slowdown in queries and insertion of new records is supposed to be negligible. But in practice we find that it doesn't always work that way! However, the solution definitely isn't to split the table in two. The solution is to partition.
Partitioning takes this notion a step further, by enabling you to
distribute portions of individual tables across a file system
according to rules which you can set largely as needed. In effect,
different portions of a table are stored as separate tables in
different locations. The user-selected rule by which the division of
data is accomplished is known as a partitioning function, which in
MySQL can be the modulus, simple matching against a set of ranges or
value lists, an internal hashing function, or a linear hashing
function.
If you merely split a table, your code is going to become infinitely more complicated: each time you do an insert or a retrieval you need to figure out which split you should run that query on. When you use partitions, MySQL takes care of that detail for you and, as far as the application is concerned, it's still one table.
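As a rough sketch (the table, columns, and hash rule below are made up for illustration and may not match your schema), the partitions are declared once and queries stay unchanged:
CREATE TABLE big_table (
    id BIGINT NOT NULL,
    payload VARCHAR(255),
    PRIMARY KEY (id)
)
PARTITION BY HASH(id)
PARTITIONS 8;
-- The application still queries a single table; MySQL routes to the right partition:
SELECT payload FROM big_table WHERE id = 42;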
Do you have an ID on each row? If the answer is yes, you could do something like:
CREATE TABLE table2 AS (SELECT * FROM table1 WHERE id >= (SELECT COUNT(*) FROM table1)/2);
The above statement creates a new table with half of the records from table1.
I don't know if you've already tried, but an index should help in speed for a big table.
CREATE INDEX index_name ON table1 (id)
Note: if you created the table with a unique constraint or primary key, there's already an index on that column.

MySQL indexes: when is too many rows too much?

I am currently working on a new system looking at data stored by a CMS about user access logs.
The table I am looking at extracting data from currently has 5 million rows, spanning about 11 months. The SQL queries I am making usually search on something like uid, which is an indexed column.
The question I have, out of interest and for scalability, is: how large does a table need to get before even indexed columns stop speeding up searches?
Indexes will always be faster if the table is mostly read. If you expect writes to scale faster than reads, then updating the index may become more expensive than it's worth.
If uid is your primary key, then it will always be indexed and there's really no overhead for this index since MySQL needs a key for each row anyway.
Proper indexes will always speed up queries...that's the point of them. It doesn't matter how large your table is, the point of the index is to provide the DBMS with an avenue of retrieving a subset of a table faster than if it had to read through the entire table row by row.
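To sanity-check that the index is still being used as the table grows, EXPLAIN is the usual tool (the table name below is made up; uid is the column from the question):
EXPLAIN SELECT * FROM access_logs WHERE uid = 12345;
-- key = <uid index> and type = ref  -> the lookup is served by the index
-- key = NULL and type = ALL         -> MySQL is doing a full table scan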

Removing a Primary Key (Clustered Index) to increase Insert performance

We've been experiencing SQL timeouts and have identified the bottleneck to be an audit table - all tables in our system contain insert, update and delete triggers which cause a new audit record.
This means that the audit table is the largest and busiest table in the system. Yet data only goes in, and never comes out (under this system) so no select performance is required.
Running a select top 10 returns recently inserted records rather than the 'first' records. order by works, of course, but I would expect that a select top should return rows based on their order on the disk - which I'd expect would return the lowest PK values.
It's been suggested that we drop the clustered index, and in fact the primary key (unique constraint) as well. As I mentioned earlier there's no need to select from this table within this system.
What sort of performance hit does a clustered index create on a table? What are the (non-select) ramifications of having an unindexed, unclustered, key-less table? Any other suggestions?
edit
our auditing involves CLR functions and I am now benchmarking with & without the PK, indexes, FKs etc. to determine the relative cost of the CLR functions & the constraints.
After investigation, the poor performance was not related to the insert statements but instead the CLR function which orchestrated the auditing. After removing the CLR and instead using a straight TSQL proc, performance improved 20-fold.
During the testing I've also determined that the clustered index and identity columns make little or no difference to the insert time, at least relative to any other processing that takes place.
// updating 10k rows in a table with trigger
// using CLR function
PK (identity, clustered) - ~78000ms
No PK, no index - ~81000ms
// using straight TSQL
PK (identity, clustered) - 2174ms
No PK, no index - 2102ms
According to Kimberly Tripp - the Queen of Indexing - having a clustered index on a table actually helps INSERT performance:
The Clustered Index Debate Continued
Inserts are faster in a clustered table (but only in the "right"
clustered table) than compared to a heap. The primary problem here is
that lookups in the IAM/PFS to determine the insert location in a heap
are slower than in a clustered table (where insert location is known,
defined by the clustered key). Inserts are faster when inserted into a
table where order is defined (CL) and where that order is
ever-increasing.
Source: blog post called The Clustered Index Debate Continues....
A great test script and description of this scenario is available on Tibor Karaszi's blog at SQLblog.com.
My numbers don't entirely match his - I see more difference on a batch statement than I do with per-row statements.
With the row count around one million I fairly consistently get a single-row insert loop on clustered index to perform slightly faster than on a non-indexed (clustered taking approximately 97% as long as non-indexed).
Conversely the batch insert (10000 rows) is faster into a non-indexed rather than clustered index (anything from 75%-85% of the clustered insert time).
clustered - loop - 1689
heap - loop - 1713
clustered - one statement - 85
heap - one statement - 62
He describes what's happening on each insert:
Heap: SQL Server need to find where the row should go. For this it
uses one or more IAM pages for the heap, and it cross references these
to one or more PFS pages for the database file(s). IMO, there should
be potential for a noticable overhead here. And even more, with many
users hammering the same table I can imagine blocking (waits) against
the PFS and possibly also IAM pages.
Clustered table: Now, this is dead simple. SQL server navigates the
clustered index tree and find where the row should go. Since this is
an ever increasing index key, each row will go to the end of the table
(linked list).
A table without a key? Not even an auto-incrementing surrogate key? :(
As long as the key is monotonically increasing the index maintenance upon insert should be good -- it's just "added at the end". The "clustered" just means the physical layout of the table follows the index (as the data is part of the index). As long as the index isn't fragmented (see monotonically increasing bit) then the cluster itself/data won't be logically fragmented and this shouldn't be a performance issue. (If there are updates then the clustering is a slightly different story: the record updated may "grow" and cause fragmentation.)
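For reference, a minimal T-SQL sketch of an audit table that keeps a monotonically increasing clustered key (the names here are illustrative, not taken from the question):
-- Ever-increasing IDENTITY as the clustered primary key: new rows always
-- append at the logical end of the clustered index, so inserts do not cause
-- page splits or fragmentation.
CREATE TABLE dbo.AuditLog (
    AuditId   BIGINT IDENTITY(1,1) NOT NULL,
    TableName SYSNAME       NOT NULL,
    ChangedAt DATETIME2     NOT NULL DEFAULT SYSUTCDATETIME(),
    Details   NVARCHAR(MAX) NULL,
    CONSTRAINT PK_AuditLog PRIMARY KEY CLUSTERED (AuditId)
);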
My suggestion is, if that is the chosen route then ... benchmark it with realistic data/load and then decide if such suggestions are warranted. It would be nice to hear whether this change was decided upon, and why.
Happy coding.
Also, any reliance upon order except that from an ORDER BY is flawed by design. It may work now, but it is an implementation detail and may change in subtle ways (as simple as a different query plan). With the auto-increment key, an ORDER BY DESC would always produce the correct result (bear in mind that auto-increment IDs can be skipped, but unless "reset" they will always be increasing based on insert order).
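A quick sketch of that (T-SQL; the audit table and id column names are assumed for illustration):
-- Latest inserts, regardless of physical layout on disk:
SELECT TOP 10 *
FROM dbo.AuditLog
ORDER BY AuditId DESC;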
My primitive understanding is that even INSERT operations are usually faster with a clustered index than with a heap. Additionally, disk-space requirements are lower with clustered indexes.
Some interesting tests / scenarios that might shed some light for your particular circumstance: http://technet.microsoft.com/en-us/library/cc917672.aspx.

MySQL: low cardinality/selectivity columns = how to index?

I need to add indexes to my table (columns) and stumbled across this post:
How many database indexes is too many?
Quote:
“Having said that, you can clearly add a lot of pointless indexes to a table that won't do anything. Adding B-Tree indexes to a column with 2 distinct values will be pointless since it doesn't add anything in terms of looking the data up. The more unique the values in a column, the more it will benefit from an index.”
Is an Index really pointless if there are only two distinct values? Given a table as follows (MySQL Database, InnoDB)
Id (BIGINT)
fullname (VARCHAR)
address (VARCHAR)
status (VARCHAR)
Further conditions:
The Database contains 300 Million records
Status can only be “enabled” and “disabled”
150 Million records have status = enabled and 150 Million records have status = disabled
My understanding is that, without an index on status, a select with where status='enabled' would result in a full table scan with 300 Million records to process?
How efficient is the lookup when I use a BTREE index on status?
Should I index this column or not?
What alternatives (maybe any other indexes) does MySQL InnoDB provide to efficiently look records up by the "where status="enabled" clause in the given example with a very low cardinality/selectivity of the values?
The index that you describe is pretty much pointless. An index is best used when you need to select a small number of rows in comparison to the total rows.
The reason for this is related to how a database accesses a table. Tables can be accessed either by a full table scan, where each block is read and processed in turn, or by a rowid or key lookup, where the database has a key/rowid and reads the exact row it requires.
In the case where you use a where clause based on the primary key or another unique index, eg. where id = 1, the database can use the index to get an exact reference to where the row's data is stored. This is clearly more efficient than doing a full table scan and processing every block.
Now back to your example: you have a where clause of where status = 'enabled'; the index will return 150M rows and the database will have to read each row in turn using separate small reads, whereas accessing the table with a full table scan allows the database to make use of more efficient larger reads.
There is a point at which it is better to just do a full table scan rather than use the index. With mysql you can use FORCE INDEX (idx_name) as part of your query to allow comparisons between each table access method.
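For example (the table and index names below are illustrative, since the question doesn't give a table name), you can compare the two access paths directly:
-- Let the optimizer choose (most likely a full table scan):
SELECT COUNT(*) FROM users WHERE status = 'enabled';
-- Force the hypothetical idx_status index and compare timings:
SELECT COUNT(*) FROM users FORCE INDEX (idx_status) WHERE status = 'enabled';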
Reference:
http://dev.mysql.com/doc/refman/5.5/en/how-to-avoid-table-scan.html
I'm sorry to say that I do not agree with Mike. Adding an index is meant to limit the number of full-record scans MySQL has to do, thereby limiting IO, which usually is the bottleneck.
This indexing is not free; you pay for it on inserts/updates when the index has to be updated, and in the search itself, as it now needs to load the index file (the full index for 300M records is probably not in memory). So it might well be that you get extra IO instead of limiting it.
I do agree with the statement that a binary variable is best stored as one, a bool or tinyint, as that decreases the length of a row and can thereby limit disk IO; comparisons on numbers are also faster.
If you need speed and you seldom use the disabled records, you may wish to have 2 tables, one for enabled and one for disabled records and move the records when the status changes. As it increases complexity and risk this would be my very last choice of course. Definitely do the move in 1 transaction if you happen to go for it.
It just popped into my head that you can check whether an index is actually used by using the EXPLAIN statement. That should show you how MySQL is optimizing the query. I don't really know how MySQL optimizes queries, but from PostgreSQL I do know that you should explain a query on a database approximately the same (in size and data) as the real database. So if you have a copy of the database, create an index on the table and see whether it's actually used. As I said, I doubt it, but I most definitely don't know everything :)
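Roughly (again with an assumed table name), that check looks like this:
EXPLAIN SELECT * FROM users WHERE status = 'enabled';
-- key = NULL, type = ALL  -> the index is ignored (full table scan)
-- key = idx_status        -> the index is actually used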
If the data is distributed 50:50, then a query like where status='enabled' would at best avoid scanning half of the table.
Whether having an index on such a table helps depends entirely on the distribution of the data. For example, if 90% of the entries have status enabled and 10% are disabled, then for the query where status='disabled' the index means scanning only 10% of the table.
So having an index on such columns depends on the distribution of the data.
@a'r's answer is correct, however it needs to be pointed out that the usefulness of an index is determined not only by its cardinality but also by the distribution of data and the queries run on the database.
In OP's case, with 150M records having status='enabled' and 150M having status='disabled', the index is unnecessary and a waste of resource.
In case of 299M records having status='enabled' and 1M having status='disabled', the index is useful (and will be used) in queries of type SELECT ... where status='disabled'.
Queries of type SELECT ... where status='enabled' will still run with a full table scan.
You will hardly need all 150 mln records at once, so I guess "status" will always be used in conjunction with other columns. Perhaps it'd make more sense to use a compound index like (status, fullname)
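A sketch of such a compound index (the table name is assumed, since the question doesn't give one):
-- (status, fullname) lets queries that filter on status and fullname,
-- or filter on status and sort by fullname, be served from the index.
CREATE INDEX idx_status_fullname ON users (status, fullname);
SELECT id FROM users WHERE status = 'enabled' AND fullname = 'Jane Doe';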
Jan, you should definitely index that column. I'm not sure of the context of the quote, but everything you said above is correct. Without an index on that column, you are most certainly doing a table scan on 300M rows, which is about the worst you can do for that data.
Jan, as asked: where your query involves simply "where status=enabled" without some other limiting factor, an index on that column apparently won't help (glad the SO community showed me what's up). If, however, there is a limiting factor, such as "limit 10", an index may help. Also, remember that indexes are also used in group by and order by optimizations. If you are doing "select count(*), status from table group by status", an index would be helpful.
You should also consider converting status to a tinyint where 0 would represent disabled and 1 would be enabled. You're wasting tons of space storing that string vs. a tinyint which only requires 1 byte per row!
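A rough migration sketch (the table and temporary column names are made up; on 300M rows you would want to test this on a copy and probably batch the UPDATE):
-- 1 = enabled, 0 = disabled
ALTER TABLE users ADD COLUMN status_flag TINYINT NOT NULL DEFAULT 0;
UPDATE users SET status_flag = IF(status = 'enabled', 1, 0);
ALTER TABLE users DROP COLUMN status;
ALTER TABLE users CHANGE status_flag status TINYINT NOT NULL;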
I have a similar column in my MySQL database. Approximately 4 million rows, with the distribution of 90% 1 and 10% 0.
I've just discovered today that my queries (where column = 1) actually run significantly faster WITHOUT the index.
Foolishly I deleted the index. I say foolishly, because I now suspect the queries (where column = 0) may have still benefited from it. So, instead I should explicitly tell MySQL to ignore the index when I'm searching for 1, and to use it when I'm searching for 0. Maybe.

Maximum table size for a MySQL database

What is the maximum size for a MySQL table? Is it 2 million rows at 50 GB? 5 million rows at 80 GB?
At the higher end of the size scale, do I need to think about compressing the data? Or perhaps splitting the table if it grew too big?
I once worked with a very large (Terabyte+) MySQL database. The largest table we had was literally over a billion rows.
It worked. MySQL processed the data correctly most of the time. It was extremely unwieldy though.
Just backing up and storing the data was a challenge. It would take days to restore the table if we needed to.
We had numerous tables in the 10-100 million row range. Any significant joins to the tables were too time consuming and would take forever. So we wrote stored procedures to 'walk' the tables and process joins against ranges of 'id's. In this way we'd process the data 10-100,000 rows at a time (Join against id's 1-100,000 then 100,001-200,000, etc). This was significantly faster than joining against the entire table.
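A minimal sketch of that chunked approach (table names, join column, and batch size below are invented for illustration; the real version lived in a stored procedure loop):
SET @start := 1;
SET @batch := 100000;
-- Repeat, advancing @start by @batch, until @start exceeds MAX(id):
SELECT a.id, b.some_column
FROM big_table a
JOIN other_table b ON b.big_id = a.id
WHERE a.id BETWEEN @start AND @start + @batch - 1;
SET @start := @start + @batch;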
Using indexes on very large tables that aren't based on the primary key is also much more difficult. MySQL stores secondary indexes in two pieces: each entry in a secondary index points to a primary key value. So indexed lookups are done in two parts: first MySQL goes to the secondary index and pulls from it the primary key values it needs, then it does a second lookup on the primary key index to find where those rows are.
The net of this is that for very large tables (1-200 Million plus rows) indexing against tables is more restrictive. You need fewer, simpler indexes. And doing even simple select statements that are not directly on an index may never come back. Where clauses must hit indexes or forget about it.
But all that being said, things did actually work. We were able to use MySQL with these very large tables and do calculations and get answers that were correct.
About your first question: the effective maximum size for the database is usually determined by the operating system, specifically the maximum file size MySQL Server is able to create, not by MySQL Server itself. Those limits play a big role in table size limits, and MyISAM works differently from InnoDB, so any table will be dependent on those limits.
If you use InnoDB you will have more options for manipulating table sizes; resizing the tablespace is an option in this case, so if you plan to resize it, this is the way to go. Take a look at the 'The table is full' error page.
I am not sure of the real record limit for each table given all the necessary information (OS, table type, columns, data type and size of each, etc.), and I am not sure if this is easy to calculate, but I've seen simple tables with around 1 billion records in a couple of cases and MySQL didn't give up.