Update on Index Column [duplicate] - mysql

I would like to know a few things regarding MySQL architecture.
1. How does SQL process insert, delete, and update operations on an indexed table?
2. It is said that changes are made in the change buffer only when the index page is not in the buffer pool. So if changes are made after the buffer pool has loaded the index page in question, does the same page also have to be altered on disk? Does that mean one operation has to be performed in three different places?
3. How are NULL values indexed? Where would they be stored in a B+tree?
4. If we update data that is part of the clustered index, when will it be updated on disk?
5. What happens during bulk loading?

How to process insert/update/delete...
1. Fetch (and cache) the index block(s) needed for locating the row(s) to be updated/deleted, or the blocks where the new row(s) will be inserted.
2. Fetch the data block(s). Note that all indexes include the PRIMARY KEY, which is clustered with the data.
3. Modify the data block(s) to reflect the changes. Also deal with remembering the old data -- in case of an eventual ROLLBACK.
4. Update unique index blocks (that includes the PK).
5. Store non-unique index changes in the change buffer. (As you noted.)
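As a concrete illustration of the structures those five steps touch, here is a minimal sketch (the table and column names are made up):

    CREATE TABLE t (
        id   INT AUTO_INCREMENT,
        u    CHAR(36),
        x    INT,
        note VARCHAR(100),
        PRIMARY KEY (id),    -- clustered with the data (steps 2-3)
        UNIQUE KEY  (u),     -- unique: must be checked/updated right away (step 4)
        INDEX       (x)      -- non-unique: its changes can sit in the change buffer (step 5)
    ) ENGINE=InnoDB;

    UPDATE t SET x = 42, note = 'hi' WHERE id = 7;   -- step 1 walks the PK BTree to locate the row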
The change buffer is designed to be 'transparent' with respect to the actual index blocks.
A lookup by an index will always 'do the right thing', whether the entry is in the CB or not.
Folding of CB entries back into actual index blocks is done in the 'background' and/or when running out of room. (The CB defaults to 1/4 of the buffer_pool, I think.)
Sufficient information is stored in the transaction log, so a crash will not cause the loss of pending index updates.
Clearly the CB was invented for performance. An index update can be delayed, and meanwhile it takes a lot less space (often only a few dozen bytes) than the index block (16KB) that needs updating. Multiple changes can (usually) be applied to a single index block -- this is the main savings. But note: because of their randomness, indexes on UUIDs, MD5s, etc., cannot make good use of the CB. A non-unique index on the current datetime/timestamp is a case where the CB's buffering really shines.
(Sorry, my knowledge of the CB is a bit vague for the level at which you are asking. I suggest you read the code.)
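For what it's worth, the CB's configuration can at least be inspected from SQL (the values mentioned in the comments are the defaults, as I recall them):

    SHOW VARIABLES LIKE 'innodb_change_buffering';        -- 'all': buffer inserts, delete-marks and purges
    SHOW VARIABLES LIKE 'innodb_change_buffer_max_size';  -- 25 (percent of the buffer pool, i.e. the 1/4 mentioned above)
    SHOW ENGINE INNODB STATUS\G   -- look for the "INSERT BUFFER AND ADAPTIVE HASH INDEX" section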
NULL... I believe it is treated as a separate value that sorts before all non-NULL values in the B+Tree. But to confuse the issue, there is a flag determining whether NULLs are treated as equal to each other. And there are restrictions on PRIMARY/UNIQUE keys.
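For example, in InnoDB a UNIQUE index treats NULLs as not equal to each other, so multiple NULLs are allowed (a quick sketch):

    CREATE TABLE t_null (u INT NULL, UNIQUE KEY (u)) ENGINE=InnoDB;
    INSERT INTO t_null VALUES (NULL), (NULL), (1);   -- succeeds: the two NULLs do not collide
    INSERT INTO t_null VALUES (1);                   -- fails with a duplicate-key error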
Related to NULL... When doing PARTITION BY RANGE on some variant/function of DATE or DATETIME, invalid dates turn into NULL, which is explicitly stored in the 'first' partition. Newbies are often puzzled as to why partition pruning does not seem to work. (Recommended partial workaround: have a 'first' partition that is otherwise empty.)
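A sketch of that workaround (the table and partition names are made up):

    CREATE TABLE events (
        dt  DATETIME NOT NULL,
        msg VARCHAR(100)
    )
    PARTITION BY RANGE (TO_DAYS(dt)) (
        PARTITION p_null VALUES LESS THAN (0),   -- catches NULL from invalid dates; otherwise stays empty
        PARTITION p2023  VALUES LESS THAN (TO_DAYS('2024-01-01')),
        PARTITION p2024  VALUES LESS THAN (TO_DAYS('2025-01-01')),
        PARTITION pmax   VALUES LESS THAN MAXVALUE
    );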
Clustered and UNIQUE indexes... All(?) write operations must check all unique indexes, hence the CB is not involved with such. Note: In InnoDB, the PRIMARY KEY is always clustered and unique and cannot(?) have NULLs.
Bulk loading... I find that a 100-row INSERT will run 10 times as fast as 100 individual INSERTs. (This is due to parsing, etc.) But at the low level, a batch insert or LOAD DATA is just a bunch of individual inserts. So, the above discussion applies.
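To illustrate (hypothetical table and values):

    -- 100 statements: 100 round trips, 100 parses
    INSERT INTO t (a, b) VALUES (1, 'x');
    INSERT INTO t (a, b) VALUES (2, 'y');
    -- ... and 98 more ...

    -- one statement: roughly 10 times as fast, per the observation above
    INSERT INTO t (a, b) VALUES (1, 'x'), (2, 'y'), (3, 'z') /* ... up to 100 rows ... */;

    -- for data already in a file:
    LOAD DATA INFILE '/tmp/rows.csv' INTO TABLE t
        FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n';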
Bonus answers...
"IODKU" (INSERT ... ON DUPLICATE KEY UPDATE) is pretty much follows the 1..5 steps above. In locating the row to update, it discovers whether to update or insert, then proceeds accordingly.
REPLACE is really shorthand for DELETE plus INSERT. But note this anomaly... If there are two unique keys on the table, a one-row REPLACE might delete 2 rows before inserting the 1 row.
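A minimal sketch of that anomaly (hypothetical table):

    CREATE TABLE r (
        a INT NOT NULL,
        b INT NOT NULL,
        c INT,
        PRIMARY KEY (a),
        UNIQUE KEY  (b)
    ) ENGINE=InnoDB;

    INSERT INTO r VALUES (1, 10, 100), (2, 20, 200);

    -- collides with (1,10,100) on the PK and with (2,20,200) on b:
    REPLACE INTO r VALUES (1, 20, 999);
    -- both old rows are deleted, then the single new row is inserted; the table now has 1 row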

Related

What happens during the insertion, deletion and update in sql?


How can I detect if a MySQL index is necessary or required?

How can I detect if a MySQL index is necessary or required?
We have the idea that some queries can be improved. And I know that I can dive into the slow query logs ... but I ran across the post below for MS SQL and was wondering if there is an easy way of analyzing whether an index is required (and will give immediate speed improvements) for the current MySQL database.
Help appreciated
Resource for MS SQL: https://dba.stackexchange.com/questions/56/how-to-determine-if-an-index-is-required-or-necessary
You can't.
There are ways to detect, over a period of time, whether an index is used. But there is no way to be sure that an index is not used. Let's say you have a once-a-month task that does some major maintenance on the table. And you really need a certain index to keep the task from locking the table and bringing down the application. If you checked for index usage for most of the month, but failed to include that usage, you might decide that you don't need the index. Then you would drop the index... and be sorry. (This is a real anecdote.)
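For the "over a period of time" part, MySQL 5.7+ ships the sys schema, which can report indexes with no recorded reads since the server last started -- subject to exactly the caveat above (the once-a-month job may simply not have run yet). A sketch:

    -- indexes with no recorded reads since the last server restart
    SELECT * FROM sys.schema_unused_indexes
    WHERE  object_schema = 'your_db';    -- 'your_db' is a placeholder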
Meanwhile, there are some simplistic rules about indexes (a small example follows the list)...
INDEX(a) is unnecessary if you also have INDEX(a,b).
INDEX(id) is unnecessary if you also have PRIMARY KEY(id) or UNIQUE(id).
An index with 5 or more columns may be used, but is unlikely to be "useful". (Shorten it.)
INDEX(a), INDEX(b) is not the same as INDEX(a,b).
INDEX(b,a) is not the same as INDEX(a,b); you may need both.
INDEX(flag), where flag has a small number of distinct values, will probably never be used -- the optimizer will scan the table instead.
In many cases, "prefix" indexing (INDEX(foo(10))) is useless. (But there are many exceptions.)
"I indexed every column" -- a bad design pattern.
Often, but not always, having both a PRIMARY KEY and a UNIQUE key means that something is less than optimal.
InnoDB tables really should have an explicit PRIMARY KEY.
InnoDB implicitly includes the PK in any secondary key. So, given PRIMARY KEY(id), INDEX(foo) is really INDEX(foo, id).
Sometimes the Optimizer will ignore the WHERE clause and use an index for the ORDER BY.
Some queries have such skewed properties that the Optimizer will use a different index depending on different constants. (I have literally seen as many as 6 different explain plans for one query.)
"Index merge intersect" is almost always not as good as a composite index.
There are exceptions to most of these tips.
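For example, applying the first two rules (hypothetical table; the sys view is available in MySQL 5.7+):

    CREATE TABLE t (
        id INT PRIMARY KEY,
        a  INT,
        b  INT,
        INDEX (a),     -- redundant: anything this serves, INDEX(a,b) serves too
        INDEX (a, b)
    );

    -- have the server point out such cases:
    SELECT * FROM sys.schema_redundant_indexes;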
So, I prefer to take all the queries (SELECTs, UPDATEs, and DELETEs), decide on the optimal index for each, eliminate redundancies, etc, in order to find the "best" set of indexes. See my cookbook on creating an index, given a SELECT.
You should definitely spend some time reading up on indexing; there's a lot written about it, and it's important to understand what's going on.
Broadly speaking, an index imposes an ordering on the rows of a table.
For simplicity's sake, imagine a table is just a big CSV file. Whenever a row is inserted, it's inserted at the end. So the "natural" ordering of the table is just the order in which rows were inserted.
Imagine you've got that CSV file loaded up in a very rudimentary spreadsheet application. All this spreadsheet does is display the data, and numbers the rows in sequential order.
Now imagine that you need to find all the rows that have some value "M" in the third column. Given what you have available, you have only one option: you scan the table, checking the value of the third column for each row. If you've got a lot of rows, this method (a "table scan") can take a long time!
Now imagine that in addition to this table, you've got an index. This particular index is the index of values in the third column. The index lists all of the values from the third column, in some meaningful order (say, alphabetically) and for each of them, provides a list of row numbers where that value appears.
Now you have a good strategy for finding all the rows where the value of the third column is "M". For instance, you can perform a binary search! Whereas the table scan requires you to look at N rows (where N is the number of rows), the binary search only requires that you look at log(N) index entries, in the very worst case. Wow, that's sure a lot easier!
Of course, if you have this index, and you're adding rows to the table (at the end, since that's how our conceptual table works), you need to update the index each and every time. So you do a little more work while you're writing new rows, but you save a ton of time when you're searching for something.
So, in general, indexing creates a tradeoff between read efficiency and write efficiency. With no indexes, inserts can be very fast -- the database engine just adds a row to the table. As you add indexes, the engine must update each index while performing the insert.
On the other hand, reads become a lot faster.
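In MySQL terms, the analogy above plays out roughly like this (a sketch; the table, column and index names are made up, and the EXPLAIN behavior described in the comments is only what you would typically see):

    -- no index on col3: EXPLAIN typically reports type: ALL (a full table scan)
    EXPLAIN SELECT * FROM people WHERE col3 = 'M';

    -- add the index described above
    ALTER TABLE people ADD INDEX ix_col3 (col3);

    -- now EXPLAIN typically reports type: ref using ix_col3 --
    -- at the cost of maintaining ix_col3 on every INSERT/UPDATE/DELETE
    EXPLAIN SELECT * FROM people WHERE col3 = 'M';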

How does MySQL determine if an INSERT is unique?

I would like to know if there is an implicit SELECT being run prior to performing an INSERT on a table that has any column defined as UNIQUE. I cannot find anything about this in the documentation for INSERT.
I have asked some other questions that nobody seems to be able to answer - perhaps because I'm not properly explaining myself - that are related to the above question.
If I understand correctly, then I assume the following would be true:
CASE 1:
You have a table with 1 billion rows. Each row has a UUID column which is unique. If you perform an insert the server must do some kind of implicit SELECT COUNT(*) FROM table WHERE UUID = [new uuid] and determine if the count is 0 or 1. Correct?
CASE 2:
You have a table with 1 billion rows. Each row has a composite unique key consisting of a DATE and a UUID. If you perform an insert the server must do some kind of implicit SELECT COUNT(*) FROM table WHERE DATE = [date] AND UUID = [new uuid] and check if the count is 0 or 1. Yes?
I use the word implicit because at some point, somewhere in the process, the server MUST be checking the value. If not it would require that the laws of physics dictate that two identical rows cannot exist - and as far as I'm informed physics don't play a big role when it comes to the uniqueness of numbers written down somewhere, in binary, on a magnetic disk in a computer.
Let's assume your 1 billion rows are equally and sequentially distributed across 2,000 different dates. Would this not mean that case 2 would perform the insert faster because it can look up the UUIDs segmented into dates? If not, then would it be better to use case 1 for insert speed - and in that case, why?
This question is theoretical, so don't bother with considering regular SELECT performance in this case. The primary key wouldn't be the UUID+DATE index.
As a response to comments: The UUID in my case is designed solely for the purpose of avoiding duplicate entries because of bad connections. Since you cannot make the same entry for a different date twice (without it logically being a new entry), the UUID does not need to be globally unique - it needs only be unique for each date. This is why I can permit it being part of a composite key.
There are a few flaws and misconceptions in the previous answers; rather than point them out, I will start from scratch.
Referring to InnoDB only...
An INDEX (including UNIQUE and PRIMARY KEY) is a BTree. BTrees are very efficient at locating one row based on the key the BTree is sorted on. (They are also efficient at scanning in key order.) The "fan out" of a typical BTree in MySQL is on the order of 100. So, for a million rows, the BTree is about 3 levels deep (log100(million)); for a trillion rows, it is only twice as deep (approximately). So, even if nothing is cached, it takes only 3 disk hits to locate one particular row in a million-row index.
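The arithmetic, done in SQL just for illustration (a fan-out of 100 is only a rule of thumb):

    SELECT LOG(100, 1000000);          -- about 3: levels for a million rows
    SELECT LOG(100, 1000000000000);    -- about 6: levels for a trillion rows (only twice as deep)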
I am being loose here with "index" versus "table" because they are essentially the same (in InnoDB, at least). Both are BTrees. What differs is what is in the leaf nodes: the leaf nodes of a table BTree have all the columns. (I am ignoring the off-block storage for TEXT/BLOB in InnoDB.) An INDEX (other than the PRIMARY KEY) has a copy of the PRIMARY KEY in the leaf node. This is how a secondary key can get from the INDEX BTree to the rest of the row's columns, and how InnoDB avoids storing multiple copies of all the columns.
The PRIMARY KEY is "clustered" with the data. That is, one BTree contains all the columns of all the rows, and it is ordered according to the PRIMARY KEY specification.
Locating a record by PRIMARY KEY is one BTree search. Locating a record by a SECONDARY KEY is two BTree searches, one in the secondary INDEX's BTree which gives you the PRIMARY KEY; then a second one to drill down the data/PK BTree.
PRIMARY KEY(UUID)... Since the UUID is very random, the "next" row you INSERT will be located at a 'random' spot. If the table is much bigger than can be cached in the buffer_pool, the block the new row needs to go into is very likely not to be cached. This leads to a disk hit to pull the block into cache (the buffer pool), and eventually another disk hit to write it back to disk.
Since a PRIMARY KEY is a UNIQUE KEY, something else is going on at the same time (No SELECT COUNT(*) etc). The UNIQUEness is checked after the block is fetched and before deciding whether to give a "duplicate key" error, or to store the row. Also, if the block is "full" then the block will need to be 'split' to make room for the new row.
INDEX(UUID) or UNIQUE(UUID)... There is a BTree for that index. On INSERT, some randomly located block will need to be fetched, modified, possibly split, and written back to disk, very much like the PK discussion above. If you had UNIQUE(UUID), there would also be a check for UNIQUEness and possibly an error message. In either case, there is, now and/or later, disk I/O.
AUTO_INCREMENT PK... If the PRIMARY KEY is an auto_increment, then new records are added to the 'last' block in the data BTree. When it gets full (every 100 or so records) there is (logically) a block split and flush of the old block to disk. (Actually, the I/O is probably delayed and done in the background.)
PRIMARY KEY(id) + UNIQUE(UUID)... Two BTrees. On an INSERT, there is activity in both. This is likely to be worse than simply PRIMARY KEY(UUID). Add up the disk hits above to see what I mean.
"Disk hits" are the killer in huge tables, and especially with UUIDs. "Count the disk hits" to get a feel for performance, especially when comparing two possible techniques.
Now for your secret sauce... PRIMARY KEY(date, UUID)... You are allowing the same UUID to show up on two different days. This can help! Back to how a PK works and checking for UNIQUEness... The "compound" index (date, UUID) is checked for UNIQUEness as the record is inserted. The records are sorted by date+UUID, so all of today's records are clumped together. IF (and this might be a big IF) one day's data fits in the buffer pool (but the entire table does not), then this is what is happening every morning... INSERTs are suddenly adding new records to the "end" of the table because of the new "date". These inserts are occurring randomly within the new date. Blocks in the buffer_pool are being pushed out to disk to make room for the new blocks. But, nicely, what you see is smooth, fast, INSERTs. This is unlike what you saw with PRIMARY KEY(UUID), when many rows had to wait for a disk read before UNIQUEness could be checked. All of today's blocks stay cached, and you don't have to wait for I/O.
But, if you ever get so big that you cannot fit one day's data in the buffer pool, things will start slowing down, first at the end of the day, then it will creep earlier and earlier as the frequency of INSERTs increases.
By the way, PARTITION BY RANGE(date), together with PRIMARY KEY(uuid, date) has somewhat similar characteristics. (Yes I deliberately flipped the PK columns.)
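A sketch of that last layout (names and types are made up; note that with partitioning, every UNIQUE key -- including the PK -- must contain the partitioning column, which is why the date must appear in the PK at all):

    CREATE TABLE entries (
        dt      DATE NOT NULL,
        uuid    BINARY(16) NOT NULL,
        payload VARBINARY(255),
        PRIMARY KEY (uuid, dt)          -- dt included to satisfy the partitioning rule
    )
    PARTITION BY RANGE COLUMNS (dt) (
        PARTITION p2024_01 VALUES LESS THAN ('2024-02-01'),
        PARTITION p2024_02 VALUES LESS THAN ('2024-03-01'),
        PARTITION pmax     VALUES LESS THAN (MAXVALUE)
    );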
When inserting large amounts of data in a table, keep in mind that the data ends up being physically stored on a disk somewhere. To actually read and write the data from the disk, MySQL (and most other RDBMSs) uses something called a clustered index. If you specify a Primary Key or a Unique Index on a table, the column or columns participating in the key/index become the clustered index key. This means that on the disk, data is physically stored in the same order as the values in the key column(s).
By utilising the clustered index, the database engine can quickly determine whether a value already exists, without having to scan the whole table. In theory, if a table contains N = 1,000,000 records, the engine on average needs log2(N) = 20 operations to check if a value exists, regardless of how many columns participate in the index. For secondary indexes, a B-tree or a hash table is typically used (search the web for these terms, for a detailed explanation of how they work).
The conclusion of this article is wrong:
"... MySQL is unable to buffer enough data to guarantee a value is
unique and is therefore caused to perform a tremendous amount of
reading for each insert to guarantee uniqueness"
This is incorrect. Checking uniqueness does not really require any additional work, as the engine had to locate the place to insert the new record anyway. What causes the performance slowdown is the use of UUIDs. Remember that UUIDs are randomly generated whenever a new record is inserted. This means that the new record needs to be inserted at a random physical position on the disk, and this causes existing data to be shifted around to accommodate the new record. If, on the other hand, the index column is a value that increases monotonically (such as an auto-increment INT), new records will always be inserted after the last record, meaning no existing data will ever need to be moved.
In your case, there won't be any performance difference between case 1 and case 2. But you will still run into trouble because of the randomness of the UUIDs. It would be much better if you used an auto-incrementing value instead of the UUID. Also, since UUIDs are always unique by nature, it really doesn't make much sense to index them with a UNIQUE constraint. Alternatively, if you really must use UUIDs, make sure that you have a primary key on your table that is based on auto-incrementing INTs, to ensure that new records are never randomly inserted on the disk.
This is the very purpose of a UNIQUE constraint:
A UNIQUE index creates a constraint such that all values in the index must be distinct. An error occurs if you try to add a new row [or update an existing row] with a key value that matches [another] existing row.
Earlier in the same manual page, it is stated that
A column list of the form (col1,col2,...) creates a multiple-column index. Index key values are formed by concatenating the values of the given columns.
How this constraint is implemented is not documented, but it must somehow equate to a preliminary SELECT with the values to be inserted/updated. The cost of such a check is often negligible, because, by definition, the fields are indexed (this overhead becomes relevant when dealing with bulk inserts).
The number of columns covered by the index is not meaningful in terms of performance (for example, compared to the number of rows in the table). It does impact the disk space occupied by the index, but this should really not matter in your design decisions.
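For illustration, using the question's composite key (hypothetical values):

    CREATE TABLE u (
        uuid CHAR(36) NOT NULL,
        dt   DATE     NOT NULL,
        UNIQUE KEY (dt, uuid)
    );

    INSERT INTO u VALUES ('u-001', '2024-01-01');   -- ok
    INSERT INTO u VALUES ('u-001', '2024-01-01');   -- rejected: duplicate entry for the unique key (error 1062)
    INSERT INTO u VALUES ('u-001', '2024-01-02');   -- ok: same uuid, but the (dt, uuid) pair is new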

Better to insert at end of InnoDB primary key, or scattered throughout?

What are the performance characteristics of inserting many small records from many clients into an InnoDB table, where the inserts all happen at the end of the primary key (e.g. with UUIDs where the leading digits are based on a timestamp) vs. scattered throughout the primary key (e.g. with UUIDs where the leading digits aren't based on a timestamp)? Is one preferable to the other?
Appending keys to the end of an index is preferred because the index does not need to be reordered.
When inserting rows in the middle of the primary key index, since actual table data is stored on the same page as the primary keys in InnoDB, page data must be reordered (and relocated if a page fills up). MySQL does leave room for growth in each page, but some reordering and relocating is inevitable.
Page size in InnoDB is 16K, so if inserted rows are small, the effect is less.
Appending rows to the end of an index also requires fewer locks, though there may be more contention. Try to insert multiple rows in the same statement.
Appending also causes less fragmentation on disk, so sequential pages stay closer together. Disk fragmentation doesn't matter much, however, unless you are querying large amounts of sequential rows or performing table scanning instead of using indexes.
I wouldn't create an incremental surrogate primary key just so you can insert rows in order unless your number of writes (inserts) is higher than your reads (or perhaps because your rows are large and you are experiencing performance issues). If your reads are higher than your writes, being able to use a natural primary key may be a huge performance benefit.
Appending is more performant, but you should choose your method by considering all factors.
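One related trick: if you are on MySQL 8.0 and must keep UUIDs, UUID_TO_BIN(..., 1) stores them with the timestamp portion swapped to the front, so version-1 UUIDs become roughly time-ordered and inserts land near the end of the primary key rather than scattered throughout (a sketch):

    CREATE TABLE m (
        id      BINARY(16) NOT NULL PRIMARY KEY,
        payload VARCHAR(255)
    );

    INSERT INTO m (id, payload) VALUES (UUID_TO_BIN(UUID(), 1), 'hello');  -- roughly append-only
    SELECT BIN_TO_UUID(id, 1) AS id, payload FROM m;                       -- convert back for display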

Removing a Primary Key (Clustered Index) to increase Insert performance

We've been experiencing SQL timeouts and have identified the bottleneck to be an audit table - all tables in our system contain insert, update and delete triggers which create a new audit record.
This means that the audit table is the largest and busiest table in the system. Yet data only goes in, and never comes out (under this system) so no select performance is required.
Running a select top 10 returns recently inserted records rather than the 'first' records. order by works, of course, but I would expect that a select top should return rows based on their order on disk - which I'd expect would return the lowest PK values.
It's been suggested that we drop the clustered index, and in fact the primary key (unique constraint) as well. As I mentioned earlier there's no need to select from this table within this system.
What sort of performance hit does a clustered index create on a table? What are the (non-select) ramifications of having an unindexed, unclustered, key-less table? Any other suggestions?
edit
our auditing involves CLR functions and I am now benchmarking with & without PK, indexes, FKs etc to determine the relative cost of the CLR functions & the constraints.
After investigation, the poor performance was not related to the insert statements but instead the CLR function which orchestrated the auditing. After removing the CLR and instead using a straight TSQL proc, performance improved 20-fold.
During the testing I've also determined that the clustered index and identity columns make little or no difference to the insert time, at least relative to any other processing that takes place.
// updating 10k rows in a table with trigger
// using CLR function
PK (identity, clustered) - ~78000ms
No PK, no index - ~81000ms
// using straight TSQL
PK (identity, clustered) - 2174ms
No PK, no index - 2102ms
According to Kimberly Tripp - the Queen of Indexing - having a clustered index on a table actually helps INSERT performance:
The Clustered Index Debate Continued
Inserts are faster in a clustered table (but only in the "right" clustered table) than compared to a heap. The primary problem here is that lookups in the IAM/PFS to determine the insert location in a heap are slower than in a clustered table (where insert location is known, defined by the clustered key). Inserts are faster when inserted into a table where order is defined (CL) and where that order is ever-increasing.
Source: blog post called The Clustered Index Debate Continues....
A great test script and description of this scenario is available on Tibor Karaszi's blog at SQLblog.com.
My numbers don't entirely match his - I see more difference on a batch statement than I do with per-row statements.
With the row count around one million I fairly consistently get a single-row insert loop on clustered index to perform slightly faster than on a non-indexed (clustered taking approximately 97% as long as non-indexed).
Conversely the batch insert (10000 rows) is faster into a non-indexed rather than clustered index (anything from 75%-85% of the clustered insert time).
clustered - loop - 1689
heap - loop - 1713
clustered - one statement - 85
heap - one statement - 62
He describes what's happening on each insert:
Heap: SQL Server needs to find where the row should go. For this it uses one or more IAM pages for the heap, and it cross-references these to one or more PFS pages for the database file(s). IMO, there should be potential for a noticeable overhead here. And even more, with many users hammering the same table I can imagine blocking (waits) against the PFS and possibly also IAM pages.
Clustered table: Now, this is dead simple. SQL Server navigates the clustered index tree and finds where the row should go. Since this is an ever-increasing index key, each row will go to the end of the table (linked list).
A table without a key? Not even an auto-incrementing surrogate key? :(
As long as the key is monotonically increasing the index maintenance upon insert should be good -- it's just "added at the end". The "clustered" just means the physical layout of the table follows the index (as the data is part of the index). As long as the index isn't fragmented (see monotonically increasing bit) then the cluster itself/data won't be logically fragmented and this shouldn't be a performance issue. (If there are updates then the clustering is a slightly different story: the record updated may "grow" and cause fragmentation.)
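As a sketch of what that looks like for an audit table (the table and column names here are made up, not your actual schema):

    CREATE TABLE dbo.AuditLog (
        AuditId   INT IDENTITY(1,1) NOT NULL,
        TableName SYSNAME           NOT NULL,
        ChangedAt DATETIME2         NOT NULL DEFAULT SYSUTCDATETIME(),
        Detail    NVARCHAR(MAX)     NULL,
        -- monotonically increasing clustered key: every insert simply appends at the end
        CONSTRAINT PK_AuditLog PRIMARY KEY CLUSTERED (AuditId)
    );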
My suggestion is, if that is the chosen route then ... benchmark it with realistic data/load and then decide if such suggestions are warranted. It would be nice to see if this change was decided upon, and why.
Happy coding.
Also, any reliance upon order, except that imposed by an ORDER BY, is flawed by design. It may work now, but it is an implementation detail and may change in subtle ways (as simple as a different query plan). With the auto-increment key, an ORDER BY DESC would always produce the correct result (bear in mind that auto-increment IDs can be skipped, but unless "reset" they will always be increasing based on insert order).
My primitive understanding is that even INSERT operations are usually faster with a clustered index than with a heap. Additionally, disk-space requirements are lower with clustered indexes.
Some interesting tests / scenarios that might shed some light for your particular circumstance: http://technet.microsoft.com/en-us/library/cc917672.aspx.