MySQL InnoDB OPTIMIZE fails after hours

Running OPTIMIZE TABLE on the table fails with:

    services.profile  optimize  note    Table does not support optimize, doing recreate + analyze instead
    services.profile  optimize  error   Creating index 'PRIMARY' required more than 'innodb_online_alter_log_max_size' bytes of modification log. Please try again.
    services.profile  optimize  status  Operation failed
The table is 300GB including indexes.
The variable MySQL complains about, after working for 3 hours, is:
    innodb_online_alter_log_max_size = 5500000000
The table receives no more than a few MB of writes during that time.
What is wrong with InnoDB/MySQL that a simple OPTIMIZE of a 300GB table fails after 3 hours of "work" because a 5.5GB buffer ran full?
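For what it's worth, the variable is dynamic, so one workaround (untested here, and the rebuild will still take hours) is to raise the ceiling and retry; the 20GB value below is only an illustration:

    -- Requires SUPER / SYSTEM_VARIABLES_ADMIN privileges; 20GB is an arbitrary illustration
    SET GLOBAL innodb_online_alter_log_max_size = 20 * 1024 * 1024 * 1024;
    -- Retry the rebuild (on InnoDB, OPTIMIZE maps to recreate + analyze)
    OPTIMIZE TABLE services.profile;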

Don't use OPTIMIZE TABLE on InnoDB tables -- it provides little, if any, benefit.
InnoDB suffers from some fragmentation, but not enough to be worth the downtime of defragmenting. And the data will quickly become fragmented again.
The main cause of fragmentation is "block splits". For example, when you add a row to a 16KB block that is 'full', the block is split into two: the, say, 89 existing rows plus 1 new row become, say, 45 rows in each of two blocks. As you continue to insert rows, these (and other) blocks gradually fill up until they split again. After a lot of such inserts, the table settles at about 69% full.
So, you say, won't that slow things down a lot? No. Point queries drill down a BTree -- a relatively constant time. Range scans hit more blocks, but the number of rows scanned does not change. Etc.
Also, InnoDB will combine two adjacent blocks that are "too empty", thereby avoiding (usually) some worst case scenarios. If you DELETE lots of rows, blocks may get rather empty. This "combining" keeps the fragmentation under control.
If you "fragmentation" refers to the blocks being scattered around the disk, well, that is not cured by OPTIMIZE TABLE. And any block split will use a new block from 'anywhere'.
UPDATE is somewhere between INSERT (of the text in the row grows) and DELETE (if the data shrinks).
(There are many more details that I left out.)
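If you still want a rough measure of how much reclaimable space a table is carrying before deciding, information_schema gives an approximation (schema and table names taken from the question above):

    SELECT  table_name,
            ROUND(data_length  / 1024 / 1024) AS data_mb,
            ROUND(index_length / 1024 / 1024) AS index_mb,
            ROUND(data_free    / 1024 / 1024) AS free_mb   -- approximate unused space
    FROM information_schema.tables
    WHERE table_schema = 'services' AND table_name = 'profile';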

Related

MariaDB, do new inserts replace deleted rows on disk?

Can someone point me in the right direction? I can't find any documentation on this behavior.
We know that when you delete rows from a table you end up with "holes", which you can defragment with OPTIMIZE. Do new inserts automatically fill in those holes if left alone? Is there a way to force that behavior if not? Using InnoDB tables for revolving logs, deleting old rows and adding new ones, would the table roll over or continuously consume disk space? Or would a different engine be better suited for this?
Yes, I know of table partitions; I want to explore all options first.
Since this is mostly a non-issue, I will assume you are asking for academic reasons?
InnoDB (you should be using that Engine!) stores the data (and each secondary index) in separate B+Trees.
The data's BTree is ordered by the PRIMARY KEY. The various leaf nodes will be filled to different degrees, based on the order of inserts, deletes, updates (that change the row length), temporary transactional locks on rows, etc, etc.
That last one is because one transaction sees effectively an instantaneous snapshot of the data, possibly different than another transaction's view. This implies that multiple copies of a row may coexist.
The buffer_pool holds 16KB blocks. Each block holds a variable number of rows. This number changes with the changing tides. If two adjacent blocks become "too empty", they will be combined.
Totally empty blocks (say, due to lots of deletes) will be put on a free chain for later reuse by Inserts. But note that the disk used by the table will not shrink.
The failure to shrink is usually not a problem -- most tables grow; any shrinkage is soon followed by a new growth spurt.
PARTITIONs are usually not worth using. However, that is the best way to "keep data for only 90 days", then use DROP PARTITION instead of a big, slow DELETE. (That is about the only use for PARTITION.)
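A minimal sketch of that pattern, with made-up table and column names (note the partitioning column has to be part of the PRIMARY KEY):

    -- Hypothetical revolving-log table partitioned by day
    CREATE TABLE log (
        id      BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
        created DATETIME NOT NULL,
        msg     VARCHAR(255),
        PRIMARY KEY (id, created)
    ) ENGINE=InnoDB
    PARTITION BY RANGE (TO_DAYS(created)) (
        PARTITION p2024_01_01 VALUES LESS THAN (TO_DAYS('2024-01-02')),
        PARTITION p2024_01_02 VALUES LESS THAN (TO_DAYS('2024-01-03')),
        PARTITION p_future    VALUES LESS THAN MAXVALUE
    );
    -- Expire the oldest day instantly instead of running a big DELETE
    ALTER TABLE log DROP PARTITION p2024_01_01;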
If you add up all the bytes in the INTs (4 bytes each), VARCHARs (pick the average length), etc., you will get what seems like a good estimate for the disk space being used. But due to the things discussed above, you need to multiply that number by 2 to 3 to get a better estimate of the disk space actually consumed by the table.

In MySQL (InnoDB), how does a big table affect the buffer (memory)?

I know that if a table is too big, its indexes can hardly fit into the buffer_pool, so using an index may result in a large number of random disk I/Os. So a full table scan, in general, is probably much faster than an index scan even though the index scan only reads about 1% of the rows.
What I am confused about is:
[0] If there is a big table (30 million rows) and many small tables (each of which fits into memory/the buffer), will the big table also affect queries on the small tables?
My logic is: the buffer is shared by the whole database, so the big table will take most of the buffer. So the indexes of the small tables can also hardly fit into the buffer (or they are often evicted from it). Then the above conclusion (full table scan vs index scan) also applies to this case.
[1] When the big table is partitioned into many small tables (on just one machine), the buffer situation should stay identical. So such partitioning cannot solve this problem (full table scan vs index scan), right? So "big table" should not mean "one big table", but rather "a huge database, or a large total amount of data".
To sum up, is my conclusion right? If it is wrong, why? Please give me a hint. Thanks very much.
The buffer_pool is shared across all tables, data and index. But the rest of what you said needs to focus on "blocks" instead of "tables".
Caching is performed on a block basis. A block (in InnoDB) is 16KB. Most of the innodb_buffer_pool_size is dedicated to data and index blocks.
The cache is run (approximately) as LRU (Least Recently Used) -- That is, the least recently used blocks are tossed from the cache when other blocks are needed.
No, a table or index is not "entirely" loaded into the cache. Instead, the desired blocks are loaded (and purged) when needed.
If all the data and indexes fit into the cache, then (eventually) all the blocks will 'live' there.
If the data plus indexes are too big, then blocks will come and go as needed. Usually this is nearly as good as having them all loaded. For example, if you are usually using "recent" records, then the blocks containing them will 'stay' in the cache; meanwhile "old" blocks will get bumped out.
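As a rough way to see how often a needed block is actually missing from the buffer_pool (these are standard status counters; the ratio is only an approximation of the hit rate):

    SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read_requests';  -- logical read requests
    SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_reads';          -- requests that had to go to disk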
If you are using UUIDs (GUIDs), performance can get really bad -- this is because of the random nature of such indexed values.
Full table scans (and full index scans) should be avoided whether or not things are too big to fit in cache. They are costly, and they can usually be avoided by proper indexing and/or query formulation.
When you do a full table scan on a table that is bigger than the cache, something's gotta give. You will have to do some I/O, and some blocks will be bumped out of cache. However, there is a technique built in that prevents blindly purging the entire cache for an occasional table scan. For further discussion, research innodb_old_blocks_pct. (No, I don't recommend changing it from the default 37%.)
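For reference, the relevant settings can be inspected like this (37 and 1000 are the usual defaults, shown here only as an illustration):

    SHOW GLOBAL VARIABLES LIKE 'innodb_old_blocks%';
    -- innodb_old_blocks_pct   37     (share of the pool used for the "old" sublist)
    -- innodb_old_blocks_time  1000   (ms a page must stay in the old sublist before promotion)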
What do you mean by partitioning a table? If you mean the builtin PARTITION mechanism, then so what? If you scan a table you are scanning all the partitions. Same number of blocks; same impact on the cache.
I have dealt with sets of tables that exceed the buffer_pool by a factor of 10 or more. I can discuss performance techniques, but I need a specific SHOW CREATE TABLE (with or without PARTITIONs) and some of the naughty queries (such as table scans).
The Optimizer chooses between doing a table scan and using an index based on a variety of statistics, etc. A Rule of Thumb is that, if more than 20% of the rows need to be touched, it will do a table scan instead of bouncing between the index and the data. (Note: the cutoff is much higher than the 1% you mentioned.)
An Index is structured as a BTree in 16KB blocks, so it is very efficient to start in the middle and scan a range. For example: INDEX(last_name) for WHERE last_name LIKE 'J%' would probably do a "range scan" of 10% of the index, even if that involved bouncing over to the table a lot.
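A hedged illustration of that last point (table and column names are made up; the exact plan depends on statistics):

    -- Hypothetical table with a secondary index on last_name
    ALTER TABLE people ADD INDEX idx_last_name (last_name);
    -- The optimizer may choose a range scan on the index...
    EXPLAIN SELECT * FROM people WHERE last_name LIKE 'J%';
    -- ...or a full table scan if it estimates too large a fraction of rows will match.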

Low cardinality column index VS table overheads

I have a table which holds 70 thousand rows and is planned to slowly grow to about 140 thousand within several months.
I have 4 columns with low cardinality that contain 0/1 values, as in FALSE/TRUE. I have table overheads (after optimization) of 28 MB with a table size of 6 MB. I have added 4 separate simple indexes to those 4 columns. My overheads dropped to 20 MB.
I understand that indexing a low-cardinality column (where there are many rows, but few distinct values) has almost no effect on query performance, yet my overheads dropped. And the overheads increase without these indexes. Should I keep lower overheads, or should I rather keep potentially pointless indexes? Which affects performance the most?
P.S. Table is mainly read with variable load ranging from thousands of queries per minute to hundreds of queries per day. Writes are mainly updates of these 4 boolean columns or one timestamp column.
Indices aren't pointless when you approach table sizes of tens of millions of rows -- but you will only see marginal improvements in query performance at the table size you are dealing with now.
You're better off leaving the indices the way they are and reconsidering your DB schema. A query shouldn't use 20+ MB of memory, and its performance will only snowball into a much bigger problem as the DB grows.
That said, jumping from 70k rows to 150k rows is not a huge leap in your typical MySQL database. If performance is already a concern, there is already a much larger problem at play here. If you are storing large blobs in your DB, for example, you may be better off storing your data in a file and saving its location as a varchar field in your table.
One other thing to consider, if you absolutely have to keep your DB schema exactly the way it is, is to consider partitioning your data. You can typically partition your table by ID's or datetime, and see a considerable improvement in performance.
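If you want to check whether the optimizer is even using one of those boolean indexes (table and column names here are placeholders), EXPLAIN will show it:

    EXPLAIN SELECT * FROM my_table WHERE is_active = 1;
    -- If "key" is NULL or "rows" is close to the full table count,
    -- the low-cardinality index is contributing little.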

Insertion performance degrades with a large index (MySQL)

Recently, I found that one of the servers has high I/O traffic on disk. After some diagnostics, the high I/O turned out to be due to writing the indexes of a certain table. I have done several evaluation tests and found that MySQL performs a high number of disk writes when inserting records into a table that has a large index.
The data types of the indexed columns are varchar(15) and varchar(17); both indexes are non-unique.
There are only 80 disk writes if I load 20000 records into the table when it holds 10000 records, whereas there are 1700 disk writes when the table has grown to 20 million rows (with about 1 million distinct values in the indexed columns), even though the number of records being inserted is the same.
The engine is MyISAM.
Increasing the size of the indexes also increases the number of disk writes per insert.
Is this BTREE index behavior, and how can I solve this issue?
Use InnoDB instead of MyISAM.
InnoDB helps by buffering writes to secondary indexes, merging them if possible, and delaying the expensive I/O. You can read more about this feature in the MySQL Manual under Controlling InnoDB Change Buffering.
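The feature is controlled by a server variable; a minimal check/adjust sketch (the default has historically been 'all', though newer versions have changed it, so verify against your manual):

    SHOW GLOBAL VARIABLES LIKE 'innodb_change_buffering';
    -- Possible values include: none, inserts, deletes, changes, purges, all
    SET GLOBAL innodb_change_buffering = 'all';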
Re your comment:
Inserting a new value into a B-Tree can be expensive. If there's no room at the leaf level, the insertion may cause a cascading effect of splitting the non-leaf nodes of the tree, potentially all the way up to the top of the tree. That can cause a lot of I/O, since different nodes of the tree may be stored far apart from one another on disk.
Other mitigating strategies are to make the table smaller by moving less-used data to another table, or to use MySQL table partitioning to make the one logical table composed of many individual physical tables. Each such sub-table must have the same indexes, but then each individual index will be smaller.
There's an animated example here:
http://www.bluerwhite.org/btree/
Look at the example "Inserting Key 33 into a B-Tree (w/ Split)" where it shows the steps of inserting a value into a B-tree node that overfills it, and what the B-tree does in response.
Now imagine that the example illustration only shows the bottom part of a B-tree that is much deeper (as would be the case if your index B-tree has millions of entries): filling the parent node can itself be an overflow, forcing the splitting operation to continue up to the higher levels of the tree. This can continue all the way to the very top of the tree if all the ancestor nodes up to the top were already filled.
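As a concrete sketch of the partitioning suggestion above (table and column names are hypothetical, and the partitioning column must be part of every unique key, here assumed to be an integer PRIMARY KEY):

    -- Split one logical table into 8 physical pieces, each with its own smaller indexes
    ALTER TABLE big_table
        PARTITION BY HASH(id)
        PARTITIONS 8;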

MySQL: OPTIMIZE TABLE needed on table with fixed columns?

I have a weekly script that moves data from our live database and puts it into our archive database, then deletes the data it just archived from the live database. Since it's a decent size delete (about 10% of the table gets trimmed), I figured I should be running OPTIMIZE TABLE after this delete.
However, I'm reading this from the mysql documentation and I don't know how to interpret it:
http://dev.mysql.com/doc/refman/5.1/en/optimize-table.html
"OPTIMIZE TABLE should be used if you have deleted a large part of a table or if you have made many changes to a table with variable-length rows (tables that have VARCHAR, VARBINARY, BLOB, or TEXT columns). Deleted rows are maintained in a linked list and subsequent INSERT operations reuse old row positions. You can use OPTIMIZE TABLE to reclaim the unused space and to defragment the data file."
The first sentence is ambiguous to me. Does it mean you should run it if:
A) you have deleted a large part of a table with variable-length rows or if you have made many changes to a table with variable-length rows
OR
B) you have deleted a large part of ANY table or if you have made many changes to a table with variable-length rows
Does that make sense? So if my table has no VAR columns, do I need to run it still?
While we're on the subject - is there any indicator that tells me that a table is ripe for an OPTIMIZE call?
Also, I read this http://www.xaprb.com/blog/2010/02/07/how-often-should-you-use-optimize-table/ which says that running OPTIMIZE TABLE is only useful for the primary key. If most of my selects use other indices, am I just wasting effort on tables that have a surrogate key?
Thanks so much!
In your scenario, I do not believe that regularly optimizing the table will make an appreciable difference.
First things first, your second interpretation (B) of the documentation is correct - "if you have deleted a large part of ANY table OR if you have made many changes to a table with variable-length rows."
If your table has no VAR columns, each record, regardless of the data it contains, takes up the exact same amount of space in the table. If a record is deleted from the table, and the DB chooses to reuse the exact area the previous record was stored, it can do so without wasting any space or fragmenting your data.
As far as whether OPTIMIZE only improves performance on a query that utilizes the primary key index, that answer would almost certainly vary based on what storage engine is in use, and I'm afraid I wouldn't be able to answer that.
However, speaking of storage engines, if you do end up using OPTIMIZE, be aware that it doesn't like to run on InnoDB tables, so the command maps to ALTER and rebuilds the table, which might be a more expensive operation. Either way, the table locks during the optimizations, so be very careful about when you run it.
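For what it's worth, on InnoDB the statement is documented as mapping to a table rebuild; the two forms below should be roughly equivalent (table name is just an example):

    OPTIMIZE TABLE archive_db.big_log;
    -- is treated roughly as
    ALTER TABLE archive_db.big_log FORCE;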
There are so many differences between MyISAM and InnoDB, I am splitting this answer in two:
MyISAM
FIXED has some meaning for MyISAM.
"Deleted rows are maintained in a linked list and subsequent INSERT operations reuse old row positions" applies to MyISAM, not InnoDB. Hence, for MyISAM tables with a lot of churn, OPTIMIZE can be beneficial.
In MyISAM, VAR plus DELETE/UPDATE leads to fragmentation.
Because of the linked list and VAR, a single row can be fragmented across the data file (.MYD). (Otherwise, a MyISAM row is contiguous in the data file.)
InnoDB
FIXED has no meaning for InnoDB tables.
For VAR in InnoDB, there are "block splits", not a linked list.
In a BTree, block splits stabilize at an average of 69% full. So, with InnoDB, almost any abuse will leave the table not too bloated. That is, DELETE/UPDATE (with or without VAR) leads to the more limited BTree 'fragmentation'.
In InnoDB, emptied blocks (16KB each) are put on a "free list" for reuse; they are not given back to the OS.
Data in InnoDB is ordered by the PRIMARY KEY, so deleting a row in one part of the table does not provide space for a new row in another part of the table. But, when a block is freed up, it can be used elsewhere.
Two adjacent blocks that are half empty will be coalesced, thereby freeing up a block.
Both
If you are removing "old" data (your 10%), then PARTITIONing is a much better way to do it. See my blog. It involves DROP PARTITION, which is instantaneous and gives space back to the OS, plus REORGANIZE PARTITION, which can be instantaneous.
OPTIMIZE TABLE is almost never worth doing.