MariaDB, do new inserts replace deleted rows on disk? - mysql

Can someone point me in the right direction? I can't find any documentation on this behavior.
We know when you delete rows from a table you end up with "holes" which you can defrag with OPTIMIZE. Do new inserts automatically fill in those holes if left alone? Is there a way to force that behavior if not? Using InnoDB tables for revolving logs, deleting old rows and adding new, would the table roll over or continuously consume disk space? Or would a different engine be better suited for this?
Yes, I know of table partitions; I want to explore all options first.

Since this is mostly a non-issue, I will assume you are asking for academic reasons?
InnoDB (you should be using that Engine!) stores the data (and each secondary index) in separate B+Trees.
The data's BTree is ordered by the PRIMARY KEY. The various leaf nodes will be filled to different degrees, based on the order of inserts, deletes, updates (that change the row length), temporary transactional locks on rows, etc, etc.
That last one is because one transaction sees effectively an instantaneous snapshot of the data, possibly different than another transaction's view. This implies that multiple copies of a row may coexist.
The buffer_pool holds 16KB blocks. Each block holds a variable number of rows. This number changes with the changing tides. If two adjacent blocks become "too empty", they will be combined.
Totally empty blocks (say, due to lots of deletes) will be put on a free chain for later reuse by Inserts. But note that the disk used by the table will not shrink.
The failure to shrink is usually not a problem -- most tables grow; any shrinkage is soon followed by a new growth spurt.
PARTITIONs are usually not worth using. However, that is the best way to "keep data for only 90 days", then use DROP PARTITION instead of a big, slow DELETE. (That is about the only use for PARTITION.)
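For example, a rough sketch of that "keep only 90 days" pattern (the table, columns, and partition names here are only illustrative, not from the question):
CREATE TABLE log_data (
  id        BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
  logged_at DATETIME NOT NULL,
  msg       VARCHAR(255),
  PRIMARY KEY (id, logged_at)   -- the partitioning column must be part of the PK
) ENGINE=InnoDB
PARTITION BY RANGE (TO_DAYS(logged_at)) (
  PARTITION p2024_01 VALUES LESS THAN (TO_DAYS('2024-02-01')),
  PARTITION p2024_02 VALUES LESS THAN (TO_DAYS('2024-03-01')),
  PARTITION pmax     VALUES LESS THAN MAXVALUE
);
-- Dropping a whole month is nearly instantaneous and returns its space to the OS,
-- unlike a big, slow DELETE ... WHERE logged_at < '2024-02-01'
ALTER TABLE log_data DROP PARTITION p2024_01;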
If you add up all the bytes in the INTs (4 bytes each), VARCHARs (pick the average length), etc, etc, you will get what seems like a good estimate for the disk space being used. But due to the things discussed above, you need to multiply that number by 2 to 3 to get a better estimate of the disk space actually consumed by the table.
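If you want to compare such a back-of-the-envelope estimate with what is actually allocated, information_schema reports the allocated sizes (replace the schema and table names with your own):
SELECT table_name,
       data_length,    -- bytes allocated for the clustered index (the data)
       index_length,   -- bytes allocated for secondary indexes
       data_free       -- free space inside the tablespace
  FROM information_schema.tables
 WHERE table_schema = 'your_db'
   AND table_name   = 'your_table';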

Related

Is an auto inc primary key needed here to avoid fragmentation?

I have a table such as follows:
CREATE TABLE Associations (
  obj_id int unsigned NOT NULL,
  attr_id int unsigned NOT NULL,
  assignment DOUBLE NOT NULL,
  PRIMARY KEY (`obj_id`, `attr_id`)
);
Now the insertion order for the rows is/will be random. Would such a definition lead to fragmentation of the table? Should I be adding an auto inc primary key or would that only speed up the insert and would not help the speed of SELECT queries?
What would a better table definition be for random inserts?
Note that, performance-wise, I am more interested in SELECT than INSERT.
(Assuming you are using ENGINE=InnoDB.)
Short answer: Do not fret about fragmentation.
Long answer:
There are two types of "fragmentation" -- Which one bothers you?
BTree blocks becoming less than full.
Blocks becoming scattered around the disk.
If you have an SSD disk, the scattering of blocks around the disk has no impact on performance. For HDD, it matters some, but still not enough to get very worried about.
Fragmentation does not "run away". If two adjacent blocks are seen to be relatively empty, they are combined. Result: The "average" block is about 69% full.
In your particular example, when you want multiple "attributes" for one "object", they will be found "clustered". That is, they will be mostly in the same block, hence a bit faster to access. Adding id AUTO_INCREMENT PRIMARY KEY would slow SELECTs/UPDATEs down.
Another reason why an id would slow down SELECTs is that SELECT * FROM t WHERE obj_id=... needs to first find the item in the index, then reach into the data for the other columns. With PRIMARY KEY(obj_id, ...), there is no need for this extra hop. (In some situations, this is a big speedup.)
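A quick sketch of that difference, using the table from the question:
-- With PRIMARY KEY(obj_id, attr_id), this query drills down the clustered index
-- once; the requested columns are already in the leaf block it lands on.
SELECT * FROM Associations WHERE obj_id = 42;
-- With a surrogate `id` PK plus a secondary index on (obj_id, attr_id), the same
-- query first walks the secondary index, then does an extra lookup by `id` into
-- the clustered index for each matching row to fetch `assignment`.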
OPTIMIZE TABLE takes time and blocks access while you are running it.
Even after OPTIMIZE, fragmentation comes back -- for a variety of reasons.
"Fill factor" is virtually useless -- UPDATE and DELETE store extra copies of rows pending COMMIT. This leads to block splits (aka page splits) if fill_factor is too high or sparse blocks if too low. That is, it is too hard to be worth trying to tune.
Fewer indexes means less disk space, etc. You probably need an index on (obj_id, attr_id) whether or not you also have (id). So, why waste space when it does not help?
The one case where OPTIMIZE TABLE can make a noticeable difference is after you delete lots of rows. I discuss several ways to avoid this issue here: http://mysql.rjweb.org/doc.php/deletebig
I guess you use the InnoDB access method. InnoDB stores its data in a so-called clustered index. That is, all the data is stashed away behind the BTREE primary key.
Read this for background.
When you insert a row, you're inserting it into the BTREE structure. To oversimplify, BTREEs are made up of elaborately linked pages accessible in order. That means your data goes into some page somewhere. When you insert data in primary-key order, the data goes into a page at the end of the BTREE. So, when a page fills up, InnoDB just makes another one and puts your data there.
But, when you insert in some other order, often your row must go between other rows in an existing BTREE page. If the page has enough free space, InnoDB can drop your data into it. But, if the page does not have enough space, InnoDB must do a page split. It makes two pages from one, and puts your new row into one of the two.
Doing inserts in some order other than index order causes more page splits. That's why it doesn't perform as well. The classic example is building a table with a UUIDv4 (random) primary key column.
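To make the contrast concrete, here is a minimal sketch (UUID() is used only to stand in for a randomly ordered key):
-- Keys arrive in increasing order: each INSERT lands at the "end" of the clustered index
CREATE TABLE t_seq (id BIGINT UNSIGNED AUTO_INCREMENT PRIMARY KEY, payload VARCHAR(100)) ENGINE=InnoDB;
INSERT INTO t_seq (payload) VALUES ('x');
-- Keys arrive in effectively random order: each INSERT lands in an arbitrary leaf page,
-- so full pages must be split to make room
CREATE TABLE t_rand (id CHAR(36) PRIMARY KEY, payload VARCHAR(100)) ENGINE=InnoDB;
INSERT INTO t_rand VALUES (UUID(), 'x');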
Now, you asked about autoincrementing primary keys. If you have such a key in your InnoDB table, all (or almost all) your INSERTs go into the last page of the clustered index, so you don't get the page split overhead. Cool.
But, if you need an index on some other column or columns that aren't in your INSERT order, you'll get page splits in that secondary index. The entries in secondary indexes are often smaller than the ones in clustered indexes, so you get fewer page splits. But you still get them.
Some DBMSs, but not MySQL, let you declare FILL_PERCENT(50) or something in both clustered and secondary indexes. That's useful for out-of-order loads because you can make your pages start out with less space already used, so you get fewer page splits. (Of course, you use more RAM and SSD with lower fill factors.)
MySQL doesn't have FILL_FACTOR in its data definition language. It does have a global systemwide variable called innodb_fill_factor. It is a percentage number. Its default is 100, which actually means 1/16th of each page is left unused.
If you know you have to do a big out-of-index-order bulk load, you can give this command first to leave 60% of each new page available, to reduce page splits.
SET GLOBAL innodb_fill_factor = 40;
But beware, this is a system-wide setting. It will apply to everything on your MySQL server. You might want to put it back when done to save RAM and SSD space in production.
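For example, restoring the default afterwards is just:
SET GLOBAL innodb_fill_factor = 100;  -- back to the default mentioned above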
Finally, OPTIMIZE TABLE tablename; can reorganize tables that have had a lot of page splits to clean them up. (In InnoDB the OPTIMIZE command actually maps to ALTER TABLE tablename FORCE; ANALYZE TABLE tablename;.) It can take a while, so beware.
When you OPTIMIZE, InnoDB remakes the pages to bring their fill percentages near to the number you set in the system variable.
Unless you're doing a really vast bulk load on a vast table, my advice is to not worry about all this fill percentage business. Design your table to match your application and don't look back.
When you're done with any bulk load you can, if you want, do OPTIMIZE TABLE to get rid of any gnarly page splits.
Edit Your choice of primary key is perfect for your queries' WHERE pattern obj_id IN (val, val, val). Don't change that primary key, especially not to an autoincrementing one.
Pro tip It's tempting to try to foresee scaling problems in the early days of an app's lifetime. And there's no harm in it. But in the case of SQL databases, it's really hard to foresee the actual query patterns that will emerge as your app scales up. Fortunately, SQL's designed so you can add and tweak indexes as you go. You don't have to achieve performance perfection on day 1. So, my advice: think about this issue, but avoid overthinking it. With respect, you're starting to overthink it.

Does deleting rows from a table affect db performance?

As a MySQL database user,
I'm working on a script that uses MySQL tables with auto-increment primary keys, where users may need to remove (lots of) rows: mistaken, duplicated, canceled data and so on.
For now, I use a tinyint column named 'delete' in each table and update rows to delete=1 instead of deleting them.
Considering that the deleted data is not important,
which way do you suggest for a better database and better performance?
Does deleting (maybe lots of) rows every day affect SELECT queries on large tables?
Is it better to delete the rows instantly?
Or keep the rows using the 'delete' column and delete them, for example, monthly, then re-index the data?
I've searched about this, but most of the results were based on personal opinions or preferences rather than referenced or tested data.
PS) Edit:
Referring to the question and the picture below, there's one more point to ask on this topic, and I would be grateful if you could guide me.
Deleting a row (row 6) while the auto-increment counter was at 225 led the unsorted table to show the next inserted row, with id=225, in the deleted row's place (at least visually!). If deletions happen lots of times, the primary key column and its rows will end up completely out of order.
Should this be considered a good point of the database (filling up the deleted spaces), or something bad that reduces performance, or neither, with what is displayed not mattering at all?
Thanks.
What percentage of the table is "deleted"?
If it is less than, say, 20%, it would be hard to measure any difference between a soft "deleted=1" and a hard "DELETE FROM tbl". The disk space would probably be the same. A 16KB block would either have soft-deleted rows to ignore, or the block would be not "full".
Let's say 80% of the rows have been deleted. Now there are some noticeable differences.
In the "soft-delete" case, a SELECT will be looking at 5 rows to find only 1. While this sounds terrible, it does not translate into 5 times the effort. There is overhead for fetching a block; if it contains 4 soft-deleted rows and 1 useful row, that overhead is shared. Once a useful row is found, there is overhead to deliver that row to the client, but that applies only to the 1 row.
In the "hard-delete" case, blocks are sometimes coalesced. That is, when two "adjacent" blocks become less than half full, they may be combined into a single block. (Or so the documentation says.) This helps to cut down on the number of blocks that need to be touched. But it does not shrink the disk space -- hard-deleted rows leave space that can be reused; deleted blocks can be reused. Blocks are not returned to the OS.
A "point-query" is a SELECT where you specify exactly the row you want (eg, WHERE id = 123). That will be very fast with either type of delete. The only possible change is if the BTree is a different depth. But even if 80% of the rows are deleted, the BTree is unlikely to change in depth. You need to get to about 99% deleted before the depth changes. (A million rows has a depth of about 3; 100M -> 4.)
"Range queries (eg, WHERE blah BETWEEN ... AND ...) will notice some degradation if most are soft-deleted -- but, as already mentioned, there is a slight degradation in either deletion method.
So, is this my "opinion"? Yes. But it is based on an understanding of how InnoDB tables work. And it is based on "experience" in the sense that I have detected nothing to significantly shake this explanation in about 19 years of using InnoDB.
Further... With hard-delete, you have the option of freeing up the free space with OPTIMIZE TABLE. But I have repeatedly said "don't bother" and elaborated on why.
On the other hand, if you need to delete a big chunk of a table (either one-time or repeatedly), see my blog on efficient techniques: http://mysql.rjweb.org/doc.php/deletebig
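One common technique for that situation is deleting in modest chunks instead of one huge statement, repeated until no rows are affected (a sketch; the table, column, and 90-day cutoff are only examples):
DELETE FROM live_log
 WHERE created_at < NOW() - INTERVAL 90 DAY
 LIMIT 1000;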
(Re: the PS)
SELECT without an ORDER BY -- It is 'fair game' for the query to return the rows in any order it feels like. If you want a certain order, add ORDER BY.
What Engine is being used? MyISAM and InnoDB work differently; neither is predictable without ORDER BY.
If you wanted the new entry to have id=6, that is a different problem. (And I will probably argue against designing the ids like that.)
The simple answer is no. DBMSs are designed to handle changes at any time while keeping performance up. Sometimes there will be a small effect, but it is nothing to worry about.

mysql innodb optimize - fails after hours

services.profile optimize note Table does not support optimize, doing recreate + analyze instead
services.profile optimize error Creating index 'PRIMARY' required more than 'innodb_online_alter_log_max_size' bytes of modification log. Please try again.
services.profile optimize status Operation failed
The table is 300GB large with indexes.
The variable MySQL complains about, after working for 3 HOURS:
innodb_online_alter_log_max_size 5500000000
The table is not being written to more than a few MB in that time.
What is the problem with InnoDB/MySQL that a simple OPTIMIZE of a 300GB table fails after 3 hours of "work" because a 5.5GB buffer filled up?
Don't use OPTIMIZE TABLE on InnoDB tables -- it provides little, if any, benefit.
InnoDB suffers from some fragmentation, but not enough to be worth the downtime of defragmenting. And the data will quickly become fragmented again.
The main cause of fragmentation is "block splits". For example, when you add a row to a 16KB block which is 'full', the block is split into two: the, say, 89 rows plus 1 new row in one block become, say, 45 rows in each of two blocks. As you continue to insert rows, these (and other) blocks gradually fill up until they split again. After a lot of such inserts, the table becomes about 69% full.
So, you say, won't that slow things down a lot? No. Point queries drill down a BTree -- a relatively constant time. Range scans hit more blocks, but the number of rows scanned does not change. Etc.
Also, InnoDB will combine two adjacent blocks that are "too empty", thereby avoiding (usually) some worst case scenarios. If you DELETE lots of rows, blocks may get rather empty. This "combining" keeps the fragmentation under control.
If you "fragmentation" refers to the blocks being scattered around the disk, well, that is not cured by OPTIMIZE TABLE. And any block split will use a new block from 'anywhere'.
UPDATE is somewhere between INSERT (if the text in the row grows) and DELETE (if the data shrinks).
(There are many more details that I left out.)

Insertion performance degrade with large index (MYSQL)

Recently, I found that one of the servers has high I/O traffic on disk. After some diagnostics, the high I/O turned out to be due to writing indexes on a certain table. I have done several evaluation tests and found that MySQL performs a high number of writes when inserting records into a table that has a large index.
The data types of the indexed columns are varchar(15) and varchar(17); both are non-unique indexes.
There are only 80 writes on disk if I load 20,000 records into the table when it holds 10,000 records, whereas there are 1,700 writes on disk when the table grows to 20 million rows (with about 1 million distinct values in the indexed columns), even though the number of records being inserted is the same.
Engine is MyISAM.
Increasing the size of the indexes also increases the number of writes to disk per insert.
Is this BTREE index behavior, and how can I solve the issue?
Use InnoDB instead of MyISAM.
InnoDB helps by buffering writes to secondary indexes, merging them if possible, and delaying the expensive I/O. You can read more about this feature in the MySQL Manual under Controlling InnoDB Change Buffering.
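If you want to check or adjust that behavior (on versions where the variable still exists; it is deprecated in recent MySQL releases):
SHOW GLOBAL VARIABLES LIKE 'innodb_change_buffering';
SET GLOBAL innodb_change_buffering = 'all';  -- buffer insert, delete-mark, and purge operations
-- The "INSERT BUFFER AND ADAPTIVE HASH INDEX" section of SHOW ENGINE INNODB STATUS
-- reports how many changes are being buffered and merged.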
Re your comment:
Inserting a new value into a B-Tree can be expensive. If there's no room at the leaf level, the insertion may cause a cascading effect of splitting the non-leaf nodes of the tree, potentially all the way up to the top of the tree. That can cause a lot of I/O, since different nodes of the tree may be stored far apart from one another on disk.
Other mitigating strategies are to make the table smaller, by moving less-used data to another table, or to use MySQL table partitioning so that one logical table is composed of many individual physical tables. Each such sub-table must have the same indexes, but then each individual index will be smaller.
There's an animated example here:
http://www.bluerwhite.org/btree/
Look at the example "Inserting Key 33 into a B-Tree (w/ Split)" where it shows the steps of inserting a value into a B-tree node that overfills it, and what the B-tree does in response.
Now imagine that the example illustration only shows the bottom part of a B-tree that is much deeper (as would be the case if your index B-tree has millions of entries). Filling the parent node can itself cause an overflow and force the splitting operation to continue up to the higher levels of the tree. This can continue all the way to the very top if all the ancestor nodes up to the top of the tree were already filled.

MySQL: OPTIMIZE TABLE needed on table with fixed columns?

I have a weekly script that moves data from our live database and puts it into our archive database, then deletes the data it just archived from the live database. Since it's a decent size delete (about 10% of the table gets trimmed), I figured I should be running OPTIMIZE TABLE after this delete.
However, I'm reading this from the mysql documentation and I don't know how to interpret it:
http://dev.mysql.com/doc/refman/5.1/en/optimize-table.html
"OPTIMIZE TABLE should be used if you have deleted a large part of a table or if you have made many changes to a table with variable-length rows (tables that have VARCHAR, VARBINARY, BLOB, or TEXT columns). Deleted rows are maintained in a linked list and subsequent INSERT operations reuse old row positions. You can use OPTIMIZE TABLE to reclaim the unused space and to defragment the data file."
The first sentence is ambiguous to me. Does it mean you should run it if:
A) you have deleted a large part of a table with variable-length rows or if you have made many changes to a table with variable-length rows
OR
B) you have deleted a large part of ANY table or if you have made many changes to a table with variable-length rows
Does that make sense? So if my table has no VAR columns, do I need to run it still?
While we're on the subject - is there any indicator that tells me that a table is ripe for an OPTIMIZE call?
Also, I read this http://www.xaprb.com/blog/2010/02/07/how-often-should-you-use-optimize-table/ which says running OPTIMIZE TABLE is only useful for the primary key. If most of my selects are from other indices, am I just wasting effort on tables that have a surrogate key?
Thanks so much!
In your scenario, I do not believe that regularly optimizing the table will make an appreciable difference.
First things first, your second interpretation (B) of the documentation is correct - "if you have deleted a large part of ANY table OR if you have made many changes to a table with variable-length rows."
If your table has no VAR columns, each record, regardless of the data it contains, takes up the exact same amount of space in the table. If a record is deleted from the table, and the DB chooses to reuse the exact area the previous record was stored, it can do so without wasting any space or fragmenting your data.
As far as whether OPTIMIZE only improves performance on a query that utilizes the primary key index, that answer would almost certainly vary based on what storage engine is in use, and I'm afraid I wouldn't be able to answer that.
However, speaking of storage engines, if you do end up using OPTIMIZE, be aware that it doesn't like to run on InnoDB tables, so the command maps to ALTER and rebuilds the table, which might be a more expensive operation. Either way, the table locks during the optimization, so be very careful about when you run it.
There are so many differences between MyISAM and InnoDB, I am splitting this answer in two:
MyISAM
FIXED has some meaning for MyISAM.
"Deleted rows are maintained in a linked list and subsequent INSERT operations reuse old row positions" applies to MyISAM, not InnoDB. Hence, for MyISAM tables with a lot of churn, OPTIMIZE can be beneficial.
In MyISAM, VAR plus DELETE/UPDATE leads to fragmentation.
Because of the linked list and VAR, a single row can be fragmented across the data file (.MYD). (Otherwise, a MyISAM row is contiguous in the data file.)
InnoDB
FIXED has no meaning for InnoDB tables.
For VAR in InnoDB, there are "block splits", not a linked list.
In a BTree, block splits stabilize at an average of 69% full. So, with InnoDB, almost any abuse will leave the table not too bloated. That is, DELETE/UPDATE (with or without VAR) leads to the more limited BTree 'fragmentation'.
In InnoDB, emptied blocks (16KB each) are put on a "free list" for reuse; they are not given back to the OS.
Data in InnoDB is ordered by the PRIMARY KEY, so deleting a row in one part of the table does not provide space for a new row in another part of the table. But, when a block is freed up, it can be used elsewhere.
Two adjacent blocks that are half empty will be coalesced, thereby freeing up a block.
Both
If you are removing "old" data (your 10%), then PARTITIONing is a much better way to do it. See my blog. It involves DROP PARTITION, which is instantaneous and gives space back to the OS, plus REORGANIZE PARTITION, which can be instantaneous.
OPTIMIZE TABLE is almost never worth doing.