Postgres usage of btree indexes vs MySQL B+trees - mysql

We are in the process of migrating from MySQL to PGSQL and we have a 100 million row table.
When I was trying to ascertain how much space both systems use, I found relatively little difference for the tables, but huge differences for the indexes.
MySQL's indexes were occupying more space than the table data itself, while Postgres was using considerably less.
When digging through for the reason, I found that MySQL uses B+ trees to store the indexes and postgres uses B-trees.
MySQL's usage of indexes is a little different: it stores the row data along with the index (hence the increased size), but Postgres doesn't.
Now the questions:
Comparing B-trees and B+trees in database terms, is it better to use B+trees since they handle range queries in O(log N) + O(m), where m is the number of entries in the range and the initial lookup is logarithmic?
In B-trees the lookup is logarithmic, but for range queries it supposedly shoots up towards O(N), since there is no underlying linked list connecting the leaf/data nodes. With that said, why does Postgres use B-trees? Does it perform well for range queries? (It does, but how does it handle them internally with B-trees?)
The above question is from a Postgres point of view; from a MySQL perspective, why does it use more storage than Postgres, and what is the real-world performance benefit of using B+trees?
I could have missed/misunderstood many things, so please feel free to correct my understanding here.
Edit answering Rick James's questions
I am using InnoDB engine for MySQL
I built the index after populating the data - same way I did in postgres
The indexes are not UNIQUE indexes, just normal indexes
There were no random inserts; I used CSV loading in both Postgres and MySQL, and only after that did I create the indexes.
The Postgres block size for both indexes and data is 8KB. I am not sure for MySQL; I didn't change it, so it must be the default.
I would not call the rows big: around 4 text fields about 200 characters long, 4 decimal fields, and 2 bigint fields 19 digits long.
The PK is a bigint column with 19 digits. I am not sure whether this counts as bulky? On what scale should we differentiate bulky vs non-bulky?
The MySQL table size was 600 MB and Postgres was around 310 MB, both including indexes - so Postgres is about 48% smaller (MySQL is nearly twice the size), if my math is right. But is there a way to measure the index size alone in MySQL, excluding the table size? That might lead to better numbers, I guess.
Machine info: I had enough RAM (256GB) to fit all the tables and indexes together, but I don't think we need to go down that route at all; I didn't see any noticeable performance difference between the two.
Additional Questions
When do we say fragmentation occurs? Is there a way to do de-fragmentation, so that we can say that beyond this point there is nothing more to be done? I am using CentOS, by the way.
Is there a way to measure the index size alone in MySQL, ignoring the primary key since it is clustered, so that we can actually see which type is occupying more space, if any?
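(For reference, a sketch of how the sizes can be broken out; 'your_db' and 'your_table' are placeholders. In MySQL/InnoDB, data_length covers the clustered PK B+Tree, rows included, and index_length covers only the secondary indexes, which answers the "index size alone" question; OPTIMIZE TABLE rebuilds the table and its indexes, which is the usual way to de-fragment InnoDB. The PostgreSQL functions report heap and index sizes separately.)

-- MySQL: secondary-index size, separate from the clustered data
SELECT table_name,
       ROUND(data_length  / 1024 / 1024, 1) AS data_mb,
       ROUND(index_length / 1024 / 1024, 1) AS index_mb
FROM   information_schema.tables
WHERE  table_schema = 'your_db' AND table_name = 'your_table';

-- MySQL: rebuild the table and its indexes (InnoDB implements this as a table rebuild)
OPTIMIZE TABLE your_table;

-- PostgreSQL: heap vs. index sizes
SELECT pg_size_pretty(pg_relation_size('your_table'))       AS heap_size,
       pg_size_pretty(pg_indexes_size('your_table'))        AS index_size,
       pg_size_pretty(pg_total_relation_size('your_table')) AS total_size;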

First, and foremost, if you are not using InnoDB, close this question, rebuild with InnoDB, then see if you need to re-open the question. MyISAM is not preferred and should not be discussed.
How did you build the indexes in MySQL? There are several ways to explicitly or implicitly build indexes; they lead to better or worse packing.
MySQL: Data and Indexes are stored in B+Trees composed of 16KB blocks.
MySQL: UNIQUE indexes (including the PRIMARY KEY) must be updated as you insert rows. So, a UNIQUE index will necessarily have a lot of block splits, etc.
MySQL: The PRIMARY KEY is clustered with the data, so it effectively takes zero space. If you load the data in PK order, then the block fragmentation is minimal.
Non-UNIQUE secondary keys may be built on the fly, which leads to some fragmentation. Or they can be constructed after the table is loaded; this leads to denser packing.
Secondary keys (UNIQUE or not) implicitly include the PRIMARY KEY in them. If the PK is "large" then the secondary keys are bulky. What is your PK? Is this the 'answer'?
In theory, totally random inserts into a BTree lead to the blocks being about 69% full. Maybe this is the answer. Is MySQL ~45% bigger (1/0.69 ≈ 1.45)?
With 100M rows, probably many operations are I/O-bound because you don't have enough RAM to cache all the data and/or index blocks needed. If everything is cached, then B-Tree versus B+Tree won't make much difference. Let's analyze what needs to happen for a range query when things are not fully cached.
With either type of Tree, the operation starts with a drill-down in the Tree. For MySQL, 100M rows will have a B+Tree of about 4 levels deep. The 3 non-leaf nodes (again 16KB blocks) will be cached (if they weren't already) and be reused. Even for Postgres, this caching probably occurs. (I don't know Postgres.) Then the range scan starts. With MySQL it walks through the rest of the block. (Rule of Thumb: 100 rows in a block.) Ditto for Postgres?
At the end of the block something different has to happen. For MySQL, there is a link to the next block; that block (with 100 more rows) is fetched from disk (if not cached). For a B-Tree, the non-leaf nodes need to be traversed again; 2, probably 3 levels are still cached. I would expect another non-leaf node to need fetching from disk only about once per 10K rows (10K = 100*100). That is, Postgres might hit the disk 1% more often than MySQL, even on a "cold" system.
On the other hand, if the rows are so fat that only 1 or 2 can fit in a 16K block, the "100" I kept using is more like "2", and the 1% becomes maybe 50%. That is, if you have big rows this could be the "answer". Is it?
What is the block size in Postgres? Note that many of the computations above depend on the relative size between the block and the data. Could this be an answer?
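(If it helps, the defaults can be confirmed directly on each side; both statements below are stock commands.)

SHOW block_size;                          -- PostgreSQL, default 8192
SHOW VARIABLES LIKE 'innodb_page_size';   -- MySQL/InnoDB, default 16384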
Conclusion: I've given you 4 possible answers. Would you like to augment the question to confirm or refute that each of these apply? (Existence of secondary indexes, large PK, inefficient building of secondary indexes, large rows, block size, ...)
Addenda about PRIMARY KEY
For InnoDB, another thing to note... It is best to have a PRIMARY KEY in the definition of the table before loading the data. It is also best to sort the data in PK order before LOAD DATA. Without specifying any PRIMARY KEY or UNIQUE key, InnoDB builds a hidden 6-byte PK; this is usually sub-optimal.
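(A minimal sketch of that loading order, with hypothetical table/column/file names: PK declared up front, input file already sorted by the PK, secondary index built afterwards.)

CREATE TABLE big_table (
    id    BIGINT NOT NULL,
    name  VARCHAR(200) NOT NULL,
    PRIMARY KEY (id)
) ENGINE=InnoDB;

LOAD DATA INFILE '/tmp/data_sorted_by_id.csv'
    INTO TABLE big_table
    FIELDS TERMINATED BY ',';

ALTER TABLE big_table ADD INDEX idx_name (name);   -- built after the load, packs more densely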

In databases you often have queries that deliver a range of data, like ids from 100 to 200.
In this case:
a B-Tree needs to follow the path from the root to the leaves for every single entry to get the data pointer;
a B+Tree can 'walk' through the leaves and has to follow the path down from the root only for the first entry (i.e. for id 100).
This is because B+Trees store the data (or data pointers) only in the leaves, and the leaves are linked, so you can perform a rapid in-order traversal.
B+-Tree
Another point is:
in B+Trees the inner nodes store only pointers to other nodes, without any data pointers, so more node pointers fit in a memory page and fewer I/O operations are needed.
So for range queries B+Trees are the optimal data structure. For single-row lookups B-Trees might be better (because of the smaller depth/size of the tree), since the data pointers are also located inside the inner nodes.
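(For concreteness, the kind of range query being discussed; table and column names are hypothetical. On a MySQL table with id indexed, EXPLAIN reports access type "range", which for InnoDB corresponds to one drill-down followed by a walk along the linked leaf blocks as described above.)

EXPLAIN SELECT * FROM orders WHERE id BETWEEN 100 AND 200;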

MySQL and PostgreSQL aren't really comparable here. InnoDB uses an index to store the table data (and secondary indexes just point at the pkey). This is great for single-row pkey lookups and, with B+ trees, does OK for range queries on the pkey field, but it has performance drawbacks for everything else.
PostgreSQL uses heap tables and stores indexes separately. It supports a number of different indexing algorithms. Depending on your range query, a btree index may not help you and you may need a GiST index instead. Similarly, GIN indexes work well for membership lookups (arrays, full-text search, etc.).
I think btree is used because it excels at the simple use case: what rows contain the following data? This becomes a building block of GIN, for example.
But it isn't true that PostgreSQL cannot use B+ trees. GiST is built on B+ Tree indexes in a generalized format. So PostgreSQL gives you the option to use B+ trees where they come in handy.
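(A minimal sketch of that flexibility, with hypothetical table/column names: btree for ordinary ordered lookups, GIN for array membership, GiST for range overlap/containment.)

CREATE TABLE events (
    id         bigint PRIMARY KEY,
    created_at timestamptz NOT NULL,
    tags       text[],
    period     tstzrange
);
CREATE INDEX events_created_btree ON events USING btree (created_at);
CREATE INDEX events_tags_gin      ON events USING gin   (tags);       -- membership: tags @> ARRAY['x']
CREATE INDEX events_period_gist   ON events USING gist  (period);     -- overlap/containment: period && ...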

Related

MySQL indexing and data access time complexity

In MySQL I read that the data access time complexity when we have an index on a specific column is log(n) because a BTree is used. Is there any scenario in which data access time becomes more than log(n), for example O(n)? I ask because when we insert data in sorted order into a BTree, does the tree grow on one side and access complexity grow to O(n)? Do they have any policy for inserting data into this index BTree? Thanks for answering.
Some things to note:
Disk accesses are far more important (to performance) than O().
A "Rule of Thumb": A node has 100 sub-nodes (or leaf elements). Hence...
In a typical InnoDB BTree (data or index), a million-row table will be only about 3 levels deep. For a trillion rows, about 6 levels. That is the primary place where "log n" comes into play.
BTrees, because they grow bottom-up (leaf splits propagate upward), stay balanced.
In MySQL, don't worry about the BTrees; there are worse things to deal with -- indexing, formulation of queries, etc.
InnoDB uses B+Trees, making index-scans reasonably efficient by not having to drill-down the tree each time it exhausts the elements of a node.
Wikipedia is another useful reference.

Mysql/mariadb innodb: does row size affect complex query performance?

I have an InnoDB table with millions of rows (statistics events) on my MariaDB 10 server. Each row historically has a long user-id char(44) field (used as a non-unique key) along with about 30 other int/varchar fields (row size is about 240 bytes). My system does cohort analysis, funnels, event segmentation and other common statistics, so some queries are very complex with many joins. Now I have an opportunity to add a 4-byte int field and use it as the user-id and as the main non-unique key for all queries. But I need to keep the old symbolic char(44) user-id in this table because of implementation details - some data sources are not mine and send events only with symbolic user-ids.
So the question is: will keeping or removing this char(44) field, in general, affect the performance of complex queries? It will just stay there like the other char fields and will not be used as a key in queries anymore. I'd prefer not to split the table because there is a lot of code depending on its structure.
Thanks!
I tested Aria and found that it is ~1.5x slower than InnoDB for my purposes, even on simple joins. InnoDB with the "redundant" row format works even faster. So no, Aria is not a compromise; it is even slower than MyISAM. I suppose InnoDB is XtraDB in MariaDB 10, which explains the speed.
I also did some testing on a self-join query and found that leaving or removing the char(44) field has no effect on query performance if we're not using this field.
And moving from the char(44) key to an int key makes queries 2x faster!
Switching to a shorter integer key will help query performance a little bit. The indexing overhead of fixed length character columns isn't hideous.
Stuffing more RAM and/or some SSD disks into your database server will most likely cost less than refactoring your program, as you have mentioned.
What will really help your query performance is the creation of appropriate compound covering indexes. If you have queries that can be satisfied just from such an index, things will get faster.
For example, if you do a lot of
SELECT integer_user_id
FROM table
WHERE character_user_id = 'constant'
then a compound index on (character_user_id, integer_user_id) will cover this query and make it very fast.
Be careful when you add lots of indexes: there's a penalty to pay upon INSERT or UPDATE in tables with many indexes.
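(A hedged sketch of the change being discussed; the table name is hypothetical and the column names follow the example above. The compound index covers the lookup, and the old char(44) column simply remains in the row.)

ALTER TABLE events_stats
    ADD COLUMN integer_user_id INT UNSIGNED NOT NULL,
    ADD INDEX  idx_char_to_int (character_user_id, integer_user_id);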

MySql partitoning vs indexing performance

In MySQL InnoDB, is there a performance advantage to partitioning the table compared to simply using an index?
Common considerations:
Is an Index the Best Solution?
An index isn’t always the right tool. At a high level, keep in mind that indexes are most effective when they help the storage engine find rows without adding more work than they avoid. For very small tables, it is often more effective to simply read all the rows in the table. For medium to large tables, indexes can be very effective. For enormous tables, the overhead of indexing, as well as the work required to actually use the indexes, can start to add up. In such cases you might need to choose a technique that identifies groups of rows that are interesting to the query, instead of individual rows. You can use partitioning for this purpose.

If you have lots of tables, it can also make sense to create a metadata table to store some characteristics of interest for your queries. For example, if you execute queries that perform aggregations over rows in a multitenant application whose data is partitioned into many tables, you can record which users of the system are actually stored in each table, thus letting you simply ignore tables that don’t have information about those users. These tactics are usually useful only at extremely large scales. In fact, this is a crude approximation of what Infobright does. At the scale of terabytes, locating individual rows doesn’t make sense; indexes are replaced by per-block metadata.

One thing is sure: you can’t scan the whole table every time you want to query it, because it’s too big. And you don’t want to use an index because of the maintenance cost and space consumption. Depending on the index, you could get a lot of fragmentation and poorly clustered data, which would cause death by a thousand cuts through random I/O. You can sometimes work around this for one or two indexes, but rarely for more. Only two workable options remain: your query must be a sequential scan over a portion of the table, or the desired portion of the table and index must fit entirely in memory.

It’s worth restating this: at very large sizes, B-Tree indexes don’t work. Unless the index covers the query completely, the server needs to look up the full rows in the table, and that causes random I/O a row at a time over a very large space, which will just kill query response times. The cost of maintaining the index (disk space, I/O operations) is also very high. Systems such as Infobright acknowledge this and throw B-Tree indexes out entirely, opting for something coarser-grained but less costly at scale, such as per-block metadata over large blocks of data.

This is what partitioning can accomplish, too. The key is to think about partitioning as a crude form of indexing that has very low overhead and gets you in the neighborhood of the data you want. From there, you can either scan the neighborhood sequentially, or fit the neighborhood in memory and index it. Partitioning has low overhead because there is no data structure that points to rows and must be updated—partitioning doesn’t identify data at the precision of rows, and has no data structure to speak of. Instead, it has an equation that says which partitions can contain which categories of rows.
(many thanks to the great book High Performance MySQL)
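(A minimal sketch, with hypothetical names, of RANGE partitioning used as that "crude form of indexing"; note that MySQL requires the partitioning column to be part of every unique key, hence the composite primary key.)

CREATE TABLE metrics (
    id        BIGINT NOT NULL,
    user_id   BIGINT NOT NULL,
    logged_at DATE   NOT NULL,
    value     DECIMAL(10,2),
    PRIMARY KEY (id, logged_at)
) ENGINE=InnoDB
PARTITION BY RANGE (TO_DAYS(logged_at)) (
    PARTITION p2023 VALUES LESS THAN (TO_DAYS('2024-01-01')),
    PARTITION p2024 VALUES LESS THAN (TO_DAYS('2025-01-01')),
    PARTITION pmax  VALUES LESS THAN MAXVALUE
);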
99% of cases I have looked at do not benefit from PARTITIONing as much as from INDEXing.
My Rules of Thumb for using Partitioning are in http://mysql.rjweb.org/doc.php/partitionmaint . Also, that lists the only 4 use cases where partitioning improves performance.
OK, I can't say "exactly" 99%, but it is very close to that. I do believe strongly in the "4" -- I have been searching since partitioning was added to MySQL many years ago.
For Data Warehousing, the usual performance solution is to create and maintain "Summary tables". This works nicely for 'most' DW applications.
"Very large BTrees don't work"? Bull. A million-row index will have a BTree depth of about 3. A trillion rows -- about 6. Where's the "won't work"? A "point query" on a trillion row table will touch twice as many nodes in the BTree, and more of them are unlikely to be cached. But it "will work".
Infobright, with its "columnar storage", has its niche. TokuDB, with its "fractal indexing", has its niche. Neither one can say "we are better than BTrees most of the time". (Both those engines get part of their speed by compression.)
Bottom Line: Use an index. Probably a "composite" index. (More indexing tips: http://mysql.rjweb.org/doc.php/index_cookbook_mysql )
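(Continuing the hypothetical metrics table sketched above, this is what such a composite index looks like; a query that touches only the indexed columns can be answered from the index alone, which EXPLAIN reports as "Using index".)

ALTER TABLE metrics ADD INDEX idx_user_time (user_id, logged_at, value);

EXPLAIN SELECT logged_at, value
FROM metrics
WHERE user_id = 42 AND logged_at >= '2024-01-01';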

Database memory and disk work assignation

I was reading an ebook chapter about indexes and indexing strategies. Many of these aspects I already know, but I got stuck on clustered indexes in InnoDB. Here is the quote:
Clustering gives the largest improvement for I/O-bound workloads. If
the data fits in memory the order in which it’s accessed doesn’t
really matter, so clustering doesn’t give much benefit.
I believe this is true, but how am I supposed to guess whether the data will fit in memory? How does the database decide when to process the data in memory and when not?
Let's say we have a table Employee with columns ID, Name, and Phone, filled with 100,000 records.
If, for example, I put the clustered index on the ID column and perform this query
SELECT * FROM Employee;
how do I know whether this will benefit from the clustered index?
It's somewhat related to this thread:
Difference between In memory databases and disk memory database
but I am still not sure how the database will behave.
Your example might be 20MB.
"In memory" really means "in the InnoDB buffer_pool", whose size is controlled by innodb_buffer_pool_size, which should be set to about 70% of available RAM.
If your query hits the disk instead of finding everything cached in the buffer_pool, it will run (this is just a Rule of Thumb) 10 times as slow.
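(The setting can be inspected, and on MySQL 5.7.5+ resized online, like so; the 8GB figure is only an example, and the 70% figure above is a rule of thumb, not a hard requirement.)

SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
SET GLOBAL innodb_buffer_pool_size = 8 * 1024 * 1024 * 1024;   -- e.g. 8GB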
What you are saying on "clustered index" is misleading. Let me turn things around...
InnoDB really needs a PRIMARY KEY.
A PK is (by definition in MySQL) UNIQUE.
There can be only one PK for a table.
The PK can be a "natural" key composed of one (or more) columns that 'naturally' work.
If you don't have a "natural" choice, then use id INT UNSIGNED NOT NULL AUTO_INCREMENT.
The PK and the data are stored in the same BTree. (Actually a B+Tree.) This leads to "the PK is clustered with the data".
The real question is not whether something is clustered, but whether it is cached in RAM. (Remember the 10x RoT.)
If the table is small, it will stay in cache (once all its blocks are touched), hence avoid disk hits.
If some subset of a huge table is "hot", it will tend to stay in cache.
If you must access a huge table "randomly", you will suffer a slowdown due to lots of disk hits. (This happens when using UUIDs as PRIMARY KEY or other type of INDEX.)
How does the database decide when to process the data in memory, and when not?
That's 'wrong', too. All processing is in memory. On a block-by-block basis, pieces of the tables and indexes are moved into / out of the buffer_pool. A block (in InnoDB) is 16KB. And the buffer_pool is a "cache" of such blocks.
SELECT * FROM Employee;
is simple, but costly. It operates thus:
"Open" table Employee (if not already open -- a different 'cache' handles this).
Go to the start of the table. This involves drilling down the left side of the PK's BTree to the first leaf node (block). And fetch it into the buffer_pool if not already cached.
Read a row -- this will be in that leaf node.
Read next row -- this is probably in the same block. If not, get the 'next' block (read from disk if necessary).
Repeat step 4 until finished with the table.
Things get more interesting if you have a WHERE clause. And then it depends on whether the PK or some other INDEX is involved.
Etc, etc.

Insertion performance degrade with large index (MYSQL)

Recently, I found that one of the servers has high I/O traffic on disk. After some diagnostics, the high I/O turned out to be due to index writes on a certain table. I have done several evaluation tests and found that MySQL performs a high number of writes when inserting records into a table that has a large index.
The data types of the indexed columns are varchar(15) and varchar(17); both indexes are non-unique.
There are only 80 writes to disk if I load 20,000 records into the table when it has 10,000 records, whereas there are 1,700 writes to disk when the table grows to 20 million rows (with about 1 million distinct values in the indexed columns), even though the number of records being inserted is the same.
Engine is MyISAM.
Increasing the size of the indexes also increases the number of writes to disk per insert.
Is this expected BTREE index behavior, and how can I solve this issue?
Use InnoDB instead of MyISAM.
InnoDB helps by buffering writes to secondary indexes, merging them if possible, and delaying the expensive I/O. You can read more about this feature in the MySQL Manual under Controlling InnoDB Change Buffering.
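(A quick sketch, with 'your_table' as a placeholder: the engine switch is a single statement, and the change-buffering behavior is controlled by innodb_change_buffering, which accepts none, inserts, deletes, changes, purges, or all.)

ALTER TABLE your_table ENGINE=InnoDB;
SHOW VARIABLES LIKE 'innodb_change_buffering';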
Re your comment:
Inserting a new value into a B-Tree can be expensive. If there's no room at the leaf level, the insertion may cause a cascading effect of splitting the non-leaf nodes of the tree, potentially all the way up to the top of the tree. That can cause a lot of I/O, since different nodes of the tree may be stored far apart from one another on disk.
Other mitigating strategies are to make the table smaller, by moving less-used data to another table. Or by using MySQL table partitioning to make the one logical table comprised of many individual physical tables. Each such sub-table must have the same indexes, but then each individual index will be smaller.
There's an animated example here:
http://www.bluerwhite.org/btree/
Look at the example "Inserting Key 33 into a B-Tree (w/ Split)" where it shows the steps of inserting a value into a B-tree node that overfills it, and what the B-tree does in response.
Now imagine that the example illustration only shows the bottom part of a B-tree that is much deeper (as would be the case if your index B-tree has millions of entries); filling the parent node can itself cause an overflow and force the splitting operation to continue up to higher levels in the tree. This can continue all the way to the very top if all the ancestor nodes up to the root were already full.