I have studied that, to improve the performance of a database, two types of organization can be set: primary organization and secondary organization.
The first determines how the file records are physically stored; the second creates indexes to improve access to the records.
I already know how to create indexes in MySQL, so I'm not asking about that. What I would like to know is how to tell MySQL to physically store the records according to an attribute, in order to create one of the following:
- heap file
- sorted file
- hash file
Is there a way?
MySQL stores everything in pages, grouped into extents. There is no configuration for this beyond the page size.
See: File Space Management
Only MEMORY tables support an index type other than B-tree: in addition to B-tree indexes, they support hash indexes.
Note that hash indexes only support equal/not equal searches.
See: Comparison of B-Tree and Hash Indexes
Basic table creation syntax: Chapter 14 Storage Engines
Here are a couple of examples straight from the manual:
CREATE TABLE t (i INT) ENGINE = INNODB;
or
CREATE TABLE t (i INT) ENGINE = HEAP;
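Note that HEAP is the older synonym for the MEMORY engine. To combine this with the hash indexes mentioned above, the index type can be declared explicitly. A minimal sketch, with an illustrative table name:
CREATE TABLE lookup (
    id INT,
    INDEX USING HASH (id)
) ENGINE = MEMORY;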
This post says:
If you’re running Innodb Plugin on Percona Server with XtraDB you get
benefit of a great new feature – ability to build indexes by sort
instead of via insertion
However, I could not find any info on this. I'd like to have the ability to reorganize how a table is laid out physically, similar to PostgreSQL's CLUSTER command or MyISAM's "ALTER TABLE ... ORDER BY". For example, the table "posts" has millions of rows in random insertion order, and most queries use "where userid = ". I want the table to keep the rows belonging to one user physically close together on disk, so that common queries require little I/O. Is this possible with XtraDB?
Clarification concerning the blog post
The feature you are basically looking at is fast index creation. This feature speeds up the creation of secondary indexes on InnoDB tables, but it is only used in very specific cases. For example, it is not used during OPTIMIZE TABLE, which can therefore be dramatically sped up by dropping the indexes first, then running OPTIMIZE TABLE, and then recreating the indexes with fast index creation (this is what the post you linked was about).
Some automation for the cases that can be improved by applying this feature manually, as above, was added to Percona Server as a system variable named expand_fast_index_creation. If activated, the server should use fast index creation not only in the very specific cases, but in all cases where it might help, such as OPTIMIZE TABLE, the problem mentioned in the linked blog article.
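As a minimal sketch, assuming a Percona Server build that ships this variable, it can be enabled for the session before running the affected statement (the table name is illustrative):
SET SESSION expand_fast_index_creation = ON;
OPTIMIZE TABLE posts;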
Concerning your question
Your question was actually whether it is possible to store InnoDB tables in a custom order, to speed up specific kinds of queries by exploiting locality on disk.
This is not possible. InnoDB rows are saved in pages based on the clustered index (which is essentially the primary key). The rows/pages might end up in a chaotic order, in which case one can run OPTIMIZE TABLE on the InnoDB table. With this command the table is actually recreated in primary key order, which gathers rows with nearby primary key values onto the same or neighboring pages.
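A sketch of that, using the table name from your example (for InnoDB, OPTIMIZE TABLE is implemented as a full table rebuild):
OPTIMIZE TABLE posts;
-- roughly equivalent for InnoDB:
ALTER TABLE posts ENGINE = InnoDB;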
That is all you can force InnoDB to do. You can read the manual about the clustered index, another page in the manual as a definitive answer that this is not possible ("ORDER BY does not make sense for InnoDB tables because InnoDB always orders table rows according to the clustered index."), and the same question on dba.stackexchange, whose answers might interest you.
The Amazon RDS Customer Data Import Guide for MySQL (written in 2009) provides the following tip to decrease load times for MySQL -
Create all secondary indexes prior to loading. This is counterintuitive for those familiar with other databases. Adding or modifying a secondary index causes MySQL to create a new table with the index changes, copy the data from the existing table to the new table, and drop the original table.
However, there are several articles and Stack Overflow posts from 2010 onward that provide performance tests showing that creating indexes after loading is more performant. Where did this recommendation come from, and did it just apply to an older version of MySQL? If so, please provide exact details. Or does it still apply in specific cases?
The AWS recommendation to put secondary indexes in place before loading the data applied to older MySQL versions (< 5.5) because of the way secondary indexes were handled:
From the MySQL 5.5 docs:
Creating and dropping secondary indexes has traditionally involved
significant overhead from copying all the data in the InnoDB table.
The fast index creation feature of the InnoDB Plugin makes both CREATE
INDEX and DROP INDEX statements much faster for InnoDB secondary
indexes.
MySQL offers the following recommendation in the 5.5 documentation:
Because index maintenance can add performance overhead to many data
transfer operations, consider doing operations such as ALTER TABLE ...
ENGINE=INNODB or INSERT INTO ... SELECT * FROM ... without any
secondary indexes in place, and creating the indexes afterward.
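As an illustrative sketch of that recommendation (table and column names are hypothetical):
CREATE TABLE events (
    id BIGINT NOT NULL PRIMARY KEY,
    user_id INT NOT NULL,
    payload TEXT
) ENGINE = InnoDB;

-- bulk load with no secondary indexes in place
INSERT INTO events SELECT * FROM staging_events;

-- create the secondary index afterward
ALTER TABLE events ADD INDEX user_idx (user_id);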
If you use MySQL 5.5 or higher with AWS, you can take advantage of the Fast Index Creation feature, which significantly speeds up secondary index creation:
Fast Index Creation is a capability first introduced in the InnoDB Plugin, now part of the MySQL server in 5.5 and higher, that speeds up
creation of InnoDB secondary indexes by avoiding the need to
completely rewrite the associated table. The speedup applies to
dropping secondary indexes also.
I am creating an ASP.NET *MVC* application using EF code first. I used SQL Azure as my database, but it turns out SQL Azure is not reliable. So I am thinking of using MySQL/PostgreSQL for the database.
I wanted to know the repercussions/implications of using EF code first with MySQL/PostgreSQL with regard to performance.
Has anyone used this combo in production or knows anyone who has used it?
EDIT
I keep getting the following exceptions from SQL Azure.
SqlException: "*A transport-level error has occurred when receiving results from the server.*
(provider: TCP Provider, error: 0 - An existing connection was forcibly closed by the remote host.)"
SqlException: *"Database 'XXXXXXXXXXXXXXXX' on server 'XXXXXXXXXXXXXXXX' is not
currently available. Please retry the connection later.* If the problem persists, contact
customer support, and provide them the session tracing ID of '4acac87a-bfbe-4ab1-bbb6c-4b81fb315da'.
Login failed for user 'XXXXXXXXXXXXXXXX'."
First, your problem seems to be a network issue, perhaps with your ISP. If you switch to a remotely hosted PostgreSQL or MySQL db, I think you will run into the same problems.
Secondly, comparing MySQL and PostgreSQL performance is relatively tricky. In general, MySQL is optimized for primary-key lookups, while PostgreSQL is more generally optimized for complex use cases. This may be a bit low-level, but...
MySQL InnoDB tables are basically B-tree indexes where the leaf nodes include the table data. The primary key is the key of the index. If no primary key is provided, one will be created for you. This means two things:
SELECT * FROM my_large_table will be slow, as there is no support for a physical-order scan.
SELECT * FROM my_large_table WHERE secondary_index_value = 2 requires two index traversals, since the secondary index can only refer to primary key values.
In contrast, a selection by primary key value will be faster than on PostgreSQL, because the index contains the data.
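To make this concrete, a sketch against the hypothetical table above (the primary key column name id is assumed):
-- single traversal of the clustered index; the leaf nodes hold the row data
SELECT * FROM my_large_table WHERE id = 42;

-- two traversals: the secondary index yields primary key values,
-- which are then looked up in the clustered index
SELECT * FROM my_large_table WHERE secondary_index_value = 2;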
PostgreSQL, by comparison, stores information unordered in a series of heap pages, with the indexes separate from the data. If you want to pull a row by primary key, you scan the index, read the data page where the row is found, and then pull the data. By comparison, pulling from a secondary index is not any slower. Additionally, the tables are structured such that sequential disk access is possible, so a long SELECT * FROM my_large_table lets the operating system's read-ahead cache speed performance significantly.
In short, if your queries are simply joinless selection by primary key, then MySQL will give you better performance. If you have joins and such, PostgreSQL will do better.
I've got an index on columns a VARCHAR(255), b INT in an InnoDB table. Given two (a,b) pairs, can I use the MySQL index to determine if the pairs are the same from a C program (i.e. without using strcmp and a numerical comparison)?
Where is a MySQL InnoDB index stored in the file system?
Can it be read and used from a separate program? What is the format?
How can I use an index to determine if two keys are the same?
Note: An answer to this question should either a) provide a method for accessing a MySQL index in order to accomplish this task or b) explain why the MySQL index cannot practically be accessed/used in this way. A platform-specific answer is fine, and I'm on Red Hat 5.8.
Below is the previous version of this question, which provides more context but seems to distract from the actual question. I understand that there are other ways to accomplish this example within MySQL, and I provide two. This is not a question about optimization, but rather of factoring out a piece of complexity that exists across many different dynamically generated queries.
I could accomplish my query using a subselect with a subgrouping, e.g.
SELECT c, AVG(max_val)
FROM (
    SELECT c, MAX(val) AS max_val
    FROM table
    GROUP BY a, b
) AS t
GROUP BY c
But I've written a UDF that allows me to do it with a single select, e.g.
SELECT c, MY_UDF(a, b, val)
FROM table
GROUP BY c
The key here is that I pass the fields a and b to the UDF, and I manually manage a,b subgroups in each group. Column a is a varchar, so this involves a call to strncmp to check for matches, but it's reasonably fast.
However, I have an index my_key (a ASC, b ASC). Instead of checking for matches on a and b manually, can I just access and use the MySQL index? That is, can I get the index value in my_key for a given row or (a,b) pair in C (inside the UDF)? And if so, would the index value be guaranteed to be unique for any (a,b) pair?
I would like to call MY_UDF(a, b, val) and then look up the MySQL index value for (a,b) in C from the UDF.
Look back at your original query
SELECT c, AVG(max_val)
FROM
(
SELECT c, MAX(val) AS max_val
FROM table
GROUP BY a, b
) AS t
GROUP BY c;
You should first make sure the subselect gives you what you want by running
SELECT c, MAX(val) AS max_val
FROM table
GROUP BY a, b;
If the result of the subselect is correct, then run your full query. If that result is correct, then you should do the following:
ALTER TABLE `table` ADD INDEX abc_ndx (a,b,c,val);
This will speed up the query by getting all needed data from the index only. The source table never needs to be consulted.
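If the covering index is being used as intended, EXPLAIN should confirm it; a quick check against the query's subselect:
EXPLAIN SELECT c, MAX(val) AS max_val FROM `table` GROUP BY a, b;
-- the Extra column should report "Using index" once only the index is touched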
Writing a UDF and calling it in a single SELECT is just masquerading a subselect, and it creates more overhead than the query needs. Simply placing your full query (one nested pass over the data) in a stored procedure will be more effective than gathering most of the data in the UDF and executing single-row selects iteratively (something like O(n log n) running time, with possibly longer 'Sending data' states).
UPDATE 2012-11-27 13:46 EDT
You can access the index without touching the table by doing two things:
Create a decent Covering Index
ALTER TABLE table ADD INDEX abc_ndx (a,b,c,val);
Run the SELECT query I mentioned before
Since all the columns of the query are in the index, the Query Optimizer will only touch the index (or precached index pages). If the table is MyISAM, you can ...
set up the MyISAM table to have a dedicated key cache that can be preloaded on mysqld startup (sketched below)
run SELECT a,b,c,val FROM table; to load the index pages into MyISAM's default key cache
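A minimal sketch of the dedicated key cache approach; the cache name and size are illustrative, and the two statements could be placed in an init-file so they run on mysqld startup:
-- in my.cnf under [mysqld]: hot_cache.key_buffer_size = 128M
CACHE INDEX `table` IN hot_cache;   -- assign the table's indexes to the dedicated cache
LOAD INDEX INTO CACHE `table`;      -- preload the index pages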
Trust me, you really do not want to access index pages against mysqld's will. What do I mean by that?
For MyISAM, the index pages of a table are stored in its .MYI file. Each DML statement will summon a full table lock.
For InnoDB, the index pages are loaded into the InnoDB Buffer Pool. Consequently, the associated data pages will load into the InnoDB Buffer Pool as well.
You should not circumvent mysqld to access index pages from Python, Perl, PHP, C++, or Java, given the constant I/O needed by MyISAM and the constant MVCC protocols being exercised by InnoDB.
There is a NoSQL paradigm (called HandlerSocket) that would permit low-level access to MySQL tables that can cleanly bypass mysqld's normal access patterns. I would not recommend it since there was a bug in it when using it to issue writes.
UPDATE 2012-11-30 12:11 EDT
From your last comment
I'm using InnoDB, and I can see how the MVCC model complicates things. However, apparently InnoDB stores only one version (the most recent) in the index. The access pattern for the relevant tables is write-once, read-many, so if the index could be accessed, it could provide a single, reliable datum for each key.
When it comes to InnoDB, MVCC is not complicating anything. It can actually become your best friend provided:
if you have autocommit enabled (It should be enabled by default)
the access pattern for the relevant tables is write-once, read-many
I would expect the accessed index pages to sit in the InnoDB Buffer Pool virtually forever if they are read repeatedly. I would just make sure your innodb_buffer_pool_size is set high enough to hold the necessary InnoDB data.
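A quick way to sanity-check the current setting (the value is reported in bytes; in the MySQL versions discussed here it must be set in my.cnf and requires a restart to change):
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';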
If you just want to access an index outside of MySQL, you will have to use the API for one of the MySQL storage engines. The default engine is InnoDB. See overview here: InnoDB Internals. This describes (at a very high level) both the data layout on disk and the APIs to access it. A more detailed description is here: Embedded InnoDB.
However, rather than write your own program that uses InnoDB APIs directly (which is a lot of work), you might use one of the projects that have already done that work:
HandlerSocket: gives NoSQL access to InnoDB tables, and runs in a UDF. See a very informative blog post from the developer. The goal of HandlerSocket is to provide a NoSQL interface exposed as a network daemon, but you could use the same technique (and much of the same code) to provide something that would be used by a query within MySQL.
memcached InnoDB plugin: gives memcached-style access to InnoDB tables.
HailDB: gives NoSQL access to InnoDB tables, and runs on top of Embedded InnoDB. See the conference presentation. EDIT: HailDB probably won't work running side-by-side with MySQL.
I believe any of these can run side-by-side with MySQL (using the same tables live), and can be used from C, so they do meet your requirements.
If you can use/migrate to MySQL Cluster, see also NDB API, a direct API, and ndbmemcache, a way to access MySQL Cluster using memcache API.
This is hard to answer without knowing why you are trying to do this, because the implications of different approaches are very different.
You probably cannot access the key directly.
I don't think this would actually make any difference performance-wise.
If you set up covering indexes in the right order, MySQL will not fetch a single page from the hard disk but will deliver the result directly out of the index. There's nothing faster than this.
Note that your subselect may end up in a temporary table on disk if its result grows larger than your tmp_table_size or max_heap_table_size.
Check the status variable Created_tmp_disk_tables if you're not sure.
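For example:
SHOW GLOBAL STATUS LIKE 'Created_tmp_disk_tables';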
You can find more on how MySQL uses internal temporary tables here:
http://dev.mysql.com/doc/refman/5.5/en/internal-temporary-tables.html
If you want, post your table structure for a review.
No. There is no practical way to make use of a MySQL index from within a C program, accessing the index by any means other than the MySQL engine, to check whether two (a,b) pairs (keys) are the same.
There are more practical solutions which don't require accessing MySQL datafiles outside of the MySQL engine or writing a user-defined function.
Q: Do you know where the mysql index is stored in the file system?
The location of the index within the file system depends on the storage engine for the table. For the MyISAM engine, the indexes are stored in .MYI files under the datadir/database directory; InnoDB indexes are stored within an InnoDB-managed tablespace file. If the innodb_file_per_table variable was set when the table was created, there will be a separate .ibd file for each table under the innodb_data_home_dir/database subdirectory.
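To check which layout applies on a given server:
SHOW VARIABLES LIKE 'innodb_file_per_table';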
Q: Do you know what the format is?
The storage format of each storage engine (MyISAM, InnoDB, et al.) is different, and also depends on the version. I have some familiarity with how the data is stored, in terms of what MySQL requires of the storage engine; detailed information about the internals would be specific to each engine.
Q: What makes it impractical?
It's impractical because it's a whole lot of work, and it's going to be dependent on details of storage engines that are likely to change in the future. It would be much more practical to define the problem space, and to write a SQL statement that would return what you want.
As Quassnoi pointed out in his comment to your question, it's not at all clear what particular problem you are trying to solve by creating a UDF or accessing MySQL indexes from outside of MySQL. I'm certain that Quassnoi would have a good way to accomplish what you need with an efficient SQL statement.