Ran into an interesting problem with a MySQL table I was building as a temporary table for reporting purposes.
I found that if I didn't specify a storage engine, the DROP TEMPORARY TABLE command would hang for up to half a second.
If I defined my table as ENGINE = MEMORY this short hang would disappear.
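To illustrate, here is roughly what the two variants looked like (table and column names are made up for the example):

-- DROP returns immediately:
CREATE TEMPORARY TABLE report_tmp (id INT) ENGINE = MEMORY;
DROP TEMPORARY TABLE report_tmp;

-- DROP can hang for up to half a second:
CREATE TEMPORARY TABLE report_tmp (id INT);
DROP TEMPORARY TABLE report_tmp;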
As I have a solution to this problem (using MEMORY tables), my question is why would a temporary table take a long time to drop? Do they not use the MEMORY engine by default? It's not even a very big table, a couple of hundred rows with my current test data.
Temporary tables are created wherever the MySQL configuration tells them to be, typically in /tmp or somewhere else on disk. You can set this location (and even multiple locations) to a RAM disk path such as /dev/shm.
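For example, a minimal my.cnf sketch (the path is illustrative; it must exist and be writable by the server):

[mysqld]
tmpdir = /dev/shm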
Hope this helps!
If the temporary table is created with the InnoDB engine, which may be the case if your default storage engine is InnoDB, and the InnoDB buffer pool is large, DROP TEMPORARY TABLE may take some time, since it needs to scan the buffer pool for the table's pages in order to discard them.
It was mentioned in a comment to this Stack Overflow question.
Note also that DROP (TEMPORARY) TABLE takes a lock that may have a huge impact on your whole server. See for example this.
At my work, we recently had a server slow down because we had an InnoDB buffer pool of 80 GB and some SQL requests had been optimized using InnoDB temporary tables.
About 100 such DROP TEMPORARY TABLE requests every 5 minutes were sufficient to have a huge impact. And the problem was hard to debug, since the slow query log would tell us that UPDATEs of a single row accessed by primary key in some other table were taking two seconds, and there was an enormous number of such updates. But even though most query time was spent on these updates, the real cause was the DROP TEMPORARY TABLE requests.
Back when I was working heavily with MyISAM tables, I always had a cron job which ran
~# mysqlanalyze -o database
I know that MyISAM tables benefit from this in certain ways, e.g. reduced fragmentation and whatnot.
Now, when running the same command on a database where the majority of tables are InnoDB, I wonder if this "does any good" to the tables, and whether it's considered good practice to do so every now and then, or if it's rather counterproductive. I see a lot of:
Table does not support optimize, doing recreate + analyze instead
which sounds expensive in terms of disk I/O and CPU time.
I would appreciate some input on this.
https://dev.mysql.com/doc/refman/8.0/en/optimize-table.html says:
For InnoDB tables, OPTIMIZE TABLE is mapped to ALTER TABLE ... FORCE, which rebuilds the table to update index statistics and free unused space in the clustered index.
This does do some good in cases when you had too much fragmentation. Pages will be filled more efficiently, indexes will be rebuilt, and disk space occupied by the table will be reduced if you use innodb_file_per_table (which is the default in recent versions).
It does take time, depending on the size of your table. It will lock the table while it's running. It will require extra disk space while it's running, as it creates a copy of the table.
Running OPTIMIZE TABLE on an InnoDB table is usually not something you need to do frequently, only after you do a lot of inserts/updates/deletes against the table in a way that could result in fragmentation.
ANALYZE TABLE has much less impact for InnoDB. It doesn't require building a copy of the table. It's a read-only action: it reads a random sample of pages from the table and uses that to estimate the number of rows and the average row size, and it updates statistics about the indexes to guide the query optimizer. It is safe to run anytime; it locks the table for a moment, but the lock time doesn't grow with the size of the table.
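For reference, both statements side by side against a hypothetical table orders:

OPTIMIZE TABLE orders;  -- InnoDB: rebuilds the table (copy + swap), needs extra disk space
ANALYZE TABLE orders;   -- InnoDB: samples index pages, refreshes statistics only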
Don't bother. InnoDB almost never needs either ANALYZE or OPTIMIZE; don't waste your time unless you have identified a need.
An exception is a FULLTEXT index on an InnoDB table. Such can benefit from DROP INDEX, then ADD INDEX.
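A sketch of that rebuild, with a hypothetical table and index name:

ALTER TABLE articles DROP INDEX ft_body;
ALTER TABLE articles ADD FULLTEXT INDEX ft_body (body);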
If you are "reloading" the table from new data, then the following avoids downtime:
CREATE TABLE `new` LIKE `real`;
-- load `new` with the fresh data (INSERT ... SELECT, LOAD DATA, etc.)
RENAME TABLE `real` TO `old`, `new` TO `real`;  -- fast, atomic
DROP TABLE `old`;
(Caveat: The above technique probably has issues if there are FOREIGN KEYS.)
Right now, I'm using temporary tables in my select queries to speed up the execution. They are created every time I execute the query.
In my current situation, the tables are updated with new data only once per day, so I was thinking that instead of using MySQL's CREATE TEMPORARY TABLE statement, I'll create a persistent table, which in a sense would be temporary since it'd be deleted and recreated after a day. And I could fill it up with the temporary data just after I've finished updating the main tables.
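Something like this is what I have in mind (a sketch; table and column names are made up):

DROP TABLE IF EXISTS report_cache;
CREATE TABLE report_cache ENGINE=InnoDB AS
SELECT customer_id, SUM(amount) AS total
FROM orders
GROUP BY customer_id;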
Or will InnoDB's buffer pool be smart enough to cache the data for temporary tables itself?
Or is there another way for caching the temporary tables?
I'm also sending appropriate cache headers with the data loaded via AJAX to reduce server load, and AJAX queries make up about 70% of the read requests sent to MySQL.
Is what I'm thinking just a plain waste of disk space (are tables never meant to be used in this fashion), or is it actually a bright idea for my situation?
I came across a similar issue recently where the CREATE TEMPORARY TABLE came at a significant cost due to continual reuse. I also used the solution that Barranka describes (create once and truncate when finished or before reuse).
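A sketch of that pattern (table and column names are hypothetical):

CREATE TABLE IF NOT EXISTS work_tmp (id INT PRIMARY KEY, val INT) ENGINE=InnoDB;
TRUNCATE TABLE work_tmp;  -- cheap reset instead of DROP + CREATE on every run
INSERT INTO work_tmp
SELECT id, val FROM source_data WHERE val > 0;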
To increase performance even more, I used InnoDB tables created on a RAM disk (ramfs). This gives all the benefits of the InnoDB storage engine with very little I/O cost. It is a better solution than using the MEMORY storage engine which, according to Oracle support, is retained only for legacy applications and has not been improved or extended for some time.
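One way to get a per-table file onto a RAM disk (a sketch, assuming innodb_file_per_table is enabled and /mnt/ramdisk is a mounted RAM disk the server can write to; on MySQL 8.0 the path must also be listed in innodb_directories):

CREATE TABLE scratch (id INT PRIMARY KEY, v INT)
ENGINE=InnoDB
DATA DIRECTORY = '/mnt/ramdisk';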
Maybe looking at the MEMORY storage engine might help? I use these to accept data from an intensive query once a day, where the MEMORY table is then used intensively for a short period of time.
http://dev.mysql.com/doc/refman/5.5/en/memory-storage-engine.html
I know I should use engine=MEMORY to make the table in memory and engine=INNODB to make the table transaction safe. However, how can I achieve both objectives? I tried engine=MEMORY, INNODB, but I failed. My purpose is to access tables fast and allow multiple threads to change contents of tables.
You haven't stated your goals above. I guess you're looking for good performance, and you also seem to want the table to be transactional. Your only option really is InnoDB. As long as you have configured InnoDB to use enough memory to hold your entire table (with innodb_buffer_pool_size), and there is not excessive pressure from other InnoDB tables on the same server, the data will remain in memory. If you're concerned about write performance (and again barring other uses of the same system) you can reduce durability to drastically increase write performance by setting innodb_flush_log_at_trx_commit = 0 and disabling binary logging.
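A my.cnf sketch of those settings (values are illustrative; size the buffer pool to fit your data and leave headroom):

[mysqld]
innodb_buffer_pool_size = 4G
innodb_flush_log_at_trx_commit = 0  # trades durability for write speed
# leave log-bin unset (or use skip-log-bin on MySQL 8.0) to disable binary logging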
Using any sort of triggers with temporary tables will be a mess to maintain, and won't give you any benefits of transactionality on the temporary tables.
You are asking for a way to create a table with two (or more) engines; that is not possible with MySQL.
However, I will guess that you want to use MEMORY because you don't think InnoDB will be fast enough for your needs. I think InnoDB is pretty fast and will probably be enough, but if you really need it, you should try creating two tables:
table1 (MEMORY) <-- here is where you will run all the SELECTs
table2 (InnoDB) <-- here you will run the UPDATEs, INSERTs, DELETEs, etc., and add a TRIGGER so that when this one is modified, table1 gets the same modification (see the sketch below).
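A minimal sketch of that wiring, with hypothetical table definitions (you would need similar triggers for UPDATE and DELETE):

CREATE TABLE table1 (id INT PRIMARY KEY, val VARCHAR(100)) ENGINE=MEMORY;
CREATE TABLE table2 (id INT PRIMARY KEY, val VARCHAR(100)) ENGINE=InnoDB;

CREATE TRIGGER table2_ai AFTER INSERT ON table2
FOR EACH ROW
  INSERT INTO table1 (id, val) VALUES (NEW.id, NEW.val);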
As far as I know, there are two ways.
1st way
Create a temporary table (these behave much like ordinary tables, with the small difference that they are automatically dropped when the session ends):
CREATE TEMPORARY TABLE sample (id INT) ENGINE=InnoDB;
2nd way
You have to create two tables, one with the MEMORY engine and the other with InnoDB (or BDB).
First insert all the data into your InnoDB table, and then use a trigger to copy the data into the MEMORY table.
And if you want to empty the data in the InnoDB table, you can do that with the same trigger.
You can achieve this using events as well (see the sketch below).
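A sketch using the event scheduler (assumes event_scheduler=ON and that both hypothetical tables share the same structure and primary key):

CREATE EVENT refresh_mem_copy
ON SCHEDULE EVERY 1 HOUR
DO
  REPLACE INTO mem_table SELECT * FROM innodb_table;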
I have a MySQL table with over 30 million records that was originally stored with MyISAM. Here is a description of the table:
I would run the following query against this table which would generally take around 30 seconds to complete. I would change #eid each time to avoid database or disk caching.
select count(fact_data.id)
from fact_data
where fact_data.entity_id=#eid
and fact_data.metric_id=1
I then converted this table to InnoDB without making any other changes, and afterwards the same query returns in under a second every single time I run it. Even when I randomly set #eid to avoid caching, the query returns in under a second.
I've been researching the differences between the two storage engines to try to explain the dramatic improvement in performance, but haven't been able to come up with anything. In fact, much of what I've read indicates that MyISAM should be faster.
The queries I'm running are against a local database with no other processes hitting the database at the time of the tests.
That's a surprisingly large performance difference, but I can think of a few things that may be contributing.
MyISAM has historically been viewed as faster than InnoDB, but for recent versions of InnoDB, that is true for a much, much smaller set of use cases. MyISAM is typically faster for table scans of read-only tables. In most other use cases, I typically find InnoDB to be faster. Often many times faster. Table locks are a death knell for MyISAM in most of my usage of MySQL.
MyISAM caches indexes in its key buffer. Perhaps you have set the key buffer too small for it to effectively cache the index for your somewhat large table.
MyISAM depends on the OS to cache table data from the .MYD files in the OS disk cache. If the OS is running low on memory, it will start dumping its disk cache. That could force it to keep reading from disk.
InnoDB caches both indexes and data in its own memory buffer. You can tell the OS not to also use its disk cache if you set innodb_flush_method to O_DIRECT, though this isn't supported on OS X.
InnoDB usually buffers data and indexes in 16 KB pages. Depending on how you are changing the value of #eid between queries, it may have already cached the data for one query due to the disk reads from a previous query.
Make sure you created the indexes identically. Use EXPLAIN to check whether MySQL is using the index. Since you included the output of DESCRIBE instead of SHOW CREATE TABLE or SHOW INDEXES FROM, I can't tell if entity_id is part of a composite index. If it were not the first part of a composite index, it wouldn't be used. The sketch below shows an index that would match the query.
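For example (a sketch using the column names from the question; the index name is made up):

ALTER TABLE fact_data ADD INDEX idx_entity_metric (entity_id, metric_id);
EXPLAIN SELECT COUNT(fact_data.id) FROM fact_data
WHERE fact_data.entity_id = 123 AND fact_data.metric_id = 1;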
If you are using a relatively modern version of MySQL, run the following command before running the query:
set profiling = 1;
That will turn on query profiling for your session. After running the query, run
show profiles;
That will show you the list of queries for which profiles are available. By default it keeps the last 15 (the profiling_history_size setting). Assuming your query was the first one, run:
show profile for query 1;
You will then see the duration of each stage in running your query. This is extremely useful for determining what (e.g., table locks, sorting, creating temp tables, etc.) is causing a query to be slow.
My first suspicion would be that the original MyISAM table and/or indexes became fragmented over time resulting in the performance slowly degrading. The InnoDB table would not have the same problem since you created it with all the data already in it (so it would all be stored sequentially on disk).
You could test this theory by rebuilding the MyISAM table. The easiest way to do this would be to use a "null" ALTER TABLE statement:
ALTER TABLE mytable ENGINE = MyISAM;
Then check the performance to see if it is better.
Another possibility is that the database itself is simply tuned for InnoDB performance rather than MyISAM. For example, InnoDB uses the innodb_buffer_pool_size parameter to determine how much memory to allocate for caching data and indexes, while MyISAM uses the key_buffer_size parameter. If your database has a large InnoDB buffer pool and a small key buffer, then InnoDB performance is going to be better than MyISAM performance, especially for large tables.
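You can compare the two settings quickly:

SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
SHOW VARIABLES LIKE 'key_buffer_size';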
What are your index definitions? There are ways to create indexes for MyISAM such that your index fields will not be used when you think they would be.
Questions:
1 - What is meant by Overhead? When I click the "Optimize table" button on a MyISAM table, the Overhead and Effective data are gone. I wonder what it does to my table?
2 - Do I need to care about the Overhead and Effective values at all? How do I fix the Overhead and Effective problem on an InnoDB table?
Fixing InnoDB is not as trivial as a click of a button; MyISAM is.
Under the hood, OPTIMIZE TABLE will do this to a MyISAM table called mytb:
Create empty temp table with same structure as mytb
Copy MyISAM data from mytb into the temp table
Drop table mytb
Rename temp table to mytb
Run ANALYZE TABLE against mytb and store index statistics
OPTIMIZE TABLE does not work that way with InnoDB, for two major reasons:
REASON #1 : InnoDB Storage Layout
By default (before MySQL 5.6), InnoDB has innodb_file_per_table disabled. Everything InnoDB and its grandmother lands in ibdata1. Running OPTIMIZE TABLE does the following to an InnoDB table called mytb:
Create empty InnoDB temp table with same structure as mytb
Copy InnoDB data from mytb into the temp table
Drop table mytb
Rename temp table to mytb
Run ANALYZE TABLE against mytb and store index statistics
Unfortunately, the temp table used for shrinking mytb is appended to ibdata1. INSTANT GROWTH FOR ibdata1 !!! In light of this, ibdata1 will never shrink. To make matters worse, ANALYZE TABLE is useless (explained in REASON #2).
If you have innodb_file_per_table enabled, the first four(4) steps will work since the data is not stored in ibdata1, but in an external tablespace file called mytb.ibd. That can shrink.
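A sketch of that path (the rebuild moves the table's data out of ibdata1 into its own file):

SET GLOBAL innodb_file_per_table = ON;  -- affects tables created or rebuilt afterwards
ALTER TABLE mytb ENGINE=InnoDB;         -- rebuilds mytb into an external mytb.ibd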
REASON #2 : Index Statistics are Always Recomputed
InnoDB does not effectively store index statistics. In fact, if you run ANALYZE TABLE on mytb, statistics are created and stored. Unfortunately, by design, InnoDB will dive into the BTREE pages of its indexes, guesstimate key cardinalities, and use those numbers to prep the MySQL Query Optimizer. This is an ongoing process. In effect, ANALYZE TABLE is useless because the index statistics it calculates are overwritten with each query executed against that table. I wrote about this on DBA StackExchange on June 21, 2011.
Percona explained this thoroughly in www.mysqlperformanceblog.com
As for overhead in MyISAM, that number can be figured out.
For a MyISAM table, the overhead represents internal fragmentation. This is quite common in a table that experiences INSERTs, UPDATEs, and DELETEs, especially if you have BLOB or VARCHAR columns. Running OPTIMIZE TABLE makes such fragmentation disappear by copying to a temp table (naturally not copying empty space).
Going back to InnoDB, how do you effectively eliminate wasted space? You need to rearchitect ibdata1 to hold less info. Within ibdata1 you have four types of data:
Table Data Pages
Index Data Pages
Table MetaData
MVCC Data for Transactions
You can permanently move tables and indexes out of ibdata1 forever. What about data and indexes already housed in ibdata1?
Follow the InnoDB Cleanup Plan that I posted October 29, 2010: Howto: Clean a mysql InnoDB storage engine?
In fact "OPTIMIZE TABLE" is a useless waste of time on MyISAM, because if you have to do it, your database is toast already.
It takes a very long time on large tables, and blocks write access to the table while it does so. Moreover, it has very nasty effects on the MyISAM key cache etc.
So in summary
small tables never need "optimize table"
large tables can never use "optimize table" (or indeed MyISAM)
It is possible to achieve (roughly) the same thing in InnoDB simply using an ALTER TABLE statement which makes no schema changes (normally ALTER TABLE t ENGINE=InnoDB). It's not as quick as MyISAM, because it doesn't do the various small-table-which-fits-in-memory optimisations.
MyISAM also uses a bunch of index optimisations to compress index pages, which generally produce quite small indexes. InnoDB doesn't have these either.
If your database is small, you don't need it. If it's big, you can't really use MyISAM anyway (because an unplanned shutdown leaves tables needing rebuilding, which takes too long on large tables). Just don't use MyISAM if you need durability, reliability, transactions, any level of concurrency, or generally care about robustness in any way.