I know I should use ENGINE=MEMORY to keep a table in memory and ENGINE=InnoDB to make a table transaction-safe. However, how can I achieve both objectives? I tried engine=MEMORY, INNODB, but that failed. My goal is to access tables fast while allowing multiple threads to change their contents.
You haven't stated your goals above. I guess you're looking for good performance, and you also seem to want the table to be transactional. Your only option really is InnoDB. As long as you have configured InnoDB to use enough memory to hold your entire table (with innodb_buffer_pool_size), and there is not excessive pressure from other InnoDB tables on the same server, the data will remain in memory. If you're concerned about write performance (and again barring other uses of the same system) you can reduce durability to drastically increase write performance by setting innodb_flush_log_at_trx_commit = 0 and disabling binary logging.
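A minimal my.cnf sketch of those settings; the values are placeholders that you would size to your data and available RAM:
[mysqld]
# Large enough to hold the whole working set in memory (placeholder value).
innodb_buffer_pool_size = 8G
# Reduced durability: the log is flushed to disk about once per second,
# so up to a second of committed transactions can be lost on a crash.
innodb_flush_log_at_trx_commit = 0
# MySQL 8.0: disable binary logging (in 5.x it is off unless log_bin is set).
skip-log-bin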
Using any sort of triggers with temporary tables will be a mess to maintain, and won't give you any benefits of transactionality on the temporary tables.
You are asking for a way to create a table with two (or more) engines; that is not possible with MySQL.
However, I would guess you want to use MEMORY because you don't think InnoDB will be fast enough for your needs. InnoDB is pretty fast and will probably be enough, but if you really need it, try creating two tables:
table1 (MEMORY) <-- here is where you will run all the SELECTs
table2 (InnoDB) <-- here you will run the UPDATEs, INSERTs, DELETEs, etc., and add a TRIGGER so that when this one is modified, table1 gets the same modification.
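A minimal sketch of the trigger idea, with hypothetical table and column names (id, payload); you would need similar AFTER UPDATE and AFTER DELETE triggers, and note that writes to the MEMORY copy are not transactional, so a rolled-back transaction on table2 can leave table1 out of sync:
-- Keep the MEMORY copy (table1) in step with the InnoDB table (table2).
CREATE TRIGGER table2_after_insert AFTER INSERT ON table2
FOR EACH ROW
  INSERT INTO table1 (id, payload) VALUES (NEW.id, NEW.payload);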
As far as I know, there are two ways.
1st way
Create a temporary table (it behaves like a regular table, with the small difference that it is dropped automatically when the session ends):
create temporary table sample(id int) engine=InnoDB;
2nd way
Create two tables, one with the MEMORY engine and the other with InnoDB (or BDB).
First insert all the data into your InnoDB table, then use a trigger to copy the data into the MEMORY table.
If you want to empty the data in the InnoDB table afterwards, you can do that with the same trigger.
You can also achieve this using events.
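For example, a sketch of the event variant with hypothetical table names; it rebuilds the MEMORY copy from the InnoDB table every five minutes and requires the event scheduler to be enabled (SET GLOBAL event_scheduler = ON):
DELIMITER //
CREATE EVENT refresh_memory_copy
ON SCHEDULE EVERY 5 MINUTE
DO
BEGIN
  -- Rebuild the MEMORY copy from the InnoDB source (hypothetical names).
  DELETE FROM table_memory;
  INSERT INTO table_memory SELECT * FROM table_innodb;
END//
DELIMITER ;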
Back when I was working heavily with MyISAM tables, I always had a cron job which ran
~# mysqlanalyze -o database
I know that MyISAM benefits from this in certain ways, e.g. fragmentation and whatnot.
Now, when running the same command on a database where the majority of tables are InnoDB, I wonder whether this does the tables any good and is considered good practice to do every now and then, or whether it's rather counterproductive. I keep reading a lot of:
Table does not support optimize, doing recreate + analyze instead
which sounds expensive in terms of disk I/O and CPU time?!
I would appreciate some input on this.
https://dev.mysql.com/doc/refman/8.0/en/optimize-table.html says:
For InnoDB tables, OPTIMIZE TABLE is mapped to ALTER TABLE ... FORCE, which rebuilds the table to update index statistics and free unused space in the clustered index.
This does do some good in cases when you had too much fragmentation. Pages will be filled more efficiently, indexes will be rebuilt, and disk space occupied by the table will be reduced if you use innodb_file_per_table (which is the default in recent versions).
It does take time, depending on the size of your table. It will lock the table while it's running. It will require extra disk space while it's running, as it creates a copy of the table.
Running OPTIMIZE TABLE on an InnoDB table is usually not necessary to do frequently, but only after you do a lot of inserts/updates/deletes against the table in a way that could result in fragmentation.
ANALYZE TABLE has much less impact for InnoDB. It doesn't require building a copy of the table; it's a read-only action. It reads a random sample of pages from the table and uses that to estimate the number of rows and the average row size, and it updates statistics about the indexes to guide the query optimizer. It is safe to run anytime; it locks the table for a moment, but that moment does not grow with the size of the table.
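For reference, both are single statements (the table name here is just a placeholder):
OPTIMIZE TABLE my_table;   -- rebuilds the table and updates index statistics
ANALYZE TABLE my_table;    -- only refreshes index statistics; much cheaper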
Don't bother. InnoDB almost never needs either ANALYZE or OPTIMIZE; don't waste your time unless you have identified a need.
An exception is a FULLTEXT index on an InnoDB table. Such an index can benefit from DROP INDEX followed by ADD INDEX.
If you are "reloading" the table from new data, then the following avoids downtime:
CREATE TABLE `new` LIKE `real`;
-- load `new` with the fresh data
RENAME TABLE `real` TO `old`, `new` TO `real`;  -- fast, atomic
DROP TABLE `old`;
(Caveat: The above technique probably has issues if there are FOREIGN KEYS.)
Instead of doing ALTER TABLE, I prefer to create a new table, copy the data to it, and then switch to using it. When doing so in InnoDB, I always have a hard time performing:
INSERT INTO new_huge_tbl (SELECT * FROM old_huge_tbl)
Because of the nature of transactions, if at any time I need to stop this operation, the rollback isn't easy, to say the least. Is there any way I can perform this operation in InnoDB without it being a transaction?
No, it's not possible to avoid the transactional overhead in a simple way. You would perhaps have two options:
In your own application, use many smaller transactions (of e.g. 10k rows each) to copy the data in small batches (a sketch follows below).
Use an existing tool which does the copy for you using the same strategy. I could suggest pt-archiver from the Percona Toolkit.
Internally, when doing table copies for e.g. ALTER TABLE, InnoDB does in fact do exactly that, batching the copy into many smaller transactions.
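A minimal sketch of the first option, assuming old_huge_tbl has an integer auto-increment primary key id; with autocommit enabled, each batch is its own transaction, so interrupting the copy only rolls back the current batch:
SET @last_id = 0;
-- Repeat the following two statements until the INSERT copies 0 rows.
INSERT INTO new_huge_tbl
  SELECT * FROM old_huge_tbl
  WHERE id > @last_id
  ORDER BY id
  LIMIT 10000;
SELECT IFNULL(MAX(id), @last_id) INTO @last_id FROM new_huge_tbl;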
We have a huge database and adding a new column is taking too long. Is there any way to speed things up?
Unfortunately, there's probably not much you can do. When adding a new column, MySQL makes a copy of the table and copies the existing data into it. You may find it faster to do
CREATE TABLE new_table LIKE old_table;
ALTER TABLE new_table ADD COLUMN (column definition);
INSERT INTO new_table(old columns) SELECT * FROM old_table;
RENAME TABLE old_table TO tmp, new_table TO old_table;
DROP TABLE tmp;
This hasn't been my experience, but I've heard others have had success. You could also try disabling indices on new_table before the insert and re-enabling later. Note that in this case, you need to be careful not to lose any data which may be inserted into old_table during the transition.
Alternatively, if your concern is impacting users during the change, check out pt-online-schema-change which makes clever use of triggers to execute ALTER TABLE statements while keeping the table being modified available. (Note that this won't speed up the process however.)
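For example, a sketch of a pt-online-schema-change invocation with placeholder database, table, and column names:
~# pt-online-schema-change --alter "ADD COLUMN new_col INT" D=mydb,t=huge_table --execute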
There are four main things that you can do to make this faster:
If using innodb_file_per_table the original table may be highly fragmented in the filesystem, so you can try defragmenting it first.
Make the buffer pool as big as sensible, so more of the data, particularly the secondary indexes, fits in it.
Make innodb_io_capacity high enough, perhaps higher than usual, so that insert buffer merging and flushing of modified pages will happen more quickly. Requires MySQL 5.1 with InnoDB plugin or 5.5 and later.
MySQL 5.1 with the InnoDB plugin and MySQL 5.5 and later support fast alter table. One of the things it makes a lot faster is adding or rebuilding indexes that are both not unique and not in a foreign key. So you can do this:
A. ALTER TABLE ADD your column, DROP your non-unique indexes that aren't in FKs.
B. ALTER TABLE ADD back your non-unique, non-FK indexes.
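A minimal sketch of the two steps, with hypothetical table, column, and index names (idx_created is non-unique and not referenced by any foreign key):
-- Step A: add the column and drop the non-unique, non-FK index in one rebuild.
ALTER TABLE huge_table
  ADD COLUMN new_col INT NULL,
  DROP INDEX idx_created;
-- Step B: add the index back; fast index creation builds it by sorting.
ALTER TABLE huge_table
  ADD INDEX idx_created (created_at);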
This should provide these benefits:
a. Less use of the buffer pool during step A because the buffer pool will only need to hold some of the indexes, the ones that are unique or in FKs. Indexes are randomly updated during this step so performance becomes much worse if they don't fully fit in the buffer pool. So more chance of your rebuild staying fast.
b. The fast alter table rebuilds the index by sorting the entries then building the index. This is faster and also produces an index with a higher page fill factor, so it'll be smaller and faster to start with.
The main disadvantage is that this is in two steps and after the first one you won't have some indexes that may be required for good performance. If that is a problem you can try the copy to a new table approach, using just the unique and FK indexes at first for the new table, then adding the non-unique ones later.
This is only in MySQL 5.6, but the feature request in http://bugs.mysql.com/bug.php?id=59214 increases the speed with which insert buffer changes are flushed to disk and limits how much space the insert buffer can take in the buffer pool. This can be a performance limit for big jobs. (The insert buffer is used to cache changes to secondary index pages.)
We know that this is still frustratingly slow sometimes, and that a true online ALTER TABLE is very highly desirable.
This is my personal opinion. For an official Oracle view, contact an Oracle public relations person.
James Day, MySQL Senior Principal Support Engineer, Oracle
Usually, a slow insert means that there are many indexes, so I would suggest reconsidering your indexing.
Michael's solution may speed things up a bit, but perhaps you should have a look at the database and try to break the big table into smaller ones. Take a look at this: link. Normalizing your database tables may save you loads of time in the future.
Ran into an interesting problem with a MySQL table I was building as a temporary table for reporting purposes.
I found that if I didn't specify a storage engine, the DROP TEMPORARY TABLE command would hang for up to half a second.
If I defined my table as ENGINE = MEMORY this short hang would disappear.
As I have a solution to this problem (using MEMORY tables), my question is why would a temporary table take a long time to drop? Do they not use the MEMORY engine by default? It's not even a very big table, a couple of hundred rows with my current test data.
Temporary tables, by default, will be created wherever the MySQL configuration tells the server to put them, typically /tmp or somewhere else on disk. You can set this location (and even multiple locations) to a RAM disk such as /dev/shm.
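For example, a minimal my.cnf sketch (tmpdir is not dynamic, so this needs a server restart; multiple locations are colon-separated on Unix and used in round-robin fashion):
[mysqld]
tmpdir = /dev/shm:/tmp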
Hope this helps!
If the temporary table is created with the InnoDB engine, which may be the case if your default storage engine is InnoDB, and the InnoDB buffer pool is large, DROP TEMPORARY TABLE may take some time, since it needs to scan the buffer pool to discard the table's pages.
This was mentioned in a comment to this Stack Overflow question.
Note also that DROP (TEMPORARY) TABLE takes a lock that may have a huge impact on your whole server. See for example this.
At my work, we recently had a server slow down because we had an InnoDB buffer pool of 80 GB and some SQL requests had been optimized using InnoDB temporary tables.
About 100 such DROP TEMPORARY TABLE requests every 5 minutes were enough to have a huge impact. The problem was hard to debug, because the slow query log would tell us that single-row UPDATEs by primary key on some other table were taking two seconds, and there was an enormous number of such updates. But even though most query time was spent on these updates, the real cause was the DROP TEMPORARY TABLE requests.
We have a large MyISAM table that is used to archive old data. This archiving is performed every month, and apart from those occasions data is never written to the table. Is there any way to "tell" MySQL that this table is read-only, so that MySQL might optimize read performance on it? I've looked at the MEMORY storage engine, but the problem is that this table is so large that it would take up a large portion of the server's memory, which I don't want.
Hope my question is clear enough; I'm a novice when it comes to DB administration, so any input or suggestions are welcome.
Instead of un- and re-compressing the history table: if you want to access the history through a single table, you can use a MERGE table to combine the compressed read-only history tables.
Thus assuming you have an active table and the compressed history tables with the same table structure, you could use the following scheme:
The tables:
compressed_month_1
compressed_month_2
active_month
Create a merge table:
CREATE TABLE history_merge LIKE active_month;
ALTER TABLE history_merge
  ENGINE = MRG_MyISAM
  UNION = (compressed_month_1, compressed_month_2);
After a month, compress the active_month table and rename it to compressed_month_3. Now the tables are:
compressed_month_1
compressed_month_2
compressed_month_3
active_month
and you can update the merge table:
ALTER TABLE history_merge
  UNION = (compressed_month_1, compressed_month_2, compressed_month_3);
Yes, you can compress MyISAM tables.
Here is the documentation for 5.0: http://dev.mysql.com/doc/refman/5.0/en/myisampack.html
You could use myisampack to generate fast, compressed, read-only tables.
(Not really sure if that hurts performance if you have to return most of the rows; testing is advisable; there could be a trade-off between compression and disk reads).
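For example, a sketch with a placeholder data directory and table name; myisampack runs on the table's index file, and myisamchk -rq rebuilds the indexes afterwards (do this while the server is not using the table, and run FLUSH TABLES when done):
~# myisampack /var/lib/mysql/mydb/archive_2012.MYI
~# myisamchk -rq /var/lib/mysql/mydb/archive_2012.MYI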
I'd say: also certainly apply the usual:
Provide appropriate indexes (based on the most used queries)
Have a look at clustering the data (again if this is useful given the queries)