I have a MyISAM table with 125M records. I added 25M more records to it via:
ALTER TABLE x DISABLE KEYS;
INSERT INTO x SELECT * FROM y;
ALTER TABLE x ENABLE KEYS;
Currently ALTER TABLE x ENABLE KEYS is in the "Repair with keycache" state. How fast is this repair operation? Is it at least as fast as it would have been if I hadn't disabled the keys and had let rows be added with indexes updated on the fly, or is it slower?
If I kill the query now, DROP all the indexes and then re-create them again to force repair by sort (my buffer sizes are large enough) would I risk losing any data?
If you kill the query while the ALTER TABLE operation is in progress, you risk losing any data that was added to the table after the operation began. Likewise, dropping and re-creating the indexes would risk losing any data added to the table after the indexes were dropped.
In general, it is a good idea to avoid making any changes to the table while the ALTER TABLE operation is in progress. If you are concerned about the speed of the operation, it may be better to let it complete rather than trying to interrupt it.
In terms of speed, whether or not you disabled the keys before inserting the new rows will not affect how fast this ALTER TABLE runs. The speed of the operation depends on several factors, including the size of the table and the speed of the underlying hardware.
If you want to force the ALTER TABLE operation to use the "Repair by sort" method, you can do so by setting the myisam_repair_threads system variable to a value greater than 1 before running the ALTER TABLE operation. This will cause the operation to use the "Repair by sort" method, which may be faster in some cases. However, keep in mind that this will also use more system resources, so it may not be appropriate for all situations.
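A minimal sketch of that suggestion (`x` is the table from the question; the specific buffer value is an assumption, size it to your data):
SET SESSION myisam_repair_threads = 2;                    -- a value > 1 requests parallel repair by sorting
SET SESSION myisam_sort_buffer_size = 256 * 1024 * 1024;  -- assumed size; must be big enough for repair by sort
ALTER TABLE x ENABLE KEYS;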
It is always a good idea to back up your data before making any changes to a table, in case something goes wrong. This way, you can restore the data from the backup if needed.
Is there a way to perform ALTER TABLE in MySQL, telling the server to skip creating a backup of the table first? I have a backup of the table already and I'm doing some tests on it (adding indexes), so I don't care if the table gets corrupted in the process. I'll just restore it from the backup. But what I do care about is for the ALTER TABLE to finish quickly, so I can see the test results.
Given that I have a big MyISAM table (700 GB), it really isn't an option to wait a couple of hours for MySQL to first finish creating a backup of the original table before actually adding an index to it.
It's not doing a backup; it is building the new version. (The existing table serves as a backup in case of a crash.)
With InnoDB, there are many flavors of ALTER TABLE -- some of which take essentially zero time, regardless of the size of the table. MyISAM (mostly) does it the brute force way: create an empty table with the new schema; copy all the data and build all the indexes; swap tables. For some alters, InnoDB must also do it the brute force way: for example, changing the PRIMARY KEY.
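For illustration, a hedged sketch of those flavors in MySQL 8.0 syntax (`t`, `c`, and `idx_c` are hypothetical):
ALTER TABLE t ADD COLUMN c INT, ALGORITHM=INSTANT;     -- metadata change only, essentially zero time
ALTER TABLE t ADD INDEX idx_c (c), ALGORITHM=INPLACE;  -- builds the index without copying the table
ALTER TABLE t DROP PRIMARY KEY, ALGORITHM=COPY;        -- brute force: full copy and rebuild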
Back when I was working heavily with MyISAM tables, I always had a cronjob which ran
~# mysqlanalyze -o database
I know that MyISAM benefits from this in certain ways, e.g. fragmentation and whatnot.
Now, when running the same command on a database where the majority of tables are InnoDB, I wonder if this "does any good" to the tables and is considered a good practice to do every now and then, or if it's rather counterproductive. I'm reading a lot of:
Table does not support optimize, doing recreate + analyze instead
which sounds expensive in terms of disk I/O / CPU time?!
I would appreciate some input on this.
https://dev.mysql.com/doc/refman/8.0/en/optimize-table.html says:
For InnoDB tables, OPTIMIZE TABLE is mapped to ALTER TABLE ... FORCE, which rebuilds the table to update index statistics and free unused space in the clustered index.
This does do some good in cases when you had too much fragmentation. Pages will be filled more efficiently, indexes will be rebuilt, and disk space occupied by the table will be reduced if you use innodb_file_per_table (which is the default in recent versions).
It does take time, depending on the size of your table. It will lock the table while it's running. It will require extra disk space while it's running, as it creates a copy of the table.
Running OPTIMIZE TABLE on an InnoDB table is usually not necessary to do frequently - only after you do a lot of inserts/updates/deletes against the table in a way that could result in fragmentation.
ANALYZE TABLE has much less impact for InnoDB. It doesn't require building a copy of the table; it's a read-only action. It just reads a random sample of pages from the table and uses that to estimate the number of rows and the average row size, and it updates statistics about the indexes to guide the query optimizer. It is safe to run anytime; it locks the table for a moment, but the lock time doesn't grow with the size of the table.
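For comparison, with a hypothetical table `orders`:
ANALYZE TABLE orders;   -- quick: samples pages and updates index statistics only
OPTIMIZE TABLE orders;  -- expensive: rebuilds the table (InnoDB reports "recreate + analyze")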
Don't bother. InnoDB almost never needs either ANALYZE or OPTIMIZE; don't waste your time unless you have identified a need.
An exception is a FULLTEXT index on an InnoDB table. Such can benefit from DROP INDEX, then ADD INDEX.
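A sketch of that exception, with hypothetical table, index, and column names:
ALTER TABLE articles DROP INDEX ft_body;
ALTER TABLE articles ADD FULLTEXT INDEX ft_body (body);  -- rebuilds the FULLTEXT index from scratch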
If you are "reloading" the table from new data, then the following avoids downtime:
CREATE TABLE new LIKE real;
-- load `new` here (e.g. INSERT INTO new ... SELECT ... or LOAD DATA INFILE)
RENAME TABLE real TO old, new TO real; -- fast, atomic
DROP TABLE old;
(Caveat: The above technique probably has issues if there are FOREIGN KEYS.)
I have a MyISAM table that has to be cleaned once in a while (a total of ~5M rows out of 12M rows are being deleted).
Afterwards, I have to optimize table, and I know that OPTIMIZE TABLE goes faster if I drop indexes first.
The problem is,
ALTER TABLE t1 DISABLE KEYS;
-- here...
OPTIMIZE TABLE t1;
-- ...or here
ALTER TABLE t1 ENABLE KEYS;
At the points marked above, MySQL may decide to serve some other queries, which leads to multiple slow non-indexed table scans and delays the further steps.
So how do I lock the table for other threads?
You can use LOCK TABLES:
LOCK TABLES t1 WRITE;
But, as noted:
If you use ALTER TABLE on a locked table, it may become unlocked. For example, if you attempt a second ALTER TABLE operation, the result may be an error Table 'tbl_name' was not locked with LOCK TABLES. To handle this, lock the table again prior to the second alteration. See also Section C.5.7.1, “Problems with ALTER TABLE”.
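Putting that together, a non-authoritative sketch of the whole maintenance sequence, re-locking after each ALTER as the manual suggests:
LOCK TABLES t1 WRITE;
ALTER TABLE t1 DISABLE KEYS;
LOCK TABLES t1 WRITE;   -- re-acquire: the ALTER may have unlocked the table
OPTIMIZE TABLE t1;
LOCK TABLES t1 WRITE;   -- re-acquire again before the final ALTER
ALTER TABLE t1 ENABLE KEYS;
UNLOCK TABLES;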
MyISAM lacks transactional support, so this is impossible to do reliably (there are a lot of DDL-level operations that will silently unlock the table). That's why you should schedule maintenance like this at night, so you don't catch the extra load during busy hours.
You have an XY problem though - you're asking about a self-devised solution to a problem you're having, instead of asking for a proper solution to the problem.
Alternative options to solve the underlying problem are:
Doing the cleanup far more frequently - in smaller batches the load shouldn't really be noticeable
Doing the OPTIMIZE step only at night and the cleanups frequently during the day.
Since you apparently consider downtime acceptable (you suggest locking the table yourself), you could even consider renaming the table temporarily for the duration - it's a prehistoric way of ensuring nobody else touches the table, but it'll work (see the sketch below).
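A sketch of that rename trick (`t1_maint` is a hypothetical name):
RENAME TABLE t1 TO t1_maint;   -- other sessions now fail fast instead of scanning
-- run the cleanup and OPTIMIZE against t1_maint here
RENAME TABLE t1_maint TO t1;   -- put it back; RENAME is fast and atomic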
If you describe your actual problem in more detail we might be able to give better solutions.
I know I should use engine=MEMORY to make the table in memory and engine=INNODB to make the table transaction safe. However, how can I achieve both objectives? I tried engine=MEMORY, INNODB, but I failed. My purpose is to access tables fast and allow multiple threads to change contents of tables.
You haven't stated your goals above. I guess you're looking for good performance, and you also seem to want the table to be transactional. Your only option really is InnoDB. As long as you have configured InnoDB to use enough memory to hold your entire table (with innodb_buffer_pool_size), and there is not excessive pressure from other InnoDB tables on the same server, the data will remain in memory. If you're concerned about write performance (and again barring other uses of the same system) you can reduce durability to drastically increase write performance by setting innodb_flush_log_at_trx_commit = 0 and disabling binary logging.
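A sketch of those settings as runtime statements (MySQL 5.7+; the buffer pool size is an assumption - make it large enough to hold your data):
SET GLOBAL innodb_buffer_pool_size = 8 * 1024 * 1024 * 1024;  -- 8 GB; resizable online in 5.7+
SET GLOBAL innodb_flush_log_at_trx_commit = 0;                -- trades durability for write speed
-- binary logging can't be toggled at runtime; disable it in the server config (skip-log-bin)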
Using any sort of triggers with temporary tables will be a mess to maintain, and won't give you any benefits of transactionality on the temporary tables.
You are asking for a way to create the table with two (or more) engines; that is not possible with MySQL.
However, I will guess that you want to use MEMORY because you don't think InnoDB will be fast enough for your needs. I think InnoDB is pretty fast and will probably be enough, but if you really need it, you could try creating two tables:
table1 (MEMORY) <-- here is where you run all the SELECTs
table2 (InnoDB) <-- here you run the UPDATEs, INSERTs, DELETEs, etc., and add a TRIGGER so that when this one is updated, table1 gets the same modification (see the sketch below).
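A minimal sketch of that setup with hypothetical names (you would need UPDATE and DELETE triggers as well to keep the two in sync):
CREATE TABLE t_disk (id INT PRIMARY KEY, val VARCHAR(100)) ENGINE=InnoDB;
CREATE TABLE t_mem  (id INT PRIMARY KEY, val VARCHAR(100)) ENGINE=MEMORY;
CREATE TRIGGER t_disk_ai AFTER INSERT ON t_disk
FOR EACH ROW INSERT INTO t_mem (id, val) VALUES (NEW.id, NEW.val);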
As far as I know, there are two ways.
1st way
Create a temporary table (much like an in-memory table, with the small difference that it gets dropped when the session ends):
CREATE TEMPORARY TABLE sample (id INT) ENGINE=InnoDB;
2nd way
Create two tables, one with the MEMORY engine and the other with InnoDB (or BDB).
First insert all the data into your InnoDB table, then use a trigger to copy the data into the MEMORY table.
If you want to empty the data in the InnoDB table, you can do it with the same trigger.
You can achieve this using events as well.
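A sketch of the event-based variant, reusing the hypothetical t_disk/t_mem tables from the sketch above (requires the event scheduler to be on):
SET GLOBAL event_scheduler = ON;
CREATE EVENT refresh_t_mem
ON SCHEDULE EVERY 1 MINUTE
DO REPLACE INTO t_mem SELECT * FROM t_disk;  -- periodically re-copy rows into the MEMORY table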
We have a huge database, and adding a new column to a table is taking too long. Any way to speed things up?
Unfortunately, there's probably not much you can do. When adding a new column, MySQL makes a copy of the table and inserts the new data there. You may find it faster to do
CREATE TABLE new_table LIKE old_table;
ALTER TABLE new_table ADD COLUMN (column definition);
INSERT INTO new_table(old columns) SELECT * FROM old_table;
RENAME TABLE old_table TO tmp, new_table TO old_table;
DROP TABLE tmp;
This hasn't been my experience, but I've heard others have had success. You could also try disabling indices on new_table before the insert and re-enabling later. Note that in this case, you need to be careful not to lose any data which may be inserted into old_table during the transition.
Alternatively, if your concern is impacting users during the change, check out pt-online-schema-change which makes clever use of triggers to execute ALTER TABLE statements while keeping the table being modified available. (Note that this won't speed up the process however.)
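A typical invocation, per Percona's documentation, with hypothetical database, table, and column names:
pt-online-schema-change --alter "ADD COLUMN c1 INT" D=mydb,t=big_table --execute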
There are four main things that you can do to make this faster:
If using innodb_file_per_table, the original table may be highly fragmented in the filesystem, so you can try defragmenting it first.
Make the buffer pool as big as sensible, so more of the data, particularly the secondary indexes, fits in it.
Make innodb_io_capacity high enough, perhaps higher than usual, so that insert buffer merging and flushing of modified pages will happen more quickly. Requires MySQL 5.1 with InnoDB plugin or 5.5 and later.
MySQL 5.1 with the InnoDB plugin and MySQL 5.5 and later support fast alter table. One of the things it makes a lot faster is adding or rebuilding indexes that are both not unique and not in a foreign key. So you can do this:
A. ALTER TABLE ADD your column, DROP your non-unique indexes that aren't in FKs.
B. ALTER TABLE ADD back your non-unique, non-FK indexes.
This should provide these benefits:
a. Less use of the buffer pool during step A, because the buffer pool will only need to hold some of the indexes - the ones that are unique or in FKs. Indexes are updated in random order during this step, so performance becomes much worse if they don't fully fit in the buffer pool. So there's more chance of your rebuild staying fast.
b. The fast alter table rebuilds the index by sorting the entries then building the index. This is faster and also produces an index with a higher page fill factor, so it'll be smaller and faster to start with.
The main disadvantage is that this is in two steps and after the first one you won't have some indexes that may be required for good performance. If that is a problem you can try the copy to a new table approach, using just the unique and FK indexes at first for the new table, then adding the non-unique ones later.
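A sketch of the two-step approach from A/B above, with hypothetical table, column, and index names:
-- Step A: add the column and drop the non-unique, non-FK indexes in one rebuild
ALTER TABLE big_table ADD COLUMN new_col INT NULL, DROP INDEX idx_a, DROP INDEX idx_b;
-- Step B: add those indexes back; fast index creation builds them by sort
ALTER TABLE big_table ADD INDEX idx_a (col_a), ADD INDEX idx_b (col_b);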
It's only in MySQL 5.6, but the feature request in http://bugs.mysql.com/bug.php?id=59214 increases the speed with which insert buffer changes are flushed to disk and limits how much space the insert buffer can take in the buffer pool. This can be a performance limit for big jobs. (The insert buffer is used to cache changes to secondary index pages.)
We know that this is still frustratingly slow sometimes, and that a true online alter table is very highly desirable.
This is my personal opinion. For an official Oracle view, contact an Oracle public relations person.
James Day, MySQL Senior Principal Support Engineer, Oracle
Usually, a new column insert being this slow means that there are many indexes, so I would suggest reconsidering your indexing.
Michael's solution may speed things up a bit, but perhaps you should have a look at the database and try to break the big table into smaller ones. Take a look at this: link. Normalizing your database tables may save you loads of time in the future.