We have a very large MySQL table that uses MyISAM. Whenever we run the OPTIMIZE TABLE command, the table is locked and performance suffers. The table is not read-only, so creating temporary tables and swapping them may not work for us. We are also unable to partition the table.
Is there any other way or tool to achieve the OPTIMIZE TABLE functionality without degrading performance? Any suggestion would be of great help.
Thanks in advance.
http://dev.mysql.com/doc/refman/5.5/en/optimize-table.html
For InnoDB tables, OPTIMIZE TABLE is mapped to ALTER TABLE, which
rebuilds the table (...)
Therefore, I would not expect any improvement from switching to InnoDB, as Quassnoi probably suggests.
By definition, OPTIMIZE TABLE needs some exclusive access to the table, hence the degraded performance while it runs.
Nevertheless, there may be steps you can take to reduce the time taken by OPTIMIZE, depending on in what way your table is "huge":
if your table has many fields, it might need to be normalized. Alternatively, you might want to split it vertically, spreading its columns across several "narrower" tables linked by one-to-one relations.
if your table has many records, implement "manual" partitioning in your application code. A simple step would be to create an "archive" table that holds rarely updated records; this way you only need to optimize the smaller, non-archive table (see the sketch below).
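For illustration, here is a minimal sketch of the archive approach; the table name (orders), the date column (created_at) and the cut-off date are all made up:

-- Create an archive table with the same structure as the live table
CREATE TABLE orders_archive LIKE orders;

-- Move rarely updated (old) rows into the archive; in practice you would
-- do this in smaller batches to limit how long the tables stay locked
INSERT INTO orders_archive
SELECT * FROM orders WHERE created_at < '2012-01-01';

DELETE FROM orders WHERE created_at < '2012-01-01';

-- From now on only the much smaller live table needs regular optimization
OPTIMIZE TABLE orders;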
The OPTIMIZE TABLE command locks the table, which degrades performance.
You can use the pt-online-schema-change tool from the Percona Toolkit instead; it does not lock the table while it rebuilds it.
See this link:
https://www.percona.com/doc/percona-toolkit/2.1/pt-online-schema-change.html
I was working on a table with roughly 50 million rows (about 2 GB in size) and needed to improve its performance. When I added an index on a column through the phpMyAdmin panel, the table got locked, which held up all queries on that table in a queue and ultimately forced me to kill all of them and restart. (And yeah, I forgot to mention I was doing this on production. My bad!)
When I did some research I found solutions like creating a duplicate table, but is there any alternative method?
You may follow these steps (a sketch follows the list):
Create a temp table with the same structure plus the new index.
Create triggers on the original table (for inserts, updates, deletes) so that changes are replicated to the temp table.
Migrate the data in small batches.
When done, rename the temp table to the original name and drop the old table.
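Here is a rough sketch of those steps (this is essentially what pt-online-schema-change automates); the table name mytable and its columns (id, col1, col2) are assumptions made up for the example:

-- 1. Temp table with the same structure plus the new index
CREATE TABLE mytable_new LIKE mytable;
ALTER TABLE mytable_new ADD INDEX idx_col1 (col1);

-- 2. Triggers keep the temp table in sync while the copy runs
CREATE TRIGGER mytable_ai AFTER INSERT ON mytable FOR EACH ROW
  REPLACE INTO mytable_new (id, col1, col2) VALUES (NEW.id, NEW.col1, NEW.col2);
CREATE TRIGGER mytable_au AFTER UPDATE ON mytable FOR EACH ROW
  REPLACE INTO mytable_new (id, col1, col2) VALUES (NEW.id, NEW.col1, NEW.col2);
CREATE TRIGGER mytable_ad AFTER DELETE ON mytable FOR EACH ROW
  DELETE FROM mytable_new WHERE id = OLD.id;

-- 3. Copy existing rows in small batches (repeat with increasing id ranges)
INSERT IGNORE INTO mytable_new (id, col1, col2)
SELECT id, col1, col2 FROM mytable WHERE id BETWEEN 1 AND 100000;

-- 4. Swap the tables and drop the old one (its triggers go with it)
RENAME TABLE mytable TO mytable_old, mytable_new TO mytable;
DROP TABLE mytable_old;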
But as you said, you are doing this in production, so you need to account for live traffic while dropping one table and creating another.
I have a table in my MySQL database with around 5M rows. Inserting rows into the table is too slow because MySQL updates the indexes while inserting. How can I stop index updates during the inserts and do the indexing separately later?
Thanks
Kamrul
Sounds like your table might be over-indexed. Maybe post your table definition here so we can have a look.
You have two choices:
Keep your current indexes but remove the unused ones. If you have 3 indexes on a table, every single write to the table results in 3 index writes. An index only helps during reads, so you might want to remove indexes no query uses. During a load, the indexes are updated, which slows the load down.
Drop your indexes before the load, then recreate them afterwards (see the sketch below). You can drop the indexes, insert the data, then rebuild. The rebuild might take longer than the slow inserts, and you will have to rebuild all indexes one by one. Also, recreating a unique index can fail if duplicates were loaded while the index was absent.
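A minimal sketch of the drop-then-rebuild approach; the table, file, column, and index names are made up:

-- Drop secondary indexes before the bulk load
ALTER TABLE mytable DROP INDEX idx_col1, DROP INDEX idx_col2;

-- The bulk load now runs without per-row index maintenance
LOAD DATA INFILE '/tmp/data.csv' INTO TABLE mytable;

-- Recreate the indexes afterwards; recreating a UNIQUE index will fail
-- if duplicate values slipped in while the index was absent
ALTER TABLE mytable ADD INDEX idx_col1 (col1), ADD INDEX idx_col2 (col2);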
Now, I suggest you take a good look at the indexes on the table and remove any that are not used by your queries. Then try both approaches and see what works for you. I know of no way in MySQL to disable indexes entirely, since the inserted values need to be written to the indexes' internal structures.
Another thing you might want to try is to split the I/O over multiple drives, i.e. partition your table over several drives to get some hardware performance in place (see the sketch below).
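As an illustration of spreading the I/O, here is a sketch using per-partition data and index directories; it assumes a MyISAM table, a MySQL version with partitioning enabled, and made-up column names and mount points:

CREATE TABLE big_table (
  id BIGINT NOT NULL,
  created DATE NOT NULL,
  payload VARCHAR(255)
) ENGINE=MyISAM
PARTITION BY RANGE (YEAR(created)) (
  PARTITION p2011 VALUES LESS THAN (2012)
    DATA DIRECTORY = '/mnt/disk1/mysql' INDEX DIRECTORY = '/mnt/disk1/mysql',
  PARTITION p2012 VALUES LESS THAN (2013)
    DATA DIRECTORY = '/mnt/disk2/mysql' INDEX DIRECTORY = '/mnt/disk2/mysql',
  PARTITION pmax VALUES LESS THAN MAXVALUE
    DATA DIRECTORY = '/mnt/disk3/mysql' INDEX DIRECTORY = '/mnt/disk3/mysql'
);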
Our MySQL MyISAM table "Table1" receives approximately 70% SELECT, 13% UPDATE and 0.67% INSERT statements.
There is one count_column (INT) which is incremented, by primary key, via those UPDATE statements.
Updating count_column puts the table's SELECT queries into the "Waiting for table level lock" state.
So, would separating count_column into another table reduce the "Waiting for table level lock" waits or not?
I would also need the separated column in SELECT statements, via a JOIN.
Thanks, Yogs
AFAIK your locking problem is the COUNT with INSERT, not the UPDATE itself - but you must have a huge bunch of SELECTs. Your question is lacking quite a few details...
COUNT is really well optimized on MyISAM tables; if you encounter problems with it, you could consider a count estimate or a memory table holding the value. But note that MyISAM stores an exact row count that the storage engine can return extremely quickly, so you may even have slowed MySQL down with your solution. "Slow" COUNT is an issue for engines like InnoDB because of their transactional nature.
One other thing to consider: storing a count in a column of the table itself means an additional column for every row, which is quite wasteful.
And if you are using triggers to accomplish that you should be aware of http://dev.mysql.com/doc/refman/5.0/en/faqs-triggers.html#qandaitem-B-5-1-12 :)
Moving the frequently updated cells to another table will greatly reduce the number of locks on the main table and speed up SELECTs against it; a sketch follows below. Converting the table to InnoDB can also help (provided you are not using full-text indexes, which are still not supported for InnoDB in MySQL 5.5), since it uses row-level locks instead of table-level ones. If you have a lot of queries, take a look at this article about implementing efficient counters.
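A rough sketch of separating the counter; the counter table and its columns are invented here, and I am assuming Table1 has an id primary key:

-- A small, separate table that absorbs the frequent UPDATE traffic
CREATE TABLE item_counters (
  item_id INT NOT NULL PRIMARY KEY,
  view_count INT NOT NULL DEFAULT 0
);

-- The hot update now locks only the small counter table
UPDATE item_counters SET view_count = view_count + 1 WHERE item_id = 42;

-- SELECTs join the counter back in when needed
SELECT t.*, c.view_count
FROM Table1 AS t
JOIN item_counters AS c ON c.item_id = t.id
WHERE t.id = 42;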
We have a huge database and adding a new column is taking too long. Is there any way to speed things up?
Unfortunately, there's probably not much you can do. When adding a new column, MySQL makes a copy of the table and inserts the existing data into it. You may find it faster to do:
CREATE TABLE new_table LIKE old_table;
ALTER TABLE new_table ADD COLUMN (column definition);
INSERT INTO new_table(old columns) SELECT * FROM old_table;
RENAME TABLE old_table TO tmp, new_table TO old_table;
DROP TABLE tmp;
This hasn't been my experience, but I've heard others have had success. You could also try disabling indexes on new_table before the insert and re-enabling them later (see the sketch below). Note that in this case, you need to be careful not to lose any data that may be inserted into old_table during the transition.
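For the disable/enable idea, a minimal sketch; note that ALTER TABLE ... DISABLE KEYS suspends only non-unique indexes and only works for MyISAM tables, so the engine here is an assumption:

-- Suspend maintenance of non-unique indexes during the bulk copy (MyISAM only)
ALTER TABLE new_table DISABLE KEYS;

INSERT INTO new_table(old columns) SELECT * FROM old_table;

-- Rebuild the non-unique indexes in a single pass
ALTER TABLE new_table ENABLE KEYS;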
Alternatively, if your concern is impacting users during the change, check out pt-online-schema-change which makes clever use of triggers to execute ALTER TABLE statements while keeping the table being modified available. (Note that this won't speed up the process however.)
There are four main things that you can do to make this faster:
If you are using innodb_file_per_table, the original table may be highly fragmented in the filesystem, so you can try defragmenting it first.
Make the buffer pool as big as sensible, so more of the data, particularly the secondary indexes, fits in it.
Make innodb_io_capacity high enough, perhaps higher than usual, so that insert buffer merging and flushing of modified pages will happen more quickly. Requires MySQL 5.1 with InnoDB plugin or 5.5 and later.
MySQL 5.1 with the InnoDB plugin and MySQL 5.5 and later support fast ALTER TABLE. One of the things it makes a lot faster is adding or rebuilding indexes that are both non-unique and not part of a foreign key. So you can do this:
A. ALTER TABLE ADD your column, DROP your non-unique indexes that aren't in FKs.
B. ALTER TABLE ADD back your non-unique, non-FK indexes.
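For example, a sketch with made-up table, column, and index names:

-- Step A: add the column and drop the non-unique, non-FK indexes in one rebuild
ALTER TABLE orders
  ADD COLUMN notes TEXT,
  DROP INDEX idx_customer,
  DROP INDEX idx_created;

-- Step B: add the non-unique, non-FK indexes back using fast index creation
ALTER TABLE orders
  ADD INDEX idx_customer (customer_id),
  ADD INDEX idx_created (created_at);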
This should provide these benefits:
a. Less use of the buffer pool during step A, because the buffer pool only needs to hold some of the indexes, the ones that are unique or in FKs. Indexes are randomly updated during this step, so performance becomes much worse if they don't fully fit in the buffer pool; dropping the rest gives your rebuild a better chance of staying fast.
b. The fast alter table rebuilds the index by sorting the entries then building the index. This is faster and also produces an index with a higher page fill factor, so it'll be smaller and faster to start with.
The main disadvantage is that this is in two steps and after the first one you won't have some indexes that may be required for good performance. If that is a problem you can try the copy to a new table approach, using just the unique and FK indexes at first for the new table, then adding the non-unique ones later.
It's only in MySQL 5.6, but the feature request in http://bugs.mysql.com/bug.php?id=59214 increases the speed with which insert buffer changes are flushed to disk and limits how much space they can take in the buffer pool. This can be a performance limit for big jobs; the insert buffer is used to cache changes to secondary index pages.
We know that this is still frustratingly slow sometimes, and that a true online ALTER TABLE is very highly desirable.
This is my personal opinion. For an official Oracle view, contact an Oracle public relations person.
James Day, MySQL Senior Principal Support Engineer, Oracle
Usually this kind of slow insert means that there are many indexes, so I would suggest reconsidering your indexing.
Michael's solution may speed things up a bit, but perhaps you should have a look at the database and try to break the big table into smaller ones. Take a look at this: link. Normalizing your database tables may save you loads of time in the future.
We have a large MyISAM table that is used to archive old data. This archiving is performed every month, and except on those occasions data is never written to the table. Is there any way to "tell" MySQL that this table is read-only, so that MySQL might optimize read performance for it? I've looked at the MEMORY storage engine, but the problem is that this table is so large that it would take up a large portion of the server's memory, which I don't want.
Hope my question is clear enough, I'm a novice when it comes to db administration so any input or suggestions are welcome.
Instead of un- and re-compressing the history table: if you want to access the history as a single table, you can use a MERGE table to combine the compressed read-only history tables.
Thus assuming you have an active table and the compressed history tables with the same table structure, you could use the following scheme:
The tables:
compressed_month_1
compressed_month_2
active_month
Create a merge table:
create table history_merge like active_month;
alter table history_merge
ENGINE=MRG_MyISAM
union (compressed_month_1,compressed_month_2);
After a month, compress the active_month table and rename it to compressed_month_3. Now the tables are:
compressed_month_1
compressed_month_2
compressed_month_3
active_month
and you can update the merge table:
alter table history_merge
union (compressed_month_1, compressed_month_2, compressed_month_3);
Yes, you can compress MyISAM tables.
Here is the doc for 5.0: http://dev.mysql.com/doc/refman/5.0/en/myisampack.html
You could use myisampack to generate fast, compressed, read-only tables.
(I'm not really sure whether that hurts performance when you have to return most of the rows; testing is advisable, as there could be a trade-off between compression and disk reads.)
I'd also say: certainly apply the usual:
Provide appropriate indexes (based on the most used queries)
Have a look at clustering the data (again if this is useful given the queries)