Query timeout for changing storage engine of MySQL table

I have a table in MySQL with 700 million rows. I want to change its storage engine from InnoDB to MyISAM, but the SQL query to do so takes a very long time to execute.
Below is the query:
ALTER TABLE table ENGINE=MyISAM
In phpMyAdmin and MySQL Workbench this query times out.
Is this query supposed to take a long time, given that I have a very large amount of data? If so, what do I need to do to make it execute successfully?
Note: I have decided to switch to MyISAM because there will be many reads and very few writes on this table.

Maybe you can start by creating a new, empty MyISAM table and then inserting the data from the original table?
I don't have a table big enough to test this at that scale; on my biggest table it takes about the same time either way.
It's important to look at the indexes, because rebuilding them sometimes takes a very long time.
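A rough sketch of that approach (big_table stands in for the real table name; CREATE TABLE ... LIKE copies the schema, including indexes):
CREATE TABLE big_table_copy LIKE big_table;
ALTER TABLE big_table_copy ENGINE=MyISAM;           -- switching the empty copy is fast
-- Optionally drop secondary indexes here and re-add them after the load,
-- since index maintenance is the slow part of a 700-million-row insert.
INSERT INTO big_table_copy SELECT * FROM big_table;
RENAME TABLE big_table TO big_table_old, big_table_copy TO big_table;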

Related

MySQL full-text search is slow as table grows

MySQL simple fulltext search is getting slower as table size grows.
When I run a query like the one below using the full-text index, it takes about 90 seconds to execute.
SELECT * FROM project_fulltext_indices WHERE match(search_text) against ('abcdefghijklmnopq') limit 1;
The table has about 4G rows, and its size is about 9.4GB.
The table mainly contains source code (English).
It used to be much faster when the table was much smaller.
Does anyone have an idea how to improve the performance?
You can use MySQL indexes.
An index is like placing a bookmark in a book.
Create an index on project_fulltext_indices.
Also take note: avoid applying MySQL functions to indexed columns when querying large data sets, for faster results.
If I am correct, MySQL indexes stop working when a MySQL function is applied to the column.
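As an illustration of that last point (the question's schema already has a full-text index, so this is only a sketch):
-- An index can be used here: the column is fed to MATCH ... AGAINST directly.
SELECT * FROM project_fulltext_indices
WHERE MATCH(search_text) AGAINST ('abcdefghijklmnopq') LIMIT 1;
-- No index can be used here: wrapping the column in a function forces a full scan.
SELECT * FROM project_fulltext_indices
WHERE UPPER(search_text) LIKE '%ABCDEFGHIJKLMNOPQ%' LIMIT 1;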
I created a copy of the table with the same schema, inserted all the rows, and created the full-text index, then renamed the copied table to the original name.
After that, the full-text search went from 90 seconds to 50 ms (more than 1000 times faster).
I also tried running "OPTIMIZE TABLE project_fulltext_indices", but it takes a long time; I waited more than an hour and gave up. Worse, while the table was being optimized it appeared to be locked, and the running web services stopped working.
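A sketch of that rebuild-and-swap (CREATE TABLE ... LIKE should carry the full-text index over; verify on your version):
CREATE TABLE project_fulltext_indices_new LIKE project_fulltext_indices;
INSERT INTO project_fulltext_indices_new SELECT * FROM project_fulltext_indices;
-- If the full-text index was not copied, add it now (index name is made up):
-- ALTER TABLE project_fulltext_indices_new ADD FULLTEXT INDEX ft_text (search_text);
RENAME TABLE project_fulltext_indices TO project_fulltext_indices_old,
             project_fulltext_indices_new TO project_fulltext_indices;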

How to reduce index size of a table in mysql using innodb engine?

I am facing a performance issue in MySQL due to a large index on my table. The index has grown to 6GB and my instance runs with 32GB of memory. The majority of rows are no longer required in that table after a few hours and can be removed selectively. But removing them is time-consuming and doesn't reduce the index size.
Please suggest a solution to manage this index.
You can optimize your table to rebuild the index and get back space that is not reclaimed even after deletion:
optimize table table_name;
But since your table is bulky it will be locked during OPTIMIZE TABLE, and you also face the problem of removing old data when you only need the last few hours of it. So you can proceed as below (Step 1 is sketched after the note at the end):
Step 1: During night hours, or whenever there is less traffic on your DB, first rename your main table and create a new table with the same name. Then insert the last few hours of data from the old table into the new one.
This way you remove the unwanted data, and the new table comes out optimized as a side effect.
Step 2: To avoid this issue in the future, create a stored procedure that executes once per day during night hours and either deletes data up to the previous day (as per your requirement) from this table or moves it to a historical table.
Step 3: Since your table now only ever holds a single day's data, you can run an OPTIMIZE TABLE statement to rebuild it and reclaim space easily.
Note: a DELETE statement will not rebuild the index and will not free space on the server. For that you need to optimize the table, which can be done in various ways, e.g. with an ALTER statement or with OPTIMIZE TABLE.
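A minimal sketch of Step 1 (the table name, timestamp column, and six-hour window are all assumptions):
RENAME TABLE events TO events_old;                  -- 'events' is a placeholder name
CREATE TABLE events LIKE events_old;                -- fresh, unfragmented table
INSERT INTO events                                  -- carry over only the recent rows
SELECT * FROM events_old
WHERE created_at >= NOW() - INTERVAL 6 HOUR;        -- 'created_at' is assumed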
If you can remove all rows older than X hours, then PARTITIONing is the way to go. PARTITION BY RANGE on the hour and use DROP PARTITION to remove an old hour and REORGANIZE PARTITION to create a new one. You should have X+2 partitions; a sketch follows.
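A sketch of such a layout (table, column, and boundary values are all placeholders):
CREATE TABLE events (
  id BIGINT NOT NULL,
  ts TIMESTAMP NOT NULL,
  PRIMARY KEY (id, ts)                -- the partition key must be in every unique key
)
PARTITION BY RANGE (UNIX_TIMESTAMP(ts)) (
  PARTITION p00  VALUES LESS THAN (UNIX_TIMESTAMP('2024-01-01 01:00:00')),
  PARTITION p01  VALUES LESS THAN (UNIX_TIMESTAMP('2024-01-01 02:00:00')),
  PARTITION pmax VALUES LESS THAN (MAXVALUE)
);
ALTER TABLE events DROP PARTITION p00;              -- purge an old hour almost instantly
ALTER TABLE events REORGANIZE PARTITION pmax INTO ( -- carve the next hour out of pmax
  PARTITION p02  VALUES LESS THAN (UNIX_TIMESTAMP('2024-01-01 03:00:00')),
  PARTITION pmax VALUES LESS THAN (MAXVALUE)
);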
If the deletes are more complex, please provide more details; perhaps we can come up with another solution that deals with the question about index size. Please include SHOW CREATE TABLE.
Even if you cannot use partitions for purging, it may be useful to have partitions for OPTIMIZE. Do not use OPTIMIZE PARTITION; it optimizes the entire table. Instead, use REORGANIZE PARTITION if you see you need to shrink the index.
How big is the table?
How big is innodb_buffer_pool_size?
(6GB index does not seem that bad, especially since you have 32GB of RAM.)
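Those numbers can be checked with something like this (schema and table names are placeholders):
SELECT ROUND(data_length/1024/1024)  AS data_mb,
       ROUND(index_length/1024/1024) AS index_mb
FROM information_schema.tables
WHERE table_schema = 'mydb' AND table_name = 'mytable';
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';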

Updating MySQL Innodb Index Statistics

We have a large MySQL 5.5 database in which many rows are inserted daily and never deleted or updated. There are also users querying the live database. Tables are MyISAM.
But it is effectively impossible to run ANALYZE TABLE because it takes way too long (15 hours, and it sometimes crashes the tables). As a result, the query optimizer will often pick the wrong index.
We want to try switching to all InnoDB. Will we need to run ANALYZE TABLES or not?
The MySQL docs say:
The cardinality (the number of different key values) in every index of a table
is calculated when a table is opened, at SHOW TABLE STATUS and ANALYZE TABLE and
on other circumstances (like when the table has changed too much).
But that raises the question: when is a table opened? If that means whenever it is accessed during a connection, then we need to do nothing special. But I do not think that is the case for InnoDB.
So what is the best approach? Run ANALYZE TABLE periodically? Perhaps with an increased dive count?
Or will it all happen automatically?
The querying users use apps to get the data, so each run is a separate connection. They generally do NOT expect the rows to be up-to-date within minutes.
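For reference, a sketch of the 5.5-era knobs involved (variable names should be checked against your version's docs; the table name is a placeholder):
SET GLOBAL innodb_stats_on_metadata = OFF;    -- don't recompute stats on SHOW TABLE STATUS etc.
SET GLOBAL innodb_stats_sample_pages = 64;    -- the "dive count" used when sampling index pages
ANALYZE TABLE mydb.mytable;                   -- explicit refresh, e.g. from a nightly job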

Remove over 100,000 rows from mysql table - server crashes

When I try to remove over 100,000 rows from a MySQL table, the server freezes and none of its websites can be accessed anymore!
I waited 2 hours, then restarted the server and restored the account.
I used the following query:
DELETE FROM `pligg_links` WHERE `link_id` > 10000
whereas
SELECT * FROM `pligg_links` WHERE `link_id` > 10000
works perfectly.
Is there a better way to do this?
You could delete the rows in smaller sets. A quick script that deletes 1000 rows at a time should see you through.
"Delete from" can be very expensive for large data sets.
I recommend using partitioning.
This is done slightly differently in PostgreSQL and MySQL, but in PostgreSQL you can create many tables that act as "partitions" of a larger table, while queries can still be run against the larger table. This can greatly increase query speed, provided you partition correctly. Also, you can delete a partition by simply dropping it, which is very fast because it is roughly equivalent to dropping a table.
Documentation for table partitioning can be found here:
http://www.postgresql.org/docs/8.3/static/ddl-partitioning.html
Make sure you have an index on the link_id column.
And try to delete in chunks of around 10,000 rows at a time.
Deleting from a table is a very costly operation.

when to use OPTIMIZE in mysql

I have a database full of time-sensitive data, so on a daily basis I truncate the table and then import the new data (from a merge of other databases) into the truncated table.
Currently I am running OPTIMIZE on the table after I have imported the daily refresh of data.
However, looking at the mysql OPTIMIZE syntax page
http://dev.mysql.com/doc/refman/5.1/en/optimize-table.html
it says I can optimize to reclaim unused space and defrag the data.
So should I be running OPTIMIZE twice?
Once when I delete the data, and then again after I've reinserted it?
Or just once?
And if just once, should it be after loading the new data,
or after clearing out the old?
It may depend upon whether you are using MyISAM or InnoDB tables, but I would run the OPTIMIZE after truncating the table. That ensures the space is reclaimed, and it will run very quickly.
When you insert your batch of data it should all insert in order and not be fragmented anyway, and since it's a fresh insert there will be no space to reclaim. If it's a small dataset it may not matter too much, but on a large dataset doing the OPTIMIZE after the insert could also be quite slow.
Just once is fine, after you've imported the new data.
After deleting or updating a set of data in your database, you can use the OPTIMIZE TABLE command to remove fragmented space.
There is no need to run OPTIMIZE twice; once all DML operations are finished, you can run it a single time.
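So the daily cycle reduces to something like this (the table name is a placeholder):
TRUNCATE TABLE daily_data;     -- TRUNCATE drops and recreates the table, so space is already reclaimed
-- ... import the merged data here ...
OPTIMIZE TABLE daily_data;     -- a single pass after loading is enough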