When to use OPTIMIZE in MySQL

I have a database full of time-sensitive data, so on a daily basis I truncate the table and then import the new data (from a merge of other databases) into the truncated table.
Currently I am running OPTIMIZE on the table after I have imported the daily refresh of data.
However, looking at the MySQL OPTIMIZE syntax page
http://dev.mysql.com/doc/refman/5.1/en/optimize-table.html
it says I can optimize to reclaim unused space and defrag the data.
So should I be running OPTIMIZE twice?
Once when I delete the data, and then again after I've reinserted it?
Or just once?
And if just once, should it be after loading the new data, or after clearing out the old?

It may depend upon whether you are using MyISAM or InnoDB tables, but I would run the OPTIMIZE after truncating the table. This should ensure space is reclaimed, and it will run very quickly.
When you insert your batch of data it should all insert in order and not be fragmented anyway, and since it's a fresh insert there will be no space to reclaim. If it's a small dataset it may not matter too much, but on a large dataset doing the OPTIMIZE after the insert could also be quite slow.
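A minimal sketch of that ordering, assuming a hypothetical table daily_data and a hypothetical export file:
TRUNCATE TABLE daily_data;                  -- clear out yesterday's rows
OPTIMIZE TABLE daily_data;                  -- reclaim space while the table is empty, so this runs quickly
LOAD DATA INFILE '/tmp/merged_export.csv'
    INTO TABLE daily_data;                  -- import the fresh merge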

Just once is fine, after you've imported the new data.

After deleting or updating a set of data in your database, you can use the OPTIMIZE TABLE command to reclaim the fragmented space.
There is no need to run OPTIMIZE twice; once, after all the DML operations are done, is enough.

Related

INDEX Creation Optimization

I have a table with 30M rows, and I need to create an INDEX on one of the columns. What would be the fastest way to do this? Two options I have considered are truncating the table, adding the index, and then re-importing the data from a CSV file. The other would be the ALTER TABLE statement.
What should I do for the fastest performance?
The fastest way would be to use ALTER TABLE. If you truncate, alter and re-import, then you will have to wait while the truncate runs, then while the re-import runs, which is itself slowed by building the new index as rows arrive. With just the ALTER, the only cost is building the new index, so you skip the truncate and import time entirely.
However, with 30M rows, building the index could take some time and may time out. If this happens you will need to increase the timeouts somehow (I don't use MySQL so I can't tell you how). If this doesn't work, you may have no choice but to go the truncate-and-re-import route, hopefully using some sort of bulk upload.
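A sketch of the single-statement route, with hypothetical table and column names:
ALTER TABLE big_table ADD INDEX idx_big_col (big_col);  -- builds the index in place, no truncate or re-import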

MySQL inserts and updates very slow

Our server database is MySQL 5.1.
We have 754 tables in our db. We create a table for each project, hence the large number of tables.
For the past week I have noticed a very long delay in inserts and updates to any table. If I create a new table and insert into it, it takes one minute to insert around 300 records.
Whereas our test database on the same server has 597 tables, and the same insertion is very fast in the test db.
The default engine is MyISAM, but we have a few tables in InnoDB.
There were a few triggers running. After I deleted the triggers it became somewhat faster, but it is not fast enough.
Use DESCRIBE to see your query execution plans.
See http://dev.mysql.com/doc/refman/5.1/en/explain.html for its usage.
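For example (table and column names here are hypothetical; DESCRIBE and EXPLAIN are synonyms for this purpose):
EXPLAIN SELECT * FROM projects WHERE project_name = 'foo';  -- shows possible keys, the key actually used, and rows examined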
As @swapnesh mentions, the DESCRIBE command is very useful for performance debugging.
You can also check your installation for issues using:
https://raw.github.com/rackerhacker/MySQLTuner-perl/master/mysqltuner.pl
You use it like this:
wget https://raw.github.com/rackerhacker/MySQLTuner-perl/master/mysqltuner.pl
chmod +x mysqltuner.pl
./mysqltuner.pl
Of course, here I am assuming that you run some kind of Unix-based system.
You can use OPTIMIZE. According to the manual, it does the following:
Reorganizes the physical storage of table data and associated index
data, to reduce storage space and improve I/O efficiency when
accessing the table. The exact changes made to each table depend on
the storage engine used by that table
The syntax is:
OPTIMIZE TABLE tablename
Inserts are typically faster when made in bulk rather than one by one. Try inserting 10, 30, or 100 records per statement.
If you use JDBC, you may be able to achieve the same effect with batching, without changing the SQL.
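A sketch of a multi-row insert, with hypothetical table and column names:
INSERT INTO records (id, name) VALUES
    (1, 'a'),
    (2, 'b'),
    (3, 'c');  -- one statement, three rows: fewer round trips and less per-statement overhead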

MySQL insert query cannot finish

I am inserting part of a large table into a new MyISAM table. I tried both the command line and phpMyAdmin, and both take a long time. But I find that in the MySQL data folder the table file actually has GBs of data, yet phpMyAdmin shows there are no records. Then I "check" the table, and it takes like forever...
What is wrong here? Should I change to InnoDB?
Do you have indexes defined on your table? If you're most interested in inserting a lot of data quickly, you could consider dropping the indexes, doing the insert, and then re-adding the indexes. It won't be any faster overall (in fact the manual intervention would likely make the overall operation slower), but it would give you more direct visibility into how long the data insertion is taking versus the indexing that follows.
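A sketch of that sequence, with hypothetical table, column, and index names:
ALTER TABLE big_import DROP INDEX idx_value;         -- inserts no longer have to maintain the index
INSERT INTO big_import SELECT * FROM source_table;   -- the bulk insert (source table is hypothetical)
ALTER TABLE big_import ADD INDEX idx_value (value);  -- rebuild the index once, after the data is in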

MySQL locking processing large LOAD DATA INFILE when live SELECT queries still needed

Looking for some help and advice please from super-guru MySQL/PHP pros who can spare a moment of their time.
I have a web application in PHP/MySQL which has grown over the years and gets a lot of searches on it. It's hitting bottlenecks now when the various daily data dumps of new rows get processed using MySQL LOAD DATA INFILE.
It's a large MyISAM table with about 1.5 million rows, and all the SELECT queries occur on it. When these take place during the LOAD DATA INFILE of about 600k rows (and deletion of outdated data), they just get backed up and take 30+ minutes to be freed up, making any of those searches fruitless.
I need to come up with a way to get that table updated while retaining the ability to provide SELECT results in a reasonable timeframe.
I'm completely out of ideas and have not been able to come up with a solution myself, as it's the first time I've encountered this sort of issue.
Any helpful advice, solutions or pointers from similar past experiences would be greatly appreciated, as I would love to learn to resolve this sort of problem.
Many thanks everyone for your time! J
You can use the CONCURRENT keyword for LOAD DATA INFILE. This way, while you load the data, the table is still able to serve SELECTs.
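A sketch, with a hypothetical table and file path:
LOAD DATA CONCURRENT INFILE '/tmp/daily_dump.csv'
    INTO TABLE searchable_data;  -- CONCURRENT lets MyISAM keep serving SELECTs during the load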
Concerning the delete, this is more complicated. I would personally add a column called 'status' INT(1), which defines whether the row is active or not (= deleted), and then partition the table with a rule based on this status column.
This way, it will be easier to delete all rows where status=0 :P I haven't tested this last solution, I may do that in the near future.
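An untested sketch of that idea, with hypothetical names (note that MySQL requires any partitioning column to be part of every unique key on the table):
ALTER TABLE searchable_data
    PARTITION BY LIST (status) (
        PARTITION p_active  VALUES IN (1),
        PARTITION p_deleted VALUES IN (0)
    );
-- On MySQL 5.5+ the "deleted" partition can then be emptied cheaply:
ALTER TABLE searchable_data TRUNCATE PARTITION p_deleted;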
The CONCURRENT keyword only works if your table is optimized, i.e. has no free blocks (holes left by deletes) in the middle. If there is any free space, the LOAD DATA INFILE will lock the table.
MyISAM doesn't support row-level locking, so operations like mysqldump are forced to lock the entire table to guarantee a consistent dump. Your only practical options are to switch to another table type (like InnoDB) that supports row-level locking, and/or split your dump up into smaller pieces. The small dumps will still lock the table while they're dumping/reloading, but the lock periods would be shorter.
A hairier option would be to have "live" and "backup" tables. Do the dump/load operations on the backup table. When they're complete, swap it out for the live table (rename the tables, or have your code dynamically change which table it's using). If you can live with a short window of potential stale data, this could be a better option.
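RENAME TABLE swaps atomically, so readers never see a missing table; a sketch with hypothetical names:
RENAME TABLE searchable_data TO searchable_data_old,
             searchable_data_new TO searchable_data;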
You should switch your table storage engine from MyISAM to InnoDB. InnoDB provides row-locking (as opposed to MyISAM's table-locking) meaning while one query is busy updating or inserting a row, another query can update a different row at the same time.

How to improve MySQL INSERT and UPDATE performance?

Performance of INSERT and UPDATE statements in our database seems to be degrading and causing poor performance in our web app.
Tables are InnoDB and the application uses transactions. Are there any easy tweaks that I can make to speed things up?
I think we might be seeing some locking issues, how can I find out?
You could change the settings to speed InnoDB inserts up.
And even more ways to speed up InnoDB
...and one more optimization article
INSERT and UPDATE get progressively slower as the number of rows increases on a table with an index. InnoDB tables are even slower than MyISAM tables for inserts, and the delayed key write option is not available.
The most effective way to speed things up would be to save the data first into a flat file and then do LOAD DATA; this is about 20x faster.
The second option would be to create a temporary in-memory table, load the data into it, and then do an INSERT INTO ... SELECT in batches. That is, once you have about 100 rows in your temp table, load them into the permanent one.
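A sketch of the staging-table idea, with hypothetical names (note the MEMORY engine does not support TEXT/BLOB columns):
CREATE TABLE staging LIKE permanent;          -- same structure as the target table
ALTER TABLE staging ENGINE=MEMORY;            -- keep staging rows in RAM
-- ...accumulate incoming rows in staging until a batch of ~100 is ready...
INSERT INTO permanent SELECT * FROM staging;  -- flush the batch into the permanent table
TRUNCATE TABLE staging;                       -- empty staging for the next batch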
Additionally, you can get a small improvement in speed by moving the index file onto a separate physical hard drive from the one where the data file is stored. Also try to move any bin logs onto a different device. The same applies to the temporary file location.
I would try setting your tables to delay index updates.
ALTER TABLE {name} DELAY_KEY_WRITE = 1
If you are not making heavy use of the indexes, this can help improve the performance of update queries.
I would not look at locking/blocking unless the number of concurrent users has been increasing over time.
If the performance gradually degraded over time I would look at the query plans with the EXPLAIN statement.
It would be helpful to have the results of these from the development or initial production environment, for comparison purposes.
Dropping or adding an index may be needed, or some other maintenance action specified in other posts.