What is the use of the ANALYZE TABLE keyword in MySQL?

Can it be used to repair a MySQL index? I am facing performance issues even for queries that use an index.

It depends on your version of MySQL and the storage engine, but in general:
OPTIMIZE TABLE analyzes the table, stores the key distribution, reclaims unused space, and defragments the data file.
ANALYZE TABLE only analyzes the table and stores the key distribution.
https://dev.mysql.com/doc/refman/5.6/en/analyze-table.html
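For example, a minimal run against a hypothetical table named orders would look like this:
-- Refresh index statistics (key distribution) only; fast and non-destructive
ANALYZE TABLE orders;
-- Rebuild the table, reclaim space, and refresh statistics; a heavier operation
OPTIMIZE TABLE orders;
If queries are still slow after ANALYZE TABLE, check the execution plan with EXPLAIN to confirm the expected index is actually being chosen.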

Related

Create index and then insert or insert and then create index?

I'm inserting a large volume of data into a table in MySQL, and I need to create an index to access the data quickly. However, I would like to know if there is a difference (in performance) between these scenarios:
Create an index and then insert all data
Insert all data and then create an index
thanks in advance!
For the InnoDB storage engine, it will be faster to specify the clustered index (i.e. the PRIMARY KEY) on the table before inserting data.
This is because if a clustered index (PRIMARY KEY) is not defined on the table, InnoDB will use a hidden 6-byte auto-incremented counter as the clustered index. If a PRIMARY KEY is specified later, the entire table will need to be rebuilt.
For secondary indexes (i.e. non-clustered indexes) with InnoDB, it is usually faster to insert the data without the secondary indexes defined, and then build the secondary indexes after the data is loaded.
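A rough sketch of that order of operations (the events table and its columns are made up purely for illustration):
-- Define only the PRIMARY KEY (clustered index) up front
CREATE TABLE events (
  id BIGINT NOT NULL,
  user_id INT,
  payload VARCHAR(255),
  PRIMARY KEY (id)
) ENGINE=InnoDB;
-- ... bulk-load the data here ...
-- Build secondary indexes only after the data is in place
ALTER TABLE events ADD INDEX idx_user_id (user_id);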
FOLLOWUP
As far as the speed of loading a table goes (in particular, a table that is truncated/emptied and then reloaded), dropping and re-creating indexes is a well-known technique for speeding up processing, not just with MySQL but with other RDBMSs such as Oracle.
There isn't a guarantee that the processing will be faster; as with most things database, we need tests to determine which is faster.
For a table containing millions of rows to which we're adding a couple of thousand rows, dropping and rebuilding the indexes is likely to be a lot slower, because of all the extra work to re-index all of the existing rows. It would be faster to do the index maintenance while the rows are being inserted.
In terms of speeding up a load, the "drop and recreate indexes" technique isn't going to give us the kind of dramatic improvements we get from other changes. For example, it won't be anywhere near the improvement we would see by using LOAD DATA in place of INSERT statements, nor using multi-row INSERT statements vs a series of singleton INSERT statements.
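To illustrate that last point with the hypothetical events table from the sketch above, the statement shapes compare roughly like this:
-- A series of singleton INSERTs: slowest
INSERT INTO events (id, user_id) VALUES (1, 10);
INSERT INTO events (id, user_id) VALUES (2, 11);
-- One multi-row INSERT: noticeably faster
INSERT INTO events (id, user_id) VALUES (1, 10), (2, 11), (3, 12);
-- LOAD DATA from a file: usually the fastest option for bulk loads
LOAD DATA INFILE '/tmp/events.csv'
INTO TABLE events
FIELDS TERMINATED BY ','
(id, user_id, payload);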

Creating indexes prior to LOAD DATA for performance in MySQL

The Amazon RDS Customer Data Import Guide for MySQL (written in 2009) provides the following tip to decrease load times for MySQL -
Create all secondary indexes prior to loading. This is counterintuitive for those familiar with other databases. Adding or modifying a secondary index causes MySQL to create a new table with the index changes, copy the data from the existing table to the new table, and drop the original table.
However, there are several articles and Stack Overflow posts from 2010 onwards that provide performance tests showing that creating indexes after loading is more performant. Where did this come from, and did it just apply to an older version of MySQL? If so, please provide exact details. Or does it still apply in specific cases?
The AWS recommendation to put secondary indexes in place before loading the data applied to older MySQL versions (< 5.5) because of the way secondary indexes were handled:
From the MySQL 5.5 docs:
Creating and dropping secondary indexes has traditionally involved significant overhead from copying all the data in the InnoDB table. The fast index creation feature of the InnoDB Plugin makes both CREATE INDEX and DROP INDEX statements much faster for InnoDB secondary indexes.
MySQL offers the following recommendation in the 5.5 documentation:
Because index maintenance can add performance overhead to many data transfer operations, consider doing operations such as ALTER TABLE ... ENGINE=INNODB or INSERT INTO ... SELECT * FROM ... without any secondary indexes in place, and creating the indexes afterward.
If you use MySQL 5.5 or higher with AWS, you can take advantage of the Fast Index Creation feature, which significantly speeds up secondary index creation.
Fast Index Creation is a capability first introduced in the InnoDB Plugin, now part of the MySQL server in 5.5 and higher, that speeds up creation of InnoDB secondary indexes by avoiding the need to completely rewrite the associated table. The speedup applies to dropping secondary indexes also.
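In other words, on 5.5+ the post-load index step is comparatively cheap; a sketch (index and table names are illustrative only):
-- Fast index creation on InnoDB in 5.5+: no full table rewrite
CREATE INDEX idx_customer ON orders (customer_id);
-- Dropping a secondary index benefits from the same mechanism
DROP INDEX idx_customer ON orders;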

Most common and important SQL commands that solely apply to the MyISAM storage engine?

It recently came up in one of our discussions that an old legacy system built on a MyISAM-based MySQL deployment can't easily be replaced by an InnoDB-based MySQL or MariaDB deployment. The reason given was that there were too many MyISAM-only SQL commands all over the place. I haven't seen the code yet, so I'm wondering which SQL commands they were referring to.
I only know of a few, like the ones below, which are associated with table locking. They will probably still work with InnoDB in theory, but they are more appropriate for the MyISAM, MERGE, and MEMORY storage engines, which support table locking.
LOCK TABLES
UNLOCK TABLES
If there are more, or if you can point me to a collection of them, it would be highly appreciated.
--edit--
I'll put everything else I find below this line.
MATCH (http://dev.mysql.com/doc/refman/5.5/en//fulltext-search.html)
You can LOCK TABLES on an InnoDB table too, so that's not MyISAM-specific, though it's usually unnecessary to lock InnoDB tables; it's preferable to use transactions, MVCC, and SELECT ... FOR UPDATE.
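For comparison, the InnoDB-style alternative to a table lock looks roughly like this (accounts and its columns are just placeholders):
START TRANSACTION;
-- Lock only the matching rows rather than the whole table
SELECT balance FROM accounts WHERE id = 42 FOR UPDATE;
UPDATE accounts SET balance = balance - 10 WHERE id = 42;
COMMIT;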
There are a number of configuration variables and status variables that are relevant only for MyISAM, such as key_buffer_size to dedicate some memory to caching indexes. But these are not commands.
A couple of features of MyISAM tables aren't supported by InnoDB. One is grouped auto-increment primary keys:
CREATE TABLE foo (
  group_id INT,
  position INT AUTO_INCREMENT,
  PRIMARY KEY (group_id, position)
) ENGINE=MyISAM;
The table above increments position as you insert rows, but starts over at 1 for each distinct value of group_id. This works only in MyISAM.
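A quick demonstration of that behaviour; the comment shows the positions MyISAM assigns:
INSERT INTO foo (group_id) VALUES (1), (1), (2), (2), (1);
-- group_id 1 gets positions 1, 2, 3; group_id 2 gets positions 1, 2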
CREATE FULLTEXT INDEX, and hence the MATCH()...AGAINST() query predicate are currently supported only in MyISAM. But these are being implemented for InnoDB in MySQL 5.6.
CREATE SPATIAL INDEX is supported only in MyISAM.
CHECKSUM TABLE applies only to MyISAM tables.
OPTIMIZE TABLE is in some ways specific to MyISAM, but when you run this command against an InnoDB table, it's automatically translated to a recreate + analyze operation.
CREATE TABLE options that are supported only by MyISAM:
AVG_ROW_LENGTH=nnn
DATA_DIRECTORY=path
INDEX_DIRECTORY=path
DELAY_KEY_WRITE=1
PACK_KEYS=1
ROW_FORMAT=FIXED
The MERGE storage engine can merge only MyISAM tables.
My favorite command to apply to a MyISAM table is the following. :-)
ALTER TABLE tablename ENGINE=InnoDB;
I prefer to create a "temporary" table, do the inserts/updates/deletes against it, drop the old table, and then rename the new table to the old name.
Otherwise, as a last step, you can:
TRUNCATE TABLE x;
INSERT INTO x SELECT * FROM temp_x;
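Spelled out, the swap approach described above looks something like this (using x and temp_x as in the snippet above):
CREATE TABLE temp_x LIKE x;
-- ... apply the inserts/updates/deletes to temp_x ...
DROP TABLE x;
RENAME TABLE temp_x TO x;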

Foreign keys for MyISAM and InnoDB tables

I have a DB table that is MyISAM, used for fulltext searching. I also have a table that is InnoDB. I have a column in my MyISAM table that I want to match with a column in my InnoDB table. Can that be done? I can't seem to work it out!
http://dev.mysql.com/doc/refman/5.0/en/innodb-foreign-key-constraints.html
Foreign key definitions are subject to the following conditions:
Both tables must be InnoDB tables and they must not be TEMPORARY tables.
So I'm afraid you won't be able to achieve what you want.
I would recommend altering your DB architecture such that you have one set of tables designed with data integrity for writing (all InnoDB), and a second set designed for search - possibly on a different box, and possibly not even using MySQL, but maybe a search server like Solr or Sphinx, which should outperform a fulltext MySQL table. You could then populate your search DB periodically from your write DB.
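If you stay within MySQL for the search side, the periodic refresh can be as simple as the sketch below (search_articles and articles are hypothetical names; the MyISAM copy carries the FULLTEXT index):
-- MyISAM copy used only for fulltext search
CREATE TABLE search_articles (
  id INT PRIMARY KEY,
  body TEXT,
  FULLTEXT KEY ft_body (body)
) ENGINE=MyISAM;
-- Periodically refresh it from the InnoDB source of truth
TRUNCATE TABLE search_articles;
INSERT INTO search_articles (id, body)
SELECT id, body FROM articles;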

MySQL change engine on tables with Foreign Keys

So, in the process of creating our tables, we weren't paying close enough attention to our system and all of the tables were created with the InnoDB engine. This is really only bad because we want to have a FULLTEXT index on a few of the columns.
So, now I want to convert. And while I'm at it, I just want to convert all the tables to MyISAM so that if we ever add columns in the future that we want to index, we have that option. So I've got my .sql file with the following:
ALTER TABLE tableName1 Engine = MyISAM;
ALTER TABLE tableName2 Engine = MyISAM;
However, when I try to run it, I get the following error:
Error Code: 1217 Cannot delete or update a parent row: a foreign key constraint fails
As you might have guessed, we have foreign keys in our tables. Not my style, but also not my department, nor my creation script.
My question boils down to, is there anyway for me to change the engine on these tables without having to wipe the DB?
Edit: Note that this will need to be done on multiple development and test copies of the database, so something I can script would definitely be preferred.
Well, to my knowledge, sort of, but not really: mysqldump the database and edit out the foreign key constraints in the dumped SQL file. And of course change the engine in the CREATE TABLE statements.
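For example, a dumped table definition edited by hand might go from the first form to the second (table and column names are placeholders only):
-- As dumped (abridged)
CREATE TABLE child (
  id INT NOT NULL,
  parent_id INT,
  PRIMARY KEY (id),
  CONSTRAINT fk_parent FOREIGN KEY (parent_id) REFERENCES parent (id)
) ENGINE=InnoDB;
-- After editing: foreign key removed, engine changed
CREATE TABLE child (
  id INT NOT NULL,
  parent_id INT,
  PRIMARY KEY (id)
) ENGINE=MyISAM;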
InnoDB, unlike MyISAM, supports foreign keys and has lots of great features, like a transactional system that ensures integrity across all tables. MyISAM tables tend to fail now and then when you have a lot of data in them, or for many other reasons.
In the near future, InnoDB will implement FULLTEXT search. I recommend not changing the tables' engine, but putting something like Sphinx in place instead. Sphinx is much more powerful and much more flexible than the FULLTEXT search that will be available for InnoDB.
More about fulltext search in InnoDB:
InnoDB Fulltext search