I have an Invoice table:
CREATE TABLE Invoice (
id INT(10) UNSIGNED NOT NULL AUTO_INCREMENT,
countryCode CHAR(2) NOT NULL,
number INT(10) UNSIGNED NOT NULL,
...
PRIMARY KEY(id),
UNIQUE KEY(countryCode ASC, number ASC)
);
Each country must have its own sequential invoice numbering, so before creating a new invoice, I have to get the next sequence number for this particular country using the following query:
SELECT MAX(number) + 1
FROM Invoice
WHERE countryCode = 'XX';
I understand that this query will use the index.
Does the index sort order (number ASC or number DESC) have any impact on the performance of this query?
I think you have pretty much optimal index utilization, since countryCode is the first part of your key and the column you are getting the max value for is the second part. No sort is needed here at all. Check out this excerpt from the MySQL documentation on index utilization:
http://dev.mysql.com/doc/refman/5.5/en/mysql-indexes.html
To find the MIN() or MAX() value for a specific indexed column key_col. This is optimized by a preprocessor that checks whether you are using WHERE key_part_N = constant on all key parts that occur before key_col in the index. In this case, MySQL does a single key lookup for each MIN() or MAX() expression and replaces it with a constant. If all expressions are replaced with constants, the query returns at once. For example:
SELECT MIN(key_part2), MAX(key_part2) FROM tbl_name WHERE key_part1=10;
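Applied to your table, you can verify this yourself (a quick sketch; the exact EXPLAIN output varies by MySQL version):
EXPLAIN SELECT MAX(number) + 1
FROM Invoice
WHERE countryCode = 'XX';
-- With the UNIQUE KEY (countryCode, number), the Extra column should show
-- "Select tables optimized away": the MAX() is resolved with a single key
-- lookup, regardless of ASC or DESC in the index definition.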
From http://dev.mysql.com/doc/refman/5.1/en/create-table.html
An index_col_name specification can end with ASC or DESC. These keywords are permitted for future extensions for specifying ascending or descending index value storage. Currently, they are parsed but ignored; index values are always stored in ascending order.
Note that your statement creates a single composite key covering both countryCode and number; the ASC keywords are simply ignored.
There will be a small performance difference for SELECT statements, but a bigger difference for INSERT, UPDATE and DELETE statements.
When you insert into the middle of a full page, the DB engine has to move some rows to the next page. With a DESC index, every INSERT would have to shift index rows across pages; the same applies to DELETEs.
UPDATEs are not as affected, but the same thing applies if you change indexed values.
Step 1:
I am creating a simple table.
CREATE TABLE `indexs`.`table_one` (
`id` INT NOT NULL AUTO_INCREMENT,
`name` VARCHAR(45) NULL,
PRIMARY KEY (`id`));
Step 2:
I make two inserts into this table.
insert into table_one (name) values ("B");
insert into table_one (name) values ("A");
Step 3:
I run a select and get back a table whose records are ordered by id.
SELECT * FROM table_one;
This is the expected result, because in MySQL the primary key is a clustered index, therefore the data will be physically ordered by it.
Now the part I don't understand.
Step 4:
I am creating an index on the name column.
CREATE INDEX index_name ON table_one(name);
I repeat step 3, but now I get a different result: the rows are ordered by the name column.
Why is this happening? Why does the order of the rows in the table change to match the new index on the name column? As far as I understand, in MySQL the primary key is the only clustered index, and all additionally created indexes are secondary.
I run a select and get back a table whose records are ordered by id. [...] This is the expected result, because in MySQL the primary key is a clustered index, therefore the data will be physically ordered by it.
There is some misunderstanding of a concept here.
Table rows have no inherent ordering: a table represents an unordered set of rows. While the clustered index enforces a physical ordering of the data in storage, it does not guarantee the order in which rows are returned by a select query.
If you want the results of the query to be ordered, use an order by clause. Without such a clause, the ordering of the rows is undefined: the database is free to return results in whichever order it likes, and the order is not guaranteed to be consistent across consecutive executions of the same query.
select * from table_one order by id;
select * from table_one order by name;
(GMB explains most of it.)
Why is this happening? why the order of the rows in the table changes in accordance with the new index on the name column
Use EXPLAIN SELECT ...; it might give a clue about what I am about to suggest.
You added INDEX(name). In InnoDB, the PRIMARY KEY column(s) are tacked onto the end of each secondary index. So it is effectively a BTree ordered by (name, id) and containing only those columns.
Now, the Optimizer is free to fetch the data from that index, since it has everything you asked for (id and name). (Such an index is called "covering".)
Since you did not specify an ORDER BY, any result-set ordering is valid (see GMB's discussion).
Moral of the story: if you want an ordering, specify ORDER BY. (The Optimizer is smart enough to do no extra work if it can see how to provide the data without doing a sort.)
Further experiment: add another column to the table but don't change the indexes. Now you will find SELECT * FROM t is ordered differently than SELECT id, name FROM t. I think I have given you enough clues to predict this difference; if not, ask.
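A quick way to see both effects (a sketch; EXPLAIN output differs between versions, and the added column is hypothetical):
EXPLAIN SELECT id, name FROM table_one;
-- If the optimizer scans the secondary index, key shows index_name and
-- Extra shows "Using index" (a covering-index scan, ordered by name).
ALTER TABLE table_one ADD COLUMN extra_col INT;  -- hypothetical column
EXPLAIN SELECT * FROM table_one;
-- SELECT * now needs extra_col, which index_name does not carry, so the
-- scan falls back to the clustered PRIMARY index and rows come back by id.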
I have a table meta with the following structure (this is just an example with denormalized data):
`id` int(3) not null auto_increment primary key,
`category_id` int(3),
`subdomain` varchar(191),
`created_at` timestamp,
`updated_at` timestamp
The subdomain field can hold values that are unique, but also repeating values: 'general', for instance, can occur many times.
Situation 1
I also have an index on subdomain. This index is used by the query
Select `id` from `table` where `subdomain` = 'general'
But when I try to select a non-indexed field, MySQL scans the whole table and the index is not used:
Select `created_at` from `table` where `subdomain` = 'general'
As far as I know, an InnoDB secondary index stores a reference to the row, so there should be no need to perform a linear search over all rows to retrieve a field.
I also know the optimizer can choose a plan that looks unexpected to a human, but what could the reasons be in this case?
No matter how much data is in the table, the result is always the same.
This can happen when the filtering backed by the index is not very selective, i.e. the value you filter on occurs in a high percentage of your total rows (e.g. 90% of your rows match subdomain = 'general'). If you use the index under that condition, you end up processing more data than a full table scan would.
Example: you have 100 rows and 90 of them match subdomain = 'general'.
A full table scan needs to access all 100 rows to check the condition, and 90 values are read for the result.
An index-backed select needs to access 90 entries in the index to fulfill the condition, and for each entry follow the pointer from the index to the actual row to read the non-indexed value. That ends up as 90 index lookups + 90 row reads = 180 operations, which is slower than the full table scan, where you merely read some rows you don't need. The operations do not all have the same cost, but you end up doing more work overall.
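One common way around this (not part of the original answer, just a sketch; the index name is illustrative): extend the index so it covers the queried column, which removes the per-row lookups entirely.
-- Hypothetical covering index: the query can now be answered from the
-- index alone, with no pointer-chasing into the table rows.
ALTER TABLE meta ADD INDEX subdomain_created (subdomain, created_at);
SELECT created_at FROM meta WHERE subdomain = 'general';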
Suppose the table T has three columns,
id int not null auto_increment,
my_id int,
name varchar(200).
and the query is "select * from T where my_id in (var_1, var_2, ..., var_n) and name = 'name_var'".
Is there any performance difference between below two indices?
Index1: (my_id, name)
Index2: (name, my_id).
Yes, the two would differ slightly in query performance.
The leftmost columns are always the most important in determining the efficiency and selectivity of an index.
An index should be built on the column(s) that are frequently used in the WHERE, ORDER BY, and GROUP BY clauses.
Hope this helps!
In a composite index, the column to be searched should appear first. So, if you are searching for a set of my_id values, you'll want my_id to show up first in the index.
But if my_id were the primary key and you're using a SELECT * clause to retrieve the whole row, it wouldn't make sense to add another index. The way tables are organized, all the data of the row appears clustered with each primary-key value, so you would just use the primary key's index.
tl;dr: in that case, neither (my_id, name) nor (name, my_id) will help this query.
In general, it is best to start the INDEX with the columns that the WHERE clause compares with col = const. One range can come last. IN is sort of like =, and sort of like a range. Hence, this is best:
INDEX(name, my_id)
Think of it this way: the index is an ordered list. With this index, the scan will start at name = ... and then has to scan or leapfrog through all the my_id values with that name.
I suspect my_id is not the PRIMARY KEY. If it were unique, why would you also be checking the name?
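A sketch of that suggestion (the index name is illustrative):
-- Equality column first, then the IN-list column.
ALTER TABLE T ADD INDEX name_my_id (name, my_id);
SELECT * FROM T WHERE my_id IN (1, 2, 3) AND name = 'name_var';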
Query
SELECT id FROM `user_tmp`
WHERE `code` = '9s5xs1sy'
AND `go` NOT REGEXP 'http://www.xxxx.example.com/aflam/|http://xx.example.com|http://www.xxxxx..example.com/aflam/|http://www.xxxxxx.example.com/v/|http://www.xxxxxx.example.com/vb/'
AND check='done'
AND `dataip` <1319992460
ORDER BY id DESC
LIMIT 50
MySQL returns:
Showing rows 0 - 29 ( 50 total, Query took 21.3102 sec) [id: 2622270 - 2602288]
Query took 21.3102 sec
If I remove
AND dataip <1319992460
MySQL returns
Showing rows 0 - 29 ( 50 total, Query took 0.0859 sec) [id: 3637556 - 3627005]
Query took 0.0859 sec
And if there is no matching data, MySQL returns
MySQL returned an empty result set (i.e. zero rows). ( Query took 21.7332 sec )
Query took 21.7332 sec
Explain plan:
SQL query: Explain SELECT * FROM `user_tmp` WHERE `code` = '93mhco3s5y' AND `too` NOT REGEXP 'http://www.10neen.com/aflam/|http://3ltool.com|http://www.10neen.com/aflam/|http://www.10neen.com/v/|http://www.m1-w3d.com/vb/' and checkopen='2010' and `dataip` <1319992460 ORDER BY id DESC LIMIT 50;
Rows: 1
id select_type table type possible_keys key key_len ref rows Extra
1 SIMPLE user_tmp index NULL PRIMARY 4 NULL 50 Using where
Example of the database used
CREATE TABLE IF NOT EXISTS `user_tmp` (
  `id` int(9) NOT NULL AUTO_INCREMENT,
  `ip` text NOT NULL,
  `dataip` bigint(20) NOT NULL,
  `ref` text NOT NULL,
  `click` int(20) NOT NULL,
  `code` text NOT NULL,
  `too` text NOT NULL,
  `name` text NOT NULL,
  `checkopen` text NOT NULL,
  `contry` text NOT NULL,
  `vOperation` text NOT NULL,
  `vBrowser` text NOT NULL,
  `iconOperation` text NOT NULL,
  `iconBrowser` text NOT NULL,
  PRIMARY KEY (`id`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 AUTO_INCREMENT=4653425;
--
-- Dumping data for table user_tmp
INSERT INTO `user_tmp` (`id`, `ip`, `dataip`, `ref`, `click`, `code`, `too`, `name`, `checkopen`, `contry`, `vOperation`, `vBrowser`, `iconOperation`, `iconBrowser`) VALUES
(1, '54.125.78.84', 1319506641, 'http://xxxx.example.com/vb/showthread.php%D8%AA%D8%AD%D9%85%D9%8A%D9%84-%D8%A7%D8%BA%D9%86%D9%8A%D8%A9-%D8%A7%D9%84%D8%A8%D9%88%D9%85-giovanni-marradi-lovers-rendezvous-3cd-1999-a-155712.html', 0, '4mxxxxx5', 'http://www.xxx.example.com/aflam/', 'xxxxe', '2010', 'US', 'Linux', 'Chrome 12.0.742 ', 'linux.png', 'chrome.png');
I would like to know the correct way to write this query and how to optimize the database.
You don't have any indexes besides the primary key. You need to create indexes on the fields you use in your WHERE clause. Whether you index one field or a combination of several depends on the other SELECTs you will be running against that table.
Keep in mind that REGEXP cannot use indexes at all, and LIKE can use an index only when it does not begin with a wildcard (so LIKE 'a%' can use an index, but LIKE '%a' cannot). Range comparisons (< and >) can use an index, but no further index columns after the range column will be used.
So you are left with the code and check fields. I suppose many rows will have the same value for check, so I would begin the index with the code field. Multi-column indexes can only be used in the order in which their columns are defined.
Imagine an index created on the fields (code, check). This index can be used in your query (where the WHERE clause contains both fields), and also in a query with only the code field, but not in a query with only the check field.
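A sketch of that index (not from the original answer). In the posted schema, code and checkopen are TEXT columns, and MySQL requires a prefix length when indexing TEXT; the lengths below are illustrative guesses:
-- Composite index, code first; prefix lengths are assumptions.
ALTER TABLE user_tmp ADD INDEX code_checkopen (code(20), checkopen(10));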
Is it important to ORDER BY id? If not, leave it out: that avoids the sort pass and your query will finish faster.
I will assume you are using MySQL <= 5.1.
The answers above fall into two basic categories:
1. You are using the wrong column type
2. You need indexes
I will deal with each, as both are relevant to performance, which is ultimately what I take your question to be about:
Column Types
The difference between bigint/int, or int/char, for the dataip question is basically not relevant to your issue; the fundamental issue is index strategy. However, when considering performance holistically, the fact that you are using MyISAM as the engine for this table leads me to ask whether you really need the text column types. If you have short character columns (less than 255 characters, say), making them fixed-length columns will most likely increase performance. Keep in mind that if any one column is of variable length (varchar, text, etc.), then this is not worth changing for any of them.
Vertical Partitioning
The fact to keep in mind here is that even though you are only requesting the id column, from the standpoint of disk IO and memory you are getting the entire row back. Since so many of the columns are text, this could mean a massive amount of data. Any of these columns that are not used for user lookups, or are not accessed often, could be moved into another table where the foreign key has a unique key placed on it, keeping the relationship 1:1.
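A sketch of such a split (the table name and the column choices are hypothetical; pick the columns your queries rarely touch):
-- Bulky, rarely-read text columns move to a 1:1 side table keyed by id.
CREATE TABLE user_tmp_details (
  user_tmp_id int(9) NOT NULL,
  ref text NOT NULL,
  vOperation text NOT NULL,
  vBrowser text NOT NULL,
  iconOperation text NOT NULL,
  iconBrowser text NOT NULL,
  PRIMARY KEY (user_tmp_id)
) ENGINE=MyISAM DEFAULT CHARSET=utf8;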
Index Strategy
Most likely the problem is simply missing indexes, as noted above. The reason adding the AND dataip < 1319992460 condition hurts is that it forces a full table scan.
As stated above, placing all the columns of the WHERE clause in a single composite index will help. For this query, the order of the columns in the index will not matter much, so long as all of them appear in the WHERE clause.
However, the order can matter a great deal for other queries. A quick example: an index on (colA, colB). A query with WHERE colA = 'foo' will use this index, but a query with WHERE colB = 'bar' will not, because colB is not the leftmost column in the index definition. So, if you have other queries that use these columns in some combination, it is worth minimizing the number of indexes created on the table, because every index increases the cost of a write and uses disk space. Writes are expensive because of the necessary disk activity; don't make them more expensive.
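A minimal sketch of that leftmost-prefix rule (the table and index names are made up):
CREATE INDEX idx_a_b ON t (colA, colB);
SELECT * FROM t WHERE colA = 'foo';                    -- can use idx_a_b
SELECT * FROM t WHERE colA = 'foo' AND colB = 'bar';   -- can use both columns
SELECT * FROM t WHERE colB = 'bar';                    -- cannot use idx_a_b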
You need to add an index, like this:
ALTER TABLE `user_tmp` ADD INDEX(`dataip`);
And if your column 'dataip' contains only unique values, you can add a unique key instead:
ALTER TABLE `user_tmp` ADD UNIQUE(`dataip`);
Keep in mind that adding an index can take a long time on a big table, so don't do it on a production server without testing.
You need to create a composite index whose column order matches how the fields are used in your WHERE clause; otherwise the index may not be used. Index the fields of your WHERE clause.
Does dataip really need to be a bigint? According to the MySQL documentation, the signed range is -9223372036854775808 to 9223372036854775807 (it is a 64-bit number).
You need to choose the right column type for the job, and add the right type of index too; otherwise these queries will take forever.
I'm really struggling to get a query time down; it currently has to query 2.5 million rows and it takes over 20 seconds.
Here is the query:
SELECT play_date AS date, COUNT(DISTINCT(email)) AS count
FROM log
WHERE play_date BETWEEN '2009-02-23' AND '2020-01-01'
AND type = 'play'
GROUP BY play_date
ORDER BY play_date desc;
`id` int(11) NOT NULL auto_increment,
`instance` varchar(255) NOT NULL,
`email` varchar(255) NOT NULL,
`type` enum('play','claim','friend','email') NOT NULL,
`result` enum('win','win-small','lose','none') NOT NULL,
`timestamp` timestamp NOT NULL default CURRENT_TIMESTAMP,
`play_date` date NOT NULL,
`email_refer` varchar(255) NOT NULL,
`remote_addr` varchar(15) NOT NULL,
PRIMARY KEY (`id`),
KEY `email` (`email`),
KEY `result` (`result`),
KEY `timestamp` (`timestamp`),
KEY `email_refer` (`email_refer`),
KEY `type_2` (`type`,`timestamp`),
KEY `type_4` (`type`,`play_date`),
KEY `type_result` (`type`,`play_date`,`result`)
id select_type table type possible_keys key key_len ref rows Extra
1 SIMPLE log ref type_2,type_4,type_result type_4 1 const 270404 Using where
The query is using the type_4 index.
Does anyone know how I could speed this query up?
Thanks
Tom
That's relatively good already. The performance sink is that the query has to compare 270,404 varchars for equality for the COUNT(DISTINCT(email)), meaning that 270,404 rows have to be read.
You may be able to make the count faster by creating a covering index. This means that the actual rows do not need to be read, because all the required information is present in the index itself.
To do this, change the index as follows:
KEY `type_4` (`type`,`play_date`, `email`)
I would be surprised if that wouldn't speed things up quite a bit.
(Thanks to MarkR for the proper term.)
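Applied to the posted schema, that change would look something like this (a sketch; dropping and re-creating the index in one ALTER):
ALTER TABLE log
  DROP INDEX type_4,
  ADD INDEX type_4 (type, play_date, email);
-- EXPLAIN should then show "Using index" in Extra: the count is computed
-- from the index alone, without reading the rows.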
Your indexing is probably as good as you can get it. You have a compound index on the two columns in your WHERE clause, and the EXPLAIN you posted indicates that it is being used. Unfortunately, there are 270,404 rows that match the criteria in your WHERE clause, and they all need to be considered. Also, you're not returning unnecessary columns in your select list.
My advice would be to aggregate the data daily (or hourly or whatever makes sense) and cache the results. That way you can access slightly stale data instantly. Hopefully this is acceptable for your purposes.
Try an index on (play_date, type) (the same as type_4, just with the fields reversed) and see if that helps.
There are 4 possible types, and I assume hundreds of possible dates. If the query uses the (type, play_date) index, it basically says (not 100% accurate, but the general idea):
(A) Find all the Play records (about 25% of the file)
(B) Now within that subset, find all of the requested dates
By reversing the index, the approach is:
(A) Find all the dates within range (maybe 1-2% of the file)
(B) Now find all PLAY types within that smaller portion of the file
Hope this helps
Extracting email to a separate table should be a good performance boost, since counting distinct varchar fields takes a while. Other than that, the correct index is used and the query itself is as optimized as it can be (except for the email, of course).
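A sketch of that extraction (the table and column names are illustrative, not from the original answer):
-- Store each address once; the log then references it by integer id.
CREATE TABLE emails (
  id int unsigned NOT NULL AUTO_INCREMENT,
  email varchar(255) NOT NULL,
  PRIMARY KEY (id),
  UNIQUE KEY (email)
);
-- With log.email_id INT in place of log.email,
-- COUNT(DISTINCT email_id) compares integers instead of varchars.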
The COUNT(DISTINCT(email)) part is the bit that's killing you. If you only truly need the first 2000 results out of 270,404, perhaps it would help to do the email count only for those results instead of for the whole set.
SELECT date, COUNT(DISTINCT(email)) AS count
FROM log,
(
SELECT id, play_date AS date
FROM log
WHERE play_date BETWEEN '2009-02-23' AND '2020-01-01'
AND type = 'play'
ORDER BY play_date desc
LIMIT 2000
) AS shortlist
WHERE shortlist.id = log.id
GROUP BY date
Try creating an index only on play_date.
Long term, I would recommend building a summary table with a primary key of play_date and count of distinct emails.
Depending on how up to date you need it to be, either allow it to be updated daily (by play_date) or live, via a trigger on the log table.
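A sketch of such a summary table (the names and the refresh job are illustrative):
CREATE TABLE play_summary (
  play_date date NOT NULL,
  distinct_emails int unsigned NOT NULL,
  PRIMARY KEY (play_date)
);
-- Daily refresh, e.g. from a nightly job:
REPLACE INTO play_summary
SELECT play_date, COUNT(DISTINCT email)
FROM log
WHERE type = 'play' AND play_date = CURDATE()
GROUP BY play_date;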
There is a good chance a table scan will be quicker than random access to over 200,000 rows:
SELECT ... FROM log IGNORE INDEX (type_2,type_4,type_result) ...
Also, for large grouped queries you may see better performance by forcing a file sort rather than a hashtable-based group (since if the in-memory table turns out to need more than tmp_table_size or max_heap_table_size, performance collapses):
SELECT SQL_BIG_RESULT ...
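For reference, here is how the two hints might be combined on the original query (a sketch under the assumption both hints are wanted at once; not from the original answer):
SELECT SQL_BIG_RESULT play_date AS date, COUNT(DISTINCT email) AS count
FROM log IGNORE INDEX (type_2, type_4, type_result)
WHERE play_date BETWEEN '2009-02-23' AND '2020-01-01'
  AND type = 'play'
GROUP BY play_date
ORDER BY play_date DESC;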