Very slow query with ORDER BY and larger LIMIT range - MySQL

MySQL 5.6, 64-bit, RHEL 5.8
A query on a large table with ORDER BY and LIMIT 'row_count' (or LIMIT 0,'row_count') becomes very, very slow when 'row_count' is larger than the real size of the result set.
case 1: The query below is very fast (No 'LIMIT'):
mysql> SELECT * FROM syslog WHERE
(ReportedTime BETWEEN '2013-11-04' AND '2013-11-05') AND
Priority<3 AND Facility=1 ORDER BY id DESC;
+---
| ...
6 rows in set (0.01 sec)
case 2: The query below is also fast ('LIMIT 5'):
mysql> SELECT * FROM syslog WHERE
(ReportedTime BETWEEN '2013-11-04' AND '2013-11-05') AND
Priority<3 AND Facility=1 ORDER BY id DESC LIMIT 5;
+---
| ...
5 rows in set (0.42 sec)
case 3: The query below is very very slow ('LIMIT 7', may use any 'row_count' value > 6):
mysql> SELECT * FROM syslog WHERE
(ReportedTime BETWEEN '2013-11-04' AND '2013-11-05') AND
Priority<3 AND Facility=1 ORDER BY id DESC LIMIT 7;
+---
| ...
6 rows in set (28 min 7.24 sec)
The only difference between the three cases is the LIMIT clause: none, LIMIT 5, and LIMIT 7.
Why is case 3 so slow?
Some investigations into case 3:
SHOW PROCESSLIST shows the query's State stuck at 'Sending data'.
Checked the server memory; plenty is still available.
Raised the session buffers read_buffer_size, read_rnd_buffer_size and sort_buffer_size to a very large value (16MB) right before running the query; no help.
Also tried selecting only the id column (SELECT id FROM syslog ...); same result.
While the slow query was running, I issued the same query with row_count < 5 (e.g. LIMIT 5) from another connection; it still returned quickly.
With a different condition, e.g. extending the time range to BETWEEN '2013-10-03' AND '2013-11-05' so that the result set has 149 rows: LIMIT 140 is fast, LIMIT 150 is very, very slow. Strange.
Currently, in practice, our website's code fetches the real row count first (SELECT COUNT(*) FROM ..., no ORDER BY, no LIMIT) and then runs the query with a LIMIT that does not exceed that count. Ugly.
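A minimal sketch of that workaround, using the case 1 query (its count is 6, so the application caps the LIMIT at 6 instead of asking for 7):
SELECT COUNT(*) FROM syslog WHERE
(ReportedTime BETWEEN '2013-11-04' AND '2013-11-05') AND
Priority<3 AND Facility=1;
-- count = 6, so cap the LIMIT at 6:
SELECT * FROM syslog WHERE
(ReportedTime BETWEEN '2013-11-04' AND '2013-11-05') AND
Priority<3 AND Facility=1 ORDER BY id DESC LIMIT 6;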
The EXPLAIN for case 3:
+-------------+--------+-------+---------------+---------+---------+------+------+-------------+
| select_type | table  | type  | possible_keys | key     | key_len | ref  | rows | Extra       |
+-------------+--------+-------+---------------+---------+---------+------+------+-------------+
| SIMPLE      | syslog | index | ...           | PRIMARY | 8       | NULL | 132  | Using where |
+-------------+--------+-------+---------------+---------+---------+------+------+-------------+
1 row in set (0.00 sec)
Table definition:
CREATE TABLE syslog (
id BIGINT NOT NULL AUTO_INCREMENT,
ReceivedAt TIMESTAMP NOT NULL DEFAULT 0,
ReportedTime TIMESTAMP NOT NULL DEFAULT 0,
Priority SMALLINT,
Facility SMALLINT,
FromHost VARCHAR(60),
Message TEXT,
InfoUnitID INT NOT NULL DEFAULT 0,
SysLogTag VARCHAR(60) NOT NULL DEFAULT '',
PRIMARY KEY (id),
KEY idx_ReportedTime_Priority_id (ReportedTime,Priority,id),
KEY idx_Facility (Facility),
KEY idx_SysLogTag (SysLogTag(16)),
KEY idx_FromHost (FromHost(16))
);

MySQL is notorious for its behaviour with the ORDER BY ... DESC + LIMIT combination.
See: http://www.mysqlperformanceblog.com/2006/09/01/order-by-limit-performance-optimization/
Please try:
SELECT *
FROM syslog FORCE INDEX (idx_Facility)
WHERE
ReportedTime BETWEEN '2013-11-04' AND '2013-11-05'
AND Priority<3
AND Facility=1
ORDER BY id DESC
LIMIT 7;
You need to force the use of the index chosen by the fast queries (get it from their EXPLAIN plans, key column).
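For example, if the fast queries' EXPLAIN shows key = idx_ReportedTime_Priority_id (an assumption; check your own plans), the hint would be:
SELECT *
FROM syslog FORCE INDEX (idx_ReportedTime_Priority_id)
WHERE
ReportedTime BETWEEN '2013-11-04' AND '2013-11-05'
AND Priority<3
AND Facility=1
ORDER BY id DESC
LIMIT 7;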

Related

mysql paged select from big table sorted by random-data index

I know the solution for when you can sort the table by some unique index:
SELECT user_id, external_id, name, metadata, date_created
FROM users
WHERE user_id > 51234123
ORDER BY user_id ASC
LIMIT 10000;
but in my case, I want to sort the table by an index whose column holds random data:
CREATE TABLE `t` (
`id` bigint(20) NOT NULL AUTO_INCREMENT,
`sorter` bigint(20) NOT NULL,
`data1` varchar(200) NOT NULL,
`data2` varchar(200) NOT NULL,
`data3` varchar(200) NOT NULL,
`data4` int(11) NOT NULL,
PRIMARY KEY (`id`),
KEY `sorter` (`sorter`),
KEY `id` (`id`,`sorter`),
KEY `sorter_2` (`sorter`,`id`)
) ENGINE=MyISAM AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;
for ($i = 0; $i < 2e6; $i++)
$db->query("INSERT INTO `t` (`sorter`, `data1`, `data2`, `data3`, `data4`) VALUES (rand()*3e17, rand(), rand(), rand(), rand())");
for ($i = 0; $i < 1e6; $i++)
$db->query("INSERT INTO `t` (`sorter`, `data1`, `data2`, `data3`, `data4`) VALUES (0, rand(), rand(), rand(), rand())");
solution 1:
for ($i = 0; $i < $maxId; $i += $step)
    $db->query("select * from t where id >= $i order by sorter limit $step");
select * from t order by sorter limit 512123, 10000;
10000 rows in set (9.22 sec)
select * from t order by sorter limit 512123, 1000;
1000 rows in set (6.25 sec)
+------+-------------+-------+------+---------------+------+---------+------+---------+----------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+------+-------------+-------+------+---------------+------+---------+------+---------+----------------+
| 1 | SIMPLE | t | ALL | NULL | NULL | NULL | NULL | 3000000 | Using filesort |
+------+-------------+-------+------+---------------+------+---------+------+---------+----------------+
solution 2:
select id from t order by sorter limit 1512123, 10000;
+------+-------------+-------+-------+---------------+----------+---------+------+---------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+------+-------------+-------+-------+---------------+----------+---------+------+---------+-------------+
| 1 | SIMPLE | t | index | NULL | sorter_2 | 16 | NULL | 1522123 | Using index |
+------+-------------+-------+-------+---------------+----------+---------+------+---------+-------------+
10000 rows in set (0.74 sec)
0.74s sounds good, but for the whole table that works out to 0.74 × 3,000,000 / 10,000 / 60 ≈ 3.7 minutes, and that's only for gathering the ids.
Using OFFSET is not as efficient as you might think. With LIMIT 1512123, 10000, the first 1,512,123 rows must be stepped over. The bigger that number, the slower the query runs.
To explain the difference in the EXPLAINs...
'Solution 1' uses SELECT *; you don't have a covering index for it. So, there are two ways the query might be run:
Plan A (what it actually did): scan 'ALL' the table, collecting all the columns (*); sort; skip over 512123 rows; and deliver 10000 or 1000 rows.
Plan B (what a small OFFSET and LIMIT might lead to): inside the BTree for INDEX(sorter, id), skip over the OFFSET rows; grab the LIMIT rows; for each grabbed row in the index, reach over into the data file using the byte offset (note: you are using MyISAM, not InnoDB) to find the row; grab * and deliver it. No sort needed.
Unfortunately, the Optimizer does not have enough statistics, nor enough smarts, to always pick correctly between these two choices.
'Solution 2' uses a "covering" index INDEX(sorter, id). (The clue: "Using index".) This contains all the columns (only sorter and id) found anywhere in the query (select id from t order by sorter limit 1512123, 10000;), hence the index can (and usually will) be used in preference over scanning the table.
Another solution alluded to involved where id>=$i. This avoids the OFFSET. However, since you are using MyISAM, the index and the data cannot be "clustered" together. With InnoDB, the data is ordered according to the PRIMARY KEY. If that is id, then the query can start by jumping directly into the middle of the data (at $i). With MyISAM, what I just described is done in the BTree for INDEX(id); but it still has to bounce back and forth between that Btree and the .MYD file where the data is. (This is an example of where InnoDB's design is inherently more efficient than MyISAM's.)
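As a further sketch of avoiding OFFSET in this schema: keyset pagination remembers the last (sorter, id) pair delivered and resumes from there. The @last_sorter / @last_id placeholders are assumptions, and how well MySQL uses INDEX(sorter, id) for the row-constructor comparison varies by version:
-- first page: omit the WHERE; later pages resume after the last row seen
SELECT id, sorter
FROM t
WHERE (sorter, id) > (@last_sorter, @last_id)
ORDER BY sorter, id
LIMIT 10000;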
If your goal is to get a bunch of random rows from a table, read my treatise. In summary, there are faster ways, but none is 'perfect', though usually "good enough".

MySQL optimizer issue with index on two columns and limit clause

We have this table:
CREATE TABLE `test_table` (
`id` INT NOT NULL AUTO_INCREMENT,
`time` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
`value` FLOAT NOT NULL,
`session` INT NOT NULL,
PRIMARY KEY (`id`),
UNIQUE INDEX `session_time_idx` (`session` ASC,`time` ASC)
) ENGINE = InnoDB;
It is used to store different "measurement sessions" each resulting in potentially hundreds of thousands of rows. Different measurement sessions may have the same or overlapping timestamp ranges. We then need to randomly access single measurements with queries like this:
SELECT * FROM `test_table` WHERE `session` = 2 AND `time` < '2003-12-02' ORDER BY `time` DESC LIMIT 1;
We need to query for times which are spread uniformly on the measurement session. The "less than" operator is necessary because we don't know exactly when each measurement was taken, we just need to find the last measurement which was performed before a given date and time.
Depending on the time specified in the query, we have 2 possible resulting plans:
mysql> EXPLAIN SELECT * FROM `test_table` WHERE `session` = 2 AND `time` < '2003-12-02' ORDER BY `time` DESC LIMIT 1;
+----+-------------+------------+-------+------------------+------------------+---------+------+------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+------------+-------+------------------+------------------+---------+------+------+-------------+
| 1 | SIMPLE | test_table | range | session_time_idx | session_time_idx | 8 | NULL | 6050 | Using where |
+----+-------------+------------+-------+------------------+------------------+---------+------+------+-------------+
mysql> EXPLAIN SELECT * FROM `test_table` WHERE `session` = 2 AND `time` < '2005-01-02' ORDER BY `time` DESC LIMIT 1;
+----+-------------+------------+------+------------------+------------------+---------+-------+--------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+------------+------+------------------+------------------+---------+-------+--------+-------------+
| 1 | SIMPLE | test_table | ref | session_time_idx | session_time_idx | 4 | const | 127758 | Using where |
+----+-------------+------------+------+------------------+------------------+---------+-------+--------+-------------+
The first plan uses the whole index (session and time), typically resulting in sub-ms execution times on development machines.
The second plan uses only part of the index then scans the results of the whole session, sometimes hundreds of thousands of rows. Needless to say, the performance of the second plan is very poor. Tens of ms on development machines, which can become seconds on the slow production embedded devices.
The difference between the two queries is just the number of rows that would match if no LIMIT clause were used. This makes sense when no LIMIT is specified, because scanning the data directly can be an advantage over scanning both the second part of the index and the data. But MySQL doesn't seem to care about the fact that we only need one row: using the full index always seems to be the best choice in this case.
I made some tests which resulted in the following observations:
if I select just id, time and/or session (not value), the full index is used in all cases (because all the needed data is in the index); so, while slightly cumbersome, querying the id first and then the rest of the data would work (see the sketch after this list)
using "FORCE INDEX (session_time_idx)" does fix the bad plan and results in fast queries all the times
no issue is present when using a single column index on time
running OPTIMIZE TABLE does not make any difference
using MyISAM instead of InnoDB makes no difference
using a simple integer instead of a TIMESTAMP makes no difference (as expected: TIMESTAMP is an integer after all)
I played with various parameters, including "max_seeks_for_key", but I couldn't fix the bad plan
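A sketch of that two-step workaround (the id in step 2 is a placeholder for whatever step 1 returns; step 1 touches only session_time_idx, which in InnoDB implicitly contains the primary key, so it gets the fast covering plan):
SELECT `id` FROM `test_table`
WHERE `session` = 2 AND `time` < '2005-01-02'
ORDER BY `time` DESC LIMIT 1;
-- step 2: sub-ms point lookup by primary key
SELECT * FROM `test_table` WHERE `id` = 123456;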
Since we are using this kind of access pattern in many places and we have a custom ORM system, I'd like to know if there is a way to "convince" MySQL to do the right thing without having to add "FORCE INDEX" support to the ORM.
Any other suggestion for working around this problem would also be appreciated.
My setup: MySQL Server 5.5.47 on Ubuntu 14.04 64-bit.
Update: this also happens with MySQL Server 5.6 and 5.7.
For reference, this is the script I am using to create the test setup:
set @@time_zone = "+00:00";
drop schema if exists `index_test`;
create schema `index_test`;
use `index_test`;
CREATE TABLE `test_table` (
`id` INT NOT NULL AUTO_INCREMENT,
`time` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
`value` FLOAT NOT NULL,
`session` INT NOT NULL,
PRIMARY KEY (`id`),
UNIQUE INDEX `session_time_idx` (`session` ASC,`time` ASC)
) ENGINE = InnoDB;
delimiter $$
CREATE PROCEDURE fill(total int)
BEGIN
  DECLARE count int;
  DECLARE countPerSs int;
  DECLARE tim int;
  SET count = 0;
  SET countPerSs = 100000;
  SET tim = unix_timestamp('2000-01-01');
  myloop: LOOP
    INSERT INTO `test_table` SET `value` = rand(), `session` = count div countPerSs, `time` = from_unixtime(tim);
    SET tim = tim + 10 * 60;
    SET count = count + 1;
    IF count < total THEN
      ITERATE myloop;
    END IF;
    LEAVE myloop;
  END LOOP myloop;
END;
$$
delimiter ;
call fill(500000);

Why is MySQL slow when using LIMIT in my query?

I'm trying to figure out why one of my queries is slow and how I can fix it, but I'm a bit puzzled by my results.
I have an orders table with around 80 columns and 775179 rows, and I'm doing the following request:
SELECT * FROM orders WHERE id_state = 2 AND id_mp IS NOT NULL ORDER BY creation_date DESC LIMIT 200
which returns 38 rows in 4.5s
When removing the ORDER BY I get a nice improvement:
SELECT * FROM orders WHERE id_state = 2 AND id_mp IS NOT NULL LIMIT 200
38 rows in 0.30s
But when removing the LIMIT without touching the ORDER BY I get an even better result:
SELECT * FROM orders WHERE id_state = 2 AND id_mp IS NOT NULL ORDER BY creation_date DESC
38 rows in 0.10s (??)
Why is my LIMIT so hungry?
GOING FURTHER
I tried a few things before posting this question. After noticing that I had an index on creation_date (which is a datetime), I removed it, and the first query now runs in 0.10s. Why is that?
EDIT
Good guess, I have indexes on the other columns that are part of the WHERE.
mysql> explain SELECT * FROM orders WHERE id_state = 2 AND id_mp IS NOT NULL ORDER BY creation_date DESC LIMIT 200;
+----+-------------+--------+-------+------------------------+---------------+---------+------+------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+--------+-------+------------------------+---------------+---------+------+------+-------------+
| 1 | SIMPLE | orders | index | id_state_idx,id_mp_idx | creation_date | 5 | NULL | 1719 | Using where |
+----+-------------+--------+-------+------------------------+---------------+---------+------+------+-------------+
1 row in set (0.00 sec)
mysql> explain SELECT * FROM orders WHERE id_state = 2 AND id_mp IS NOT NULL ORDER BY creation_date DESC;
+----+-------------+--------+-------+------------------------+-----------+---------+------+-------+----------------------------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+--------+-------+------------------------+-----------+---------+------+-------+----------------------------------------------------+
| 1 | SIMPLE | orders | range | id_state_idx,id_mp_idx | id_mp_idx | 3 | NULL | 87502 | Using index condition; Using where; Using filesort |
+----+-------------+--------+-------+------------------------+-----------+---------+------+-------+----------------------------------------------------+
Indexes do not necessarily improve performance. To better understand what is happening, it would help if you included the EXPLAIN output for the different queries.
My best guess is that you have an index on id_state, or even on (id_state, id_mp), that can be used to satisfy the WHERE clause. If so, the first query without the ORDER BY would use this index. It should be pretty fast. Even without an index, this requires a sequential scan of the pages in the orders table, which can still be pretty fast.
Then, with the index on creation_date, MySQL decides to use that index instead, for the ORDER BY. This requires reading each row in the index, then fetching the corresponding data page to check the WHERE conditions and return the columns (if there is a match). This reading is highly inefficient, because it is not in "page" order but rather in the order specified by the index. Random reads can be quite inefficient.
Worse, even though you have a LIMIT of 200, only 38 rows match, so the scan never collects enough matches to stop early and ends up reading the entire table. Although you have saved a sort on 38 records, you have created a massively inefficient query.
By the way, this situation gets significantly worse if the orders table does not fit in available memory. Then you have a condition called "thrashing", where each new record tends to generate a new I/O read. So, if a page has 100 records on it, the page might have to be read 100 times.
You can make all these queries run faster by having an index on orders(id_state, id_mp, creation_date). The where clause will use the first two columns and the order by will use the last.
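A sketch of that index (the name idx_state_mp_created is my invention):
ALTER TABLE orders ADD INDEX idx_state_mp_created (id_state, id_mp, creation_date);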
The same problem happened in my project.
I did some tests and found out that LIMIT is slow because of row lookups.
See:
MySQL ORDER BY / LIMIT performance: late row lookups
So, the solution is:
(A) when using LIMIT, select not all columns, but only the PK columns
(B) select all the columns you need, and join with the result set of (A)
The SQL should look like:
SELECT *
FROM orders O1                          -- this is what you want
JOIN (
    SELECT ID                           -- fetch the PK column only, this should be fast
    FROM orders
    WHERE [your query condition]        -- filter records by condition
    ORDER BY [your order by condition]  -- control the record order
    LIMIT 2000, 50                      -- filter records by paging condition
) AS O2 ON O1.ID = O2.ID
ORDER BY [your order by condition]      -- control the record order
In my DB, the old SQL, which selects all columns using LIMIT 21560, 20, costs about 4.484s.
The new SQL costs only 0.063s; it is about 71 times faster.
I had a similar issue on a table of 2.5 million records. Without the LIMIT part the query took a few seconds; with it, it got stuck forever.
I solved it with a subquery. In your case it would become:
SELECT *
FROM
(SELECT *
FROM orders
WHERE id_state = 2
AND id_mp IS NOT NULL
ORDER BY creation_date DESC) tmp
LIMIT 200
I noted that the original query was fast when the number of matching rows was greater than the LIMIT parameter, and became extremely slow when the LIMIT could never be filled.
Another solution is forcing an index. In your case you can try:
SELECT *
FROM orders force index (id_mp_idx)
WHERE id_state = 2
AND id_mp IS NOT NULL
ORDER BY creation_date DESC
LIMIT 200
The problem is that MySQL is forced to sort the data on the fly. My deep-offset query, like:
ORDER BY somecol LIMIT 99990, 10
Took 2.5s.
I fixed it by creating a new table with the data presorted by the column somecol, containing only the ids; there the deep offset (with no ORDER BY needed) takes 0.09s.
Still, 0.09s is not fast enough; 0.01s would be better.
I will end up creating a table that holds the page number as a specially indexed column, so instead of doing LIMIT x, y I will query WHERE page = Z.
I just tried it and it runs in 0.0013s. The only problem is that the offsetting is based on static page boundaries (presorted into pages of, for example, 10 items). That's not a big problem though; you can still get any data from any page.
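A hypothetical sketch of that page table (every name here is invented; pages of 10 rows, numbered in somecol order; the user-variable numbering and the ordered derived table are version-sensitive, so treat this as a sketch for the MySQL 5.x versions discussed in this thread):
CREATE TABLE page_index (
  page INT NOT NULL,
  id BIGINT NOT NULL,
  PRIMARY KEY (page, id)
);
SET @rownum := -1;
-- number the rows in sorted order, 10 per page
INSERT INTO page_index (page, id)
SELECT FLOOR((@rownum := @rownum + 1) / 10), ordered.id
FROM (SELECT id FROM source_table ORDER BY somecol) AS ordered;
-- fetch "page" 9999 with no OFFSET at all
SELECT s.* FROM page_index p JOIN source_table s ON s.id = p.id
WHERE p.page = 9999;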

Why does MySQL answer "not using key" when I use RAND() in the WHERE clause?

I have a table that has 4,000,000 records.
The table is created as (user_id int, partner_id int, PRIMARY KEY (user_id)) ENGINE=InnoDB.
I want to test the performance of selecting 100 records.
First, I tested the following:
mysql> explain select user_id from MY_TABLE use index (PRIMARY) where user_id IN ( 1 );
+----+-------------+----------+-------+---------------+---------+---------+-------+------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+----------+-------+---------------+---------+---------+-------+------+-------------+
| 1 | PRIMARY | MY_TABLE | const | PRIMARY | PRIMARY | 4 | const | 1 | Using index |
+----+-------------+----------+-------+---------------+---------+---------+-------+------+-------------+
1 row in set, 1 warning (0.00 sec)
This is OK.
But this query is cached by MySQL, so the test is meaningless after the first run.
So I thought of a query that selects by a random value, and tested the following:
mysql> explain select user_id from MY_TABLE use index (PRIMARY) where user_id IN ( select ceil( rand() ) );
+----+-------------+----------+-------+---------------+---------+---------+------+---------+--------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+----------+-------+---------------+---------+---------+------+---------+--------------------------+
| 1 | PRIMARY | MY_TABLE | index | NULL | PRIMARY | 4 | NULL | 3998727 | Using where; Using index |
+----+-------------+----------+-------+---------------+---------+---------+------+---------+--------------------------+
But this is bad.
EXPLAIN shows that possible_keys is NULL.
So a full index scan is planned, and in fact it is much slower than the previous query.
So my question is: how do I write a query with a random value that still uses an index lookup?
Thanks
Using rand() in SQL is usually a sure-fire way to make the query slow. A common theme here is people using it in ORDER BY to get a random sequence. It's slow because not only does it throw away the indexes, but it also reads through the whole table.
However in your case, the fact that the function calls are in a sub-query ought to allow the outer query to still use its indexes. The fact that it isn't seems quite odd (so I've given the question a +1 vote).
My theory is that perhaps MySQL's optimiser is getting it wrong -- it's seeing the functions in the inner query, and deciding incorrectly that it can't use an index.
The only thing I can suggest to work around that is using force index to push MySQL into using the index you want.
See the definition of rand().
If I understand right, you are trying to get a random record from the database. If that is the case, again from the rand() documentation:
ORDER BY RAND() combined with LIMIT is useful for selecting a random sample from a set of rows:
SELECT * FROM table1, table2 WHERE a=b AND c<d ORDER BY RAND() LIMIT 1000;
It's a limitation of the MySQL optimizer, that it can't tell that the subquery returns exactly one value, it has to assume the subquery returns multiple rows with unpredictable values, potentially even all the values of user_id. Therefore it decides it's just going to do an index scan.
Here's a workaround:
mysql> explain select user_id from MY_TABLE use index (PRIMARY)
where user_id = ( select ceil( rand() ) );
Note that MySQL's RAND() function returns a value in the range 0 <= v < 1.0. If you CEIL() it, you'll likely get the value 1. Therefore you'll virtually always get the row where user_id=1. If you don't have such a row in your table, you'll get an empty set result. You certainly won't get a user chosen randomly among all your users.
To fix that problem, you'd have to multiply the rand() by the number of distinct user_id values. And that brings up the problem that you might have gaps, so a randomly chosen value won't match any existing user_id.
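A common gap-tolerant sketch (not from this answer): the derived table forces RAND() to be evaluated only once, and the >= ... LIMIT 1 range skips over gaps, at the cost of slightly favouring ids that sit just after large gaps:
SELECT t.user_id
FROM MY_TABLE t
JOIN (SELECT FLOOR(1 + RAND() * (SELECT MAX(user_id) FROM MY_TABLE)) AS r) AS pick
  ON t.user_id >= pick.r
ORDER BY t.user_id
LIMIT 1;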
Re your comment:
You'll always see possible keys as NULL when you get an index scan (i.e., "type" is "index").
I tried your explain query on a similar table, and it appears that the optimizer can't figure out that the subquery is a constant expression. You can workaround this limitation by calculating the random number in application code and then using the result as a constant value in your query:
select user_id from MY_TABLE use index (PRIMARY)
where user_id = $random;
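A sketch of that in application code (PHP, to match the snippets elsewhere in this thread; $db and its API are assumptions):
// fetch the upper bound once (or cache it)
$maxUserId = $db->query("select max(user_id) from MY_TABLE")->fetchColumn();
// pick the random value in PHP so MySQL sees a plain constant
$random = mt_rand(1, $maxUserId);
$db->query("select user_id from MY_TABLE where user_id = $random");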

mysql select from n last rows

I have a table with an autoincrement index column and an integer value column. The table is millions of rows long.
How can I search if a certain number appear in the last n rows of the table most efficiently?
Starting from the answer given by @chaos, but with a few modifications:
You should always use ORDER BY if you use LIMIT. There is no implicit order guaranteed for an RDBMS table. You may usually get rows in the order of the primary key, but you can't rely on this, nor is it portable.
If you order in descending order, you don't need to know the number of rows in the table beforehand.
You must give a correlation name (aka table alias) to a derived table.
Here's my version of the query:
SELECT `id`
FROM (
SELECT `id`, `val`
FROM `big_table`
ORDER BY `id` DESC
LIMIT $n
) AS t
WHERE t.`val` = $certain_number;
Might be a very late answer, but this is good and simple.
select * from table_name order by id desc limit 5
This query will return the last 5 rows you've inserted into your table.
Retrieving the last 5 rows in MySQL.
This query works perfectly:
SELECT * FROM (SELECT * FROM recharge ORDER BY sno DESC LIMIT 5) sub ORDER BY sno ASC
or
select sno from (select sno from recharge order by sno desc limit 5) as t order by t.sno asc
Take advantage of ORDER BY and LIMIT as you would with pagination; if you want the ith block of rows, use OFFSET.
SELECT val FROM big_table
where val = someval
ORDER BY id DESC
LIMIT n;
In response to Nir:
The sort operation is not necessarily penalized; it depends on what the query planner does. Since this use case is crucial for pagination performance, there are optimizations for it (see the link above). The same is true in Postgres: "ORDER BY ... LIMIT can be done without sorting" (E.7.1, last bullet).
explain extended select id from items where val = 48 order by id desc limit 10;
+----+-------------+-------+-------+---------------+---------+---------+-------+------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-------+-------+---------------+---------+---------+-------+------+-------------+
| 1 | SIMPLE | items | const | PRIMARY | PRIMARY | 4 | const | 1 | Using index |
+----+-------------+-------+-------+---------------+---------+---------+-------+------+-------------+
Because the field is an autoincrement, here's my take:
SELECT * FROM tbl
WHERE certainconditionshere
AND autoincfield >= (SELECT MAX(autoincfield) FROM tbl) - $n
I know this may be a bit old, but try using PDO::lastInsertId. I think it does what you want, but you would have to rewrite your application to use PDO (which is a lot safer against attacks).