speed up most recent query - mysql

I am trying to get the 3 most recent successful (success = 1) records and then see their average response time.
I have manipulated the results so that the average response is always 2ms.
I have 20,000 records in this table right now, but I plan on having 1-2 million. It takes 40 seconds with just 20,000 records, so I need to optimize this query.
Here is the fiddle: http://sqlfiddle.com/#!9/dc91eb/1/0
The fiddle contains my indices too, so I am open to adding more indices if needed.
SELECT proxy,
       Avg(a.responsems) AS avgResponseMs,
       COUNT(*) as Count
FROM proxylog a
WHERE a.success = 1
  AND ( (SELECT Count(0)
         FROM proxylog b
         WHERE ( ( b.success = a.success )
             AND ( b.proxy = a.proxy )
             AND ( b.datetime >= a.datetime ) )) <= 3 )
GROUP BY proxy
ORDER BY avgResponseMs
Here is the result of EXPLAIN
+----+--------------------+-------+-------+----------------+-------+---------+---------------------+-------+----------------------------------------------+
| id | select_type        | table | type  | possible_keys  | key   | key_len | ref                 | rows  | Extra                                        |
+----+--------------------+-------+-------+----------------+-------+---------+---------------------+-------+----------------------------------------------+
|  1 | PRIMARY            | a     | index | NULL           | proxy | 61      | NULL                | 19110 | Using where; Using temporary; Using filesort |
|  2 | DEPENDENT SUBQUERY | b     | ref   | proxy,datetime | proxy | 52      | wwwim_iroom.a.proxy | 24    | Using where; Using index                     |
+----+--------------------+-------+-------+----------------+-------+---------+---------------------+-------+----------------------------------------------+
Before you suggest window functions: I am using MariaDB 10.1.21, which is roughly MySQL 5.6 AFAIK.

An index on (success, proxy, datetime, responsems) should help. success, proxy and datetime are the columns shared between both queries. datetime should come after the other two, because it is used to filter a range whereas the other two filter on a point. responsems comes last as this is the column the calculation is done on. That way the needed values can be taken directly from the index.
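On MariaDB/MySQL that could be created like so (a minimal sketch; the index name is arbitrary):
ALTER TABLE proxylog
  ADD INDEX idx_success_proxy_datetime_resp (success, proxy, datetime, responsems);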
And please edit the question to include the DDL and DML in the question itself. The fiddle might be down some day, and the question would then be useless for future readers.

I was able to mimic ROW_NUMBER and follow @Gordon Linoff's answer:
SELECT pl.proxy, Avg(pl.responsems) AS avgResponseMs, COUNT(*) as Count
FROM (
    SELECT
        @row_number := CASE
            WHEN @g = proxy
            THEN @row_number + 1
            ELSE 1
        END AS RN,
        @g := proxy g,
        pl.*
    FROM proxyLog pl,
        (SELECT @g := 0, @row_number := 0) as t
    WHERE pl.success = 1
    ORDER BY proxy, datetime DESC
) pl
WHERE RN <= 3
GROUP BY proxy
ORDER BY avgResponseMs

From your comment back to my question, I think I know what your problem is.
If you have a proxy that has 900 requests, your first row is still counting 900 (at or greater), the second 899, the third 898, and so on. That is what is killing your performance. Now add millions of records to that and it will choke the crud out of your query.
What you may want to do is apply a max date to the first query, where it makes reasonable sense. Say you have proxy requests at the following times (all with success values):
8:00:00
8:00:18
8:00:57
9:02:12
9:15:27
Do you really care about the success time between 8:00:57 and 9:02 and 9:15? If a computer is getting pounded with activity in one hour vs light activity in another, is that really a fair assessment of success times?
What you MAY want is some cutoff time (your discretion), such as within a few minutes. What if someone does not even resume work through a proxy for some time? Should that still count? Again, your discretion.
AND ( a.datetime <= b.datetime AND b.datetime < DATE_ADD( a.datetime, INTERVAL 5 MINUTE ) )) <= 3 )
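Applied to your original query, that bounded count might look like this (a sketch, keeping your schema and an arbitrary 5-minute window):
SELECT a.proxy,
       Avg(a.responsems) AS avgResponseMs,
       COUNT(*) as Count
FROM proxylog a
WHERE a.success = 1
  AND ( SELECT COUNT(*)
        FROM proxylog b
        WHERE b.success = a.success
          AND b.proxy = a.proxy
          AND a.datetime <= b.datetime
          AND b.datetime < DATE_ADD(a.datetime, INTERVAL 5 MINUTE) ) <= 3
GROUP BY proxy
ORDER BY avgResponseMs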
And the <= 3 is not giving you what I THINK you expect. Again, your innermost COUNT(*) counts all records >= a.datetime, so you would not get down to counts of 3 or fewer until you were near the end of a given batch of proxy times.
So are you looking for the HISTORICAL average times, or only the most recent 3 time cycles for a given proxy? What you are requesting and what you are querying may be two completely different things.
You may want to edit your original post to clarify. I will end here until I hear back, so I can possibly offer additional assistance.

I would advise you to try writing the query using window functions:
SELECT pl.proxy, Avg(pl.responsems) AS avgResponseMs, COUNT(*) as Count
FROM (SELECT pl.*,
             ROW_NUMBER() OVER (PARTITION BY pl.proxy ORDER BY datetime DESC) as seqnum
      FROM proxylog pl
      WHERE pl.success = 1
     ) pl
WHERE seqnum <= 3
GROUP BY proxy
ORDER BY avgResponseMs;
For this, you want an index on proxylog(success, proxy, datetime, responsems).
In older versions, I would replace your version of the subquery with:
SELECT pl.proxy, Avg(pl.responsems) AS avgResponseMs, COUNT(*) as Count
FROM proxylog pl
WHERE pl.success = 1 AND
      pl.datetime >= (SELECT pl2.datetime
                      FROM proxylog pl2
                      WHERE pl2.success = pl.success AND
                            pl2.proxy = pl.proxy
                      ORDER BY pl2.datetime DESC
                      LIMIT 1 OFFSET 2
                     )
GROUP BY proxy
ORDER BY avgResponseMs;
The index you want for this is the same as above.

Related

Time difference between adjacent rows in one column of one mysql table

I have a table with some 100,000 rows having this structure:
+------+---------------------+-----------+
| id | timestamp | eventType |
+------+---------------------+-----------+
| 12 | 2015-07-01 16:45:47 | 3001 |
| 103 | 2015-07-10 19:30:14 | 3001 |
| 1174 | 2015-09-03 12:57:08 | 3001 |
+------+---------------------+-----------+
For each row, I would like to calculate the days between the timestamp of this and the previous row.
As you can see, the id is not continuous, since the table contains different events, and I would like to compare only the timestamps of one specific event over time.
I know that DATEDIFF can be used for the comparison of two dates, and I could define the two rows with a query that selects each row by its specific id.
But as I have many thousands of rows, I am searching for a way to somehow loop through the whole table.
Unfortunately my SQL knowledge is limited, and searching did not reveal an example close enough to my question that I could continue from there.
I would be very thankful for any hint.
If you are running MySQL 8.0, you can just use lag(). Say you want the difference in seconds:
select t.*,
timestampdiff(
second,
lag(timestamp) over(partition by eventtype order by id),
timestamp
) diff
from mytable t
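Since the question asks for the gap in days rather than seconds, the same pattern should work with the day unit (a sketch; diff_days is an arbitrary alias):
select t.*,
       timestampdiff(
           day,
           lag(timestamp) over(partition by eventtype order by id),
           timestamp
       ) diff_days
from mytable t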
In earlier versions, one alternative is a correlated subquery:
select t.*,
timestampdiff(
second,
(select timestamp from mytable t1 where t1.eventtype = t.eventtype and t1.id < t.id order by t1.id desc limit 1),
timestamp
) diff
from mytable t
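Either way, a composite index on the correlation columns should help the lookup; something like this (my assumption, not stated above):
ALTER TABLE mytable ADD INDEX idx_event_id_ts (eventtype, id, `timestamp`);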

passing a value in SQL

I'm not sure I picked the correct title, but I did my best to explain what I am trying to do. I am just learning about joins and I have two tables that I am trying to combine in a certain way, but they both have WHERE clauses.
I started out by building both SELECT statements separately. Here is my first one, against the "shipping_zones" table:
SELECT MIN(cal_zone) AS output_zone
FROM (
SELECT carrier, dest_zip, origin_zip, zone, MIN(zone) OVER(PARTITION BY carrier) as cal_zone
FROM shipping_zones z
WHERE (origin_zip = 402 OR origin_zip = 950) AND dest_zip = 015
) as t
WHERE zone=cal_zone;
This returns:
+-------------+
| output_zone |
+-------------+
| 5 |
+-------------+
My second table is: "shipping_prices" and my query is:
SELECT carrier, speed, zone, min_price
FROM (SELECT carrier, zone, speed, price, MIN(price) OVER(PARTITION BY speed) as min_price
FROM shipping_prices
WHERE total_wt = 66 and zone = 6
) t
WHERE price=min_price
ORDER BY speed DESC;
and the result is:
+---------+-------+------+-----------+
| carrier | speed | zone | min_price |
+---------+-------+------+-----------+
| fedex | slow | 6 | 45.66 |
| usps | med | 6 | 96.05 |
| usps | fast | 6 | 347.15 |
+---------+-------+------+-----------+
What I want to do is "pass" the value of output_zone from the first query as an "argument" into the 2nd query. I put "argument" in quotes because I'm not sure that is the correct word.
I believe the best way to accomplish this in SQL is to use a join, correct? I understand the basic syntax of a join but am a bit lost because of the clauses I'm using in both (WHERE, MIN, ORDER BY, etc.).
EDIT: This data is being queried with Impala and was created in MySQL before being imported into HDFS with Hive.
EDIT2: I should also mention that the "shipping_prices" table already has a field in it called "zone". So I guess I wouldn't be "passing" it so much as using its value from the output of the first query to find the appropriate tuples in the "shipping_prices" table.
Any help or tips would be appreciated.
You can simply put your first query into a zone IN (first_query) clause to replace zone = 6.
The code will look like this:
SELECT carrier, speed, zone, min_price
FROM (SELECT carrier, zone, speed, price, MIN(price) OVER(PARTITION BY speed) as min_price
      FROM shipping_prices
      WHERE total_wt = 66
        AND zone IN (
            SELECT MIN(cal_zone) AS output_zone
            FROM (
                SELECT carrier, dest_zip, origin_zip, zone, MIN(zone) OVER(PARTITION BY carrier) as cal_zone
                FROM shipping_zones z
                WHERE (origin_zip = 402 OR origin_zip = 950) AND dest_zip = 015
            ) as t
            WHERE zone = cal_zone
        )
) t
WHERE price = min_price
ORDER BY speed DESC;
It seems you are using MySQL 8.0 (Development Release). The MySQL engine does reasonable query optimization and will most likely rewrite both the IN and JOIN forms to the same plan. Check this URL for the details: Convert IN to JOIN
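For comparison, a JOIN form of the same query might look like this (a sketch; it keeps the zone filter inside the windowed subquery so MIN(price) is computed over the same rows):
SELECT carrier, speed, zone, min_price
FROM (SELECT sp.carrier, sp.zone, sp.speed, sp.price,
             MIN(sp.price) OVER (PARTITION BY sp.speed) AS min_price
      FROM shipping_prices sp
      JOIN (SELECT MIN(cal_zone) AS output_zone
            FROM (SELECT zone,
                         MIN(zone) OVER (PARTITION BY carrier) AS cal_zone
                  FROM shipping_zones
                  WHERE (origin_zip = 402 OR origin_zip = 950) AND dest_zip = 015
                 ) z
            WHERE zone = cal_zone
           ) oz ON sp.zone = oz.output_zone
      WHERE sp.total_wt = 66
     ) t
WHERE price = min_price
ORDER BY speed DESC;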

SQL: Previous Column empty when setting AVG()

Ok, I am a bit of a noob when it comes to SQL, in fact very much so, so I apologize if this is self-evident.
I am trying to find out 3 things from the database (this table is a log of every message sent):
Total Reply Time
Total # of Replies that were Under 10 Mins
Average Reply Time
Here is my SQL:
SELECT
*, SUM(case when tmp.reply_time <= 10 then 1 else 0 end) as under_10_mins,
COUNT(tmp.reply_time) AS total_replies
FROM
(SELECT
TIMESTAMPDIFF(MINUTE, `date`, reply_date) as reply_time
FROM
tme_email_staff_reply sr
JOIN
tme_user u
ON
u.id = sr.staff_id
JOIN
tme_email_message m
ON
m.id = sr.message_id
WHERE
`reply_date` >= '2017-04-01 00:00:00'
AND
`reply_date` < '2017-04-27 00:00:00'
)
AS tmp
Which outputs:
| reply_time | under_10_mins | total_replies |
|        106 |           165 |           375 |
Now, when I add in:
SELECT
*, SUM(case when tmp.reply_time <= 10 then 1 else 0 end) as under_10_mins,
COUNT(tmp.reply_time) AS total_replies
FROM
(SELECT
TIMESTAMPDIFF(MINUTE, `date`, reply_date) as reply_time,
(AVG(TIMESTAMPDIFF(SECOND, `date`, reply_date))/60) AS average_reply_time
FROM
tme_email_staff_reply sr
JOIN
tme_user u
ON
u.id = sr.staff_id
JOIN
tme_email_message m
ON
m.id = sr.message_id
WHERE
`reply_date` >= '2017-04-01 00:00:00'
AND
`reply_date` < '2017-04-27 00:00:00'
)
AS tmp
my response is:
| reply_time | average_reply_time | under_10_mins | total_replies |
|        106 |       149.08626667 |             0 |             1 |
As you can see, the under_10_mins and total_replies fields have changed.
Schema for tables linked:
tme_email_staff_reply:
id | staff_id | message_id | reply_date |
1 | 234,221,001 | 15fg16d5dgw2 | 2017-04-01 09:34:16 |
tme_user
id | username | password | email | dob | gender |
// data omitted
tme_email_message
id | thread_id | From | To | subject | message | message_id
// data omitted
Can anyone tell me why this is so, and how to fix it?
Why is this so?
Let's see AVG:
AVG([DISTINCT] expr)
Returns the average value of expr. The DISTINCT option can be used to return the average of the distinct values of expr.
If there are no matching rows, AVG() returns NULL.
And the documentation in 13.19.1, Aggregate (GROUP BY) Function Descriptions, also says:
If you use a group function in a statement containing no GROUP BY clause, it is equivalent to grouping on all rows. For more information, see Section 13.19.3, “MySQL Handling of GROUP BY”.
This means that in your subquery you used AVG without GROUP BY, which averages over all the rows and returns a single row from the subquery.
How to fix it?
I think you should move AVG from the subquery to the outer query:
SELECT
SUM(case when tmp.reply_time <= 10 then 1 else 0 end) as under_10_mins,
COUNT(tmp.reply_time) AS total_replies,
AVG(average_reply_time) AS average_reply_time
FROM
(SELECT
TIMESTAMPDIFF(MINUTE, `date`, reply_date) as reply_time,
(TIMESTAMPDIFF(SECOND, `date`, reply_date))/60 AS average_reply_time
FROM
tme_email_staff_reply sr
JOIN
tme_user u
ON
u.id = sr.staff_id
JOIN
tme_email_message m
ON
m.id = sr.message_id
WHERE
`reply_date` >= '2017-04-01 00:00:00'
AND
`reply_date` < '2017-04-27 00:00:00'
)
AS tmp
The issue is that, in your nested query, you are referring to nonaggregated columns not named in a GROUP BY clause, on a MySQL version below 5.7.5. See the documentation, and notice that: The server is free to choose any value from each group.
MySQL < 5.7.5 allows this syntax but has special behaviour (your case):
MySQL extends the standard SQL use of GROUP BY so that the select list can refer to nonaggregated columns not named in the GROUP BY clause. You can use this feature to get better performance by avoiding unnecessary column sorting and grouping. However, this is useful primarily when all values in each nonaggregated column not named in the GROUP BY are the same for each group. The server is free to choose any value from each group, so unless they are the same, the values chosen are indeterminate. Furthermore, the selection of values from each group cannot be influenced by adding an ORDER BY clause. Result set sorting occurs after values have been chosen, and ORDER BY does not affect which values within each group the server chooses.
MySQL >= 5.7.5 allows this syntax and checks for functional dependence:
MySQL 5.7.5 and up implements detection of functional dependence. If the ONLY_FULL_GROUP_BY SQL mode is enabled (which it is by default), MySQL rejects queries for which the select list, HAVING condition, or ORDER BY list refer to nonaggregated columns that are neither named in the GROUP BY clause nor are functionally dependent on them.
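To see which behaviour your server applies, check the session SQL mode (a sketch; the REPLACE trick is the usual way to drop the mode for one session):
-- look for ONLY_FULL_GROUP_BY in the list:
SELECT @@sql_mode;
-- e.g. drop the check for the current session only:
SET SESSION sql_mode = (SELECT REPLACE(@@sql_mode, 'ONLY_FULL_GROUP_BY', ''));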

Why is this MySQL query slow?

I have the following query; all relevant columns are indexed correctly. MySQL version 5.0.8. The query takes forever:
SELECT COUNT(*) FROM `members` `t` WHERE t.member_type NOT IN (1,2)
AND ( SELECT end_date FROM subscriptions s
WHERE s.sub_auth_id = t.member_auth_id AND s.sub_status = 'Completed'
AND s.sub_pkg_id > 0 ORDER BY s.id DESC LIMIT 1 ) < curdate( )
EXPLAIN output:
+----+--------------------+-------+-------+--------------------------------------------------------------+---------+---------+------+------+-------------+
| id | select_type        | table | type  | possible_keys                                                | key     | key_len | ref  | rows | Extra       |
+----+--------------------+-------+-------+--------------------------------------------------------------+---------+---------+------+------+-------------+
|  1 | PRIMARY            | t     | ALL   | membership_type                                              | NULL    | NULL    | NULL | 9610 | Using where |
|  2 | DEPENDENT SUBQUERY | s     | index | subscription_auth_id,subscription_pkg_id,subscription_status | PRIMARY | 4       | NULL | 1    | Using where |
+----+--------------------+-------+-------+--------------------------------------------------------------+---------+---------+------+------+-------------+
Why?
Your subselect refers to values in the parent query. This is known as a correlated (dependent) subquery, and such a query has to be executed once for every row in the parent query, which often leads to poor performance. It is often faster to rewrite the query as a JOIN, for example like this
(Note: without a sample schema to test with, it is impossible to say in advance if this will be faster and still correct; you might need to adjust it a little):
SELECT COUNT(*) FROM members t
LEFT JOIN (
    SELECT latest.member_id, sdate.end_date
    FROM (
        SELECT sub_auth_id as member_id, max(id) as sid
        FROM subscriptions
        WHERE sub_status = 'Completed'
          AND sub_pkg_id > 0
        GROUP BY sub_auth_id
    ) latest
    LEFT JOIN (
        SELECT id AS subid, end_date
        FROM subscriptions
        WHERE sub_status = 'Completed'
          AND sub_pkg_id > 0
    ) sdate ON sid = subid
) sub ON sub.member_id = t.member_auth_id
WHERE t.member_type NOT IN (1,2)
  AND sub.end_date < curdate( )
The logic here is:
For each member, find his latest subscription.
For each latest subscription, find its end date.
Join these member-latest_sub_date pairs to the members list.
Filter the list.
Your query is slow because, as written, you are considering 9,610 rows and therefore performing 9,610 SELECT subqueries from your WHERE clause. You really should rewrite your query to JOIN the members and subscriptions tables first; your WHERE conditions would still apply.
EDIT: Try this.
SELECT COUNT(*)
FROM `members` `t`
JOIN subscriptions s ON (s.sub_auth_id = t.member_auth_id)
WHERE t.member_type NOT IN (1,2)
AND s.sub_status = 'Completed'
AND s.sub_pkg_id > 0
AND end_date < curdate()
ORDER BY s.id DESC LIMIT 1
Caveat: I'm not a MySQL expert, though I'm pretty good in a different SQL flavour (VFP), but I believe you will save some time if:
You count just one field, let's say memberid, instead of *.
Your comparison NOT IN (1,2) is replaced with > 2 (provided that is valid).
The ORDER BY in your subselect is unnecessary, I think. You're trying to get the last completed subscription?
The < curdate() should be inside your subselect's WHERE.
(SELECT end_date FROM subscriptions s
WHERE s.end_date < curdate() and s.sub_auth_id = t.member_auth_id AND
s.sub_status = 'Completed' AND s.sub_pkg_id > 0 ORDER BY s.id DESC LIMIT 1 )
Tune your subselect so as to trim down the set as quickly as possible. The first conditional should be the one least likely to occur.
I ended up doing it like this:
select count(*) from members t
JOIN subscriptions s ON s.sub_auth_id = t.member_auth_id
WHERE t.membership_type > 2 AND s.sub_status = 'Completed' AND s.sub_pkg_id > 0
AND s.sub_end_date < curdate( )
AND s.id = (SELECT MAX(ss.id) FROM subscriptions ss WHERE ss.sub_auth_id = t.member_auth_id)
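If that final version is still slow, a composite index covering the correlated MAX() lookup should help (my suggestion; indexes for this shape were not discussed above):
ALTER TABLE subscriptions ADD INDEX idx_sub_auth_id_id (sub_auth_id, id);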
I believe that the problem is due to a bug that won't be fixed until MySQL 6.

Optimizing unexplainably slow MySQL query

I'm losing hair over a stupid query. First, let me explain its goal. I have a set of values fetched every hour and stored in the DB. These values can increase or stay equal over time. This query extracts the latest value day by day for the latest 60 days (I have twin queries to extract the latest value by week and by month; they are similar). The query is self-explanatory:
SELECT l.value AS value
FROM atable AS l
WHERE l.time = (
SELECT MAX(m.time)
FROM atable AS m
WHERE DATE(l.time) = DATE(m.time)
LIMIT 1
)
ORDER BY l.time DESC
LIMIT 60
It looks like nothing special. But it's extremely slow (> 30 secs), considering that time is indexed and the table contains fewer than 5,000 rows. And I'm sure the problem is with the sub-query.
Where is the noob mistake?
Update 1: Same situation if I avoid MAX() by using SELECT m.time ... ORDER BY m.time DESC.
Update 2: It seems it is not a problem with the DATE() function being called too many times. I've tried to create a calculated field day DATE. The UPDATE atable SET day = DATE(time) runs in less than 2 secs. The modified query, with l.day = m.day (no functions!), runs in exactly the same time as before.
The main issue I see is using DATE() on the columns in the WHERE clause. Wrapping an indexed column in DATE() prevents MySQL from using the index on that field. Instead, it must scan all rows to apply the function to each row.
Instead of this:
WHERE DATE(l.time) = DATE(m.time)
Try something like this:
WHERE l.time BETWEEN
      DATE_SUB(m.time, INTERVAL TIME_TO_SEC(m.time) SECOND)
  AND DATE_ADD(DATE_SUB(m.time, INTERVAL TIME_TO_SEC(m.time) SECOND), INTERVAL 86399 SECOND)
Maybe you know of a better way to turn m.time into a range like 2012-02-09 00:00:00 to 2012-02-09 23:59:59 than the above example, but the idea is to keep the left side of the expression as the raw column name, l.time in this case, and give it a range in the form of two constants (or two expressions that can be converted to constants) on the right side.
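One common simpler way to express the same day range, still keeping l.time bare on the left (a sketch):
WHERE l.time >= DATE(m.time)
  AND l.time < DATE(m.time) + INTERVAL 1 DAY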
EDIT
I'm using your pre-calculated day field:
SELECT *
FROM atable a
WHERE a.time IN
(SELECT MAX(time)
FROM atable
GROUP BY day
ORDER BY day DESC
LIMIT 60)
At least here, the inner query is only run once, and then a binary search is done with the IN clause. You're still scanning the table, but just once, and the advantage of the inner query being run just once will probably make a huge dent.
If you know that you have values for every day, you could improve that inner query by adding a WHERE clause, limiting it to the last 60 calendar days, and losing the LIMIT 60. Make sure that day and time are indexed.
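For example (a sketch; column names taken from the updates above):
ALTER TABLE atable
  ADD INDEX idx_day (`day`),
  ADD INDEX idx_time (`time`);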
Instead of using MAX(m.time), do the following in the sub-select:
SELECT m.time
FROM atable AS m
WHERE DATE(l.time) = DATE(m.time)
ORDER BY m.time DESC
LIMIT 1
This might help speed up the query, since it gives the query parser an alternative.
However, one other piece I noticed: you are using DATE(l.time) and DATE(m.time). If your index is not created on DATE(m.time), then you will not be using the index, which could cause slowness.
Based on the feedback answer: if the entries are sequentially added via date/time, directly correlated to the auto-increment ID, who cares about the TIME... get the auto-increment number for an exact, non-ambiguous join.
select
    A1.AutoID,
    A1.time,
    A1.Value
from
    ( select date( A2.time ) as SingleDate,
             max( A2.AutoID ) as MaxAutoID
      from aTable A2
      where date( A2.Time ) >= date( date_sub( now(), interval 60 day ))
      group by date( A2.time ) ) MaxPerDate
    JOIN aTable A1
        on MaxPerDate.MaxAutoID = A1.AutoID
order by
    A1.AutoID DESC
You could use the "explain" statement to get mysql to tell you what it's doing.
EXPLAIN SELECT l.value AS value
FROM atable AS l
WHERE l.time = (
SELECT MAX(m.time)
FROM atable AS m
WHERE DATE(l.time) = DATE(m.time) LIMIT 1
)
ORDER BY l.time DESC LIMIT 60
That should at least give you an insight where to look further.
If you have an index on time, I would suggest getting the first row with ORDER BY ... LIMIT 1 instead of MAX(), as follows:
SELECT l.value AS value
FROM atable AS l
WHERE l.time = (
    SELECT m.time
    FROM atable AS m
    ORDER BY m.time DESC LIMIT 1
)
ORDER BY l.time DESC LIMIT 60
Your outer query is using a filesort without indexes.
Try changing to InnoDB engine to see if it improves things.
Doing a quick test:
mysql> show create table atable\G
*************************** 1. row ***************************
Table: atable
Create Table: CREATE TABLE `atable` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`t` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
PRIMARY KEY (`id`),
KEY `t` (`t`)
) ENGINE=InnoDB AUTO_INCREMENT=51 DEFAULT CHARSET=utf8
1 row in set (0.00 sec)
mysql> explain SELECT id FROM atable AS l WHERE l.t = ( SELECT MAX(m.t) FROM atable AS m WHERE DATE(l.t) = DATE(m.t) LIMIT 1 ) ORDER BY l.t DESC LIMIT 50;
+----+--------------------+-------+-------+---------------+------+---------+------+------+--------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+--------------------+-------+-------+---------------+------+---------+------+------+--------------------------+
| 1 | PRIMARY | l | index | NULL | t | 4 | NULL | 50 | Using where; Using index |
| 2 | DEPENDENT SUBQUERY | m | index | NULL | t | 4 | NULL | 50 | Using where; Using index |
+----+--------------------+-------+-------+---------------+------+---------+------+------+--------------------------+
2 rows in set (0.00 sec)
After changing to MyISAM:
mysql> explain SELECT id FROM atable AS l WHERE l.t = ( SELECT MAX(m.t) FROM atable AS m WHERE DATE(l.t) = DATE(m.t) LIMIT 1 ) ORDER BY l.t DESC LIMIT 50;
+----+--------------------+-------+-------+---------------+------+---------+------+------+-----------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+--------------------+-------+-------+---------------+------+---------+------+------+-----------------------------+
| 1 | PRIMARY | l | ALL | NULL | NULL | NULL | NULL | 50 | Using where; Using filesort |
| 2 | DEPENDENT SUBQUERY | m | index | NULL | t | 4 | NULL | 50 | Using where; Using index |
+----+--------------------+-------+-------+---------------+------+---------+------+------+-----------------------------+
2 rows in set (0.00 sec)