I am trying to use user-defined variables to limit the results of a subquery, in order to get the difference between two timestamps in some analytics data. The code I am working with is as follows:
SELECT @visitID := `s`.`visit_id` AS `visit_id`, # Get the visit ID and assign to a variable
@dt := `s`.`dt` AS `visit`, # Get the timestamp of the visit and assign to a variable
`tmp`.`dt` AS `next-visit` # Get the 'next visit' timestamp which should be returned by the subquery
FROM `wp_slim_stats` AS `s` # From the main table...
LEFT JOIN (SELECT `s`.`visit_id`, # Start the subquery
MIN(`s`.`dt`) as `dt` # Get the lowest timestamp returned
FROM `wp_slim_stats` AS `s` # ...from the same table
WHERE `s`.`visit_id` = @visitID # Visit ID should be the same as the row the main query is working on
AND `s`.`dt` > @dt # Timestamp should be HIGHER than the row we are working on
LIMIT 0, 1) as `tmp` ON `tmp`.`visit_id` = `s`.`visit_id` # Join on visit_id
WHERE `s`.`resource` LIKE 'foo%' # Limit all results to the page we are looking for
The intention is to get an individual pageview and record its visit ID and the timestamp. The subquery should then return the next record from the database with the same visit ID. I can then subtract one from the other to get the seconds spent on a page.
The problem I am running into is that the subquery seems to be re-evaluating for each row returned, and not populating the next-visit column until the end. This means that all the rows returned are matched against the subquery's results for the final row, thus all next-visit columns are null apart from the final row.
The results I am looking for would be something like:
+----------+-----------+------------+
| visit_id | visit     | next-visit |
+----------+-----------+------------+
|        1 | 123456789 |  123457890 |
|        4 | 234567890 |  234567891 |
|        6 | 345678901 |  345678902 |
|        8 | 456789012 |  456789013 |
+----------+-----------+------------+
But I am getting:
+----------+-----------+------------+
| visit_id | visit     | next-visit |
+----------+-----------+------------+
|        1 | 123456789 |       NULL |
|        4 | 234567890 |       NULL |
|        6 | 345678901 |       NULL |
|        8 | 456789012 |  456789013 |
+----------+-----------+------------+
I am still fairly new to using variables in MySQL, particularly when assigning them dynamically. As I mentioned, I think I am messing up the order of operations somewhere, which is causing the subquery to re-populate each row at the end.
Ideally I need to be able to do this in pure MySQL due to restrictions from the client, so no PHP unfortunately. Is it possible to do what I am trying to do?
Thank you!
You don't need variables here at all.
SELECT `s`.`visit_id` AS `visit_id`,
`s`.`dt` AS `visit`,
(SELECT MIN(ws.dt) FROM `wp_slim_stats` ws WHERE ws.visit_id = s.visit_id AND ws.dt > s.dt) AS `next-visit`
FROM `wp_slim_stats` AS `s`
WHERE `s`.`resource` LIKE 'foo%'
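From there, getting the seconds spent on a page is just a subtraction. A hedged sketch of that final step (this assumes `dt` holds unix timestamps, as the sample output suggests; the `seconds_on_page` alias is illustrative):
SELECT `s`.`visit_id`,
       `s`.`dt` AS `visit`,
       (SELECT MIN(ws.dt)
        FROM `wp_slim_stats` ws
        WHERE ws.visit_id = s.visit_id
          AND ws.dt > s.dt) - `s`.`dt` AS `seconds_on_page`
FROM `wp_slim_stats` AS `s`
WHERE `s`.`resource` LIKE 'foo%'
Note that `seconds_on_page` comes back NULL for the last pageview of a visit, since there is no later row to subtract against.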
And to answer why your solution doesn't work, have a look at the logical order of operations in a SQL query:
FROM clause
WHERE clause
GROUP BY clause
HAVING clause
SELECT clause
ORDER BY clause
Here's the query you would need to run.
SELECT visits.visitid AS vId, temp.time AS tTime, visits.time AS vTime
FROM visits
INNER JOIN (SELECT MIN(id) AS firstId, visitid, time
            FROM visits v1
            GROUP BY visitid) temp ON visits.visitid = temp.visitid
WHERE id > temp.firstid
GROUP BY visits.visitid;
See this SQL fiddle
I have designed an event where you register multiple fishes, and I wanted a query to extract the top 3 heaviest fishes from different people. In case of a tie, it should be decided by a third parameter: who registered it first. I've tested several approaches I found here on Stack Overflow, but none of them worked the way I needed.
My schema is the following:
id | playerid | playername | itemid | weight | date | received | isCurrent
Where:
id = PK, AUTO_INCREMENT - it's basically an index
playerid = the unique code of the person who registered the fish
playername = name of the person who registered the fish
itemid = the code of the fish
weight = the weight of the fish
date = pre-defined as CURRENT_TIMESTAMP, the exact time the fish was registered
received = pre-defined as 0; it really doesn't matter for this analysis
isCurrent = pre-defined as 1; basically, every time this event runs it updates this field to 0, meaning those registrations no longer belong to the current version of the event.
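For concreteness, here is a hedged reconstruction of that schema as a CREATE TABLE (the column types are assumptions based on the descriptions above, not the real DDL):
CREATE TABLE `event_fishing` (
  `id`         INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  `playerid`   INT NOT NULL,
  `playername` VARCHAR(255) NOT NULL,
  `itemid`     INT NOT NULL,
  `weight`     INT NOT NULL,
  `date`       TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
  `received`   TINYINT NOT NULL DEFAULT 0,
  `isCurrent`  TINYINT NOT NULL DEFAULT 1
)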
Here you can see the data I'm testing with
My problem is: how do I avoid counting the same playerid more than once in this ranking?
Query 1:
SELECT `playerid`, `playername`, `itemid`, `weight`
FROM `event_fishing`
WHERE `isCurrent` = 1 AND `weight` IN (
SELECT * FROM
(SELECT MAX(`weight`) as `fishWeight`
FROM `event_fishing`
WHERE `isCurrent` = 1
GROUP BY `playerid`
LIMIT 3) as t)
ORDER BY `weight` DESC, `date` ASC
LIMIT 3
Query 2:
SELECT * FROM `event_fishing`
INNER JOIN
(SELECT playerid, MAX(`weight`) as `fishWeight`
FROM `event_fishing`
WHERE `isCurrent` = 1
GROUP BY `playerid`
LIMIT 3) as t
ON t.playerid = `event_fishing`.playerid AND t.fishWeight = `event_fishing`.weight
WHERE `isCurrent` = 1
ORDER BY weight DESC, date ASC
LIMIT 3
Keep in mind that I must return at least the fields playerid, playername, itemid and weight; that the version of the event must be the current one (isCurrent = 1); and that there must be one playerid per line, with the heaviest weight that player registered for this version of the event and the date it was registered.
Expected output for the data I've sent:
+----+----------+------------+--------+--------+---------------------+----------+-----------+
| id | playerid | playername | itemid | weight | date                | received | isCurrent |
+----+----------+------------+--------+--------+---------------------+----------+-----------+
|  7 |     3734 | Mago Xxx   |   7963 |   1850 | 2018-07-26 00:17:41 |        0 |         1 |
| 14 |      228 | Night Wolf |   7963 |   1750 | 2018-07-26 19:45:49 |        0 |         1 |
|  8 |     3646 | Test Spell |   7159 |   1690 | 2018-07-26 01:16:51 |        0 |         1 |
+----+----------+------------+--------+--------+---------------------+----------+-----------+
Output I'm getting (with both queries):
+----------+------------+--------+--------+
| playerid | playername | itemid | weight |
+----------+------------+--------+--------+
|     3734 | Mago Xxx   |   7963 |   1850 |
|      228 | Night Wolf |   7963 |   1750 |
|      228 | Night Wolf |   7963 |   1750 |
+----------+------------+--------+--------+
Thank you for the attention.
EDIT: I've followed How can I SELECT rows with MAX(Column value), DISTINCT by another column in SQL?, since my query is very similar to the accepted answer there. In the comments I found something that at first glance seemed to solve my problem, but then I found a case where the accepted answer fails. Check http://sqlfiddle.com/#!9/72aeef/1
If you take a look at the data you'll notice that id 14 was the first input of 1750 and therefore should be second place, but MAX(id) returns the last input of the same playerid and therefore gives us a wrong result.
Although the problems seem alike, mine is more complex, and therefore the queries that were suggested don't work.
EDIT 2:
I've managed to solve my problem with the following query:
http://sqlfiddle.com/#!9/d711c7/6
But I'll leave this question open for two reasons:
1- I don't know if there's a case where this query might fail.
2- Even though we limit the first query a lot, I still think this can be optimized further, so I'll leave it open to anyone who might know a better way to solve the issue.
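For reference, here is a hedged sketch of how this could be written with window functions, which sidestep the one-row-per-player problem entirely. It requires MySQL 8.0+ (the fiddles above run 5.x), so it may not apply to the original setup:
SELECT `playerid`, `playername`, `itemid`, `weight`, `date`
FROM (
  SELECT f.*,
         ROW_NUMBER() OVER (PARTITION BY `playerid`
                            ORDER BY `weight` DESC, `date` ASC, `id` ASC) AS rn
  FROM `event_fishing` f
  WHERE `isCurrent` = 1
) best
WHERE rn = 1
ORDER BY `weight` DESC, `date` ASC
LIMIT 3
ROW_NUMBER() keeps exactly one row per playerid (the heaviest fish, ties broken by the earliest date, then the lowest id), and the outer LIMIT 3 then ranks the players.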
Looking for a query that takes the following table ProductList
id| column_1 | column_2 | Sum
================================
1 | Product-A | Product-B | 67
2 | Product-A | Product-C | 55
3 | Product-A | Product-D | 23
4 | Product-B | Product-C | 95
5 | Product-C | Product-D | 110
and returns the first record Product-A_Product-B and then skips all records that contain Product-A or Product-B in either column and returns Product-C_Product-D.
I only want to return the row if everything in the row is appearing for the first time.
Assuming that the product names don't contain commas, you could use a comma-delimited session variable to store already selected products and check for every row whether one of the columns is already contained in that variable:
select column_1, column_2
from (
select l.*,
case when find_in_set(l.column_1, @products) or find_in_set(l.column_2, @products)
then 1
else (@products := concat(@products, ',', l.column_1, ',', l.column_2)) = ''
end as skip
from ProductList l
cross join (select @products := '') init
order by l.id
) t
where skip = 0;
Demo: http://rextester.com/NDVBW87988
But you should know the risks:
ORDER BY in a subquery is not really valid and usually doesn't make sense. The engine may skip it or move it to the outer query.
If you read and write the same session variable in one statement, the evaluation order is undefined, so the query might not work in all (future) versions.
I am new to MySQL, and I am trying to find:
The difference between a given day's maximum value occurred and the previous day's maximum value.
I was able to get the maximum values for dates via:
select max(`bundle_count`), `Production_date`
from `table`
group by `Production_date`
But I don't know how to use SQL to calculate the differences between maximums for two given dates.
I am expecting output like this:
Please help me.
Update 1: Here is a fiddle, http://sqlfiddle.com/#!2/818ad/2, that I used for testing.
Update 2: Here is a fiddle, http://sqlfiddle.com/#!2/3f78d/10 that I used for further refining/fixing, based on Sandy's comments.
Update 3: For some reason the case where there is no previous day was not being dealt with correctly. I thought it was. However, I've updated the query to make sure that works (a bit cumbersome, but it appears to be right). Last fiddle: http://sqlfiddle.com/#!2/3f78d/45
I think @Grijesh conceptually got you the main thing you needed via the self-join of the input data (so make sure you vote up his answer!). I've cleaned up his query a bit on syntax, building off of his version:
SELECT
DATE(t1.`Production_date`) as theDate,
MAX( t1.`bundle_count` ) AS 'max(bundle_count)',
MAX( t1.`bundle_count` ) -
IF(
EXISTS
(
SELECT date(t2.production_date)
FROM input_example t2
WHERE t2.machine_no = 1 AND
date_sub(date(t1.production_date), interval 1 day) = date(t2.production_date)
),
(
SELECT MAX(t3.bundle_count)
FROM input_example t3
WHERE t3.machine_no = 1 AND
date_sub(date(t1.production_date), interval 1 day) = date(t3.production_date)
GROUP BY DATE(t3.production_date)
), 0
)
AS Total_Bundles_Used
FROM `input_example` t1
WHERE t1.machine_no = 1
GROUP BY DATE( t1.`production_date` )
Note 1: I think @Grijesh and I were cleaning up the query syntax issues at the same time. It's encouraging that we ended up with very similar versions after we were both done. My version differs in using IF(EXISTS(...), ..., 0) for when there is no preceding data. I also ended up with a DATE_SUB, and I made sure to reduce the various dates to mere dates without a time component, via DATE().
Note 2: I originally had not fully understood your source tables, so I thought I needed to implement a running count in the query. But upon better inspection, it's clear that your source data already has a running count, so I took that stuff back out.
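As a hedged footnote for later readers: on MySQL 8.0+ (not available when this was written), LAG() collapses the whole EXISTS/subquery dance into a single pass. Note the behavior differs slightly from the query above: when the previous calendar day is missing, LAG falls back to the previous row that is present, not to 0.
SELECT `day`,
       max_count,
       max_count - LAG(max_count, 1, 0) OVER (ORDER BY `day`) AS Total_Bundles_Used
FROM (
  SELECT DATE(production_date) AS `day`,
         MAX(bundle_count) AS max_count
  FROM input_example
  WHERE machine_no = 1
  GROUP BY DATE(production_date)
) daily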
I am not sure, but I think you need something like this. I hope it will be helpful to you to some extent:
Try this:
SELECT t1.`Production_date` ,
MAX(t1.`bundle_count`) - MAX(t2.`bundle_count`) ,
COUNT(t1.`bundle_count`)
FROM `table_name` AS t1
INNER JOIN `table_name` AS t2
ON ABS(DATEDIFF(t1.`Production_date` , t2.`Production_date`)) = 1
GROUP BY t1.`Production_date`
EDIT
I created a table named `table_name`, as below:
mysql> SELECT * FROM `table_name`;
+---------------------+--------------+
| Production_date | bundle_count |
+---------------------+--------------+
| 2004-12-01 20:37:22 | 1 |
| 2004-12-01 20:37:22 | 2 |
| 2004-12-01 20:37:22 | 3 |
| 2004-12-02 20:37:22 | 2 |
| 2004-12-02 20:37:22 | 5 |
| 2004-12-02 20:37:22 | 7 |
| 2004-12-03 20:37:22 | 6 |
| 2004-12-03 20:37:22 | 7 |
| 2004-12-03 20:37:22 | 2 |
| 2004-12-04 20:37:22 | 1 |
| 2004-12-04 20:37:22 | 9 |
+---------------------+--------------+
11 rows in set (0.00 sec)
My query, to find the difference in bundle_count between two consecutive dates:
SELECT t1.`Production_date` ,
MAX(t2.`bundle_count`) - MAX(t1.`bundle_count`) ,
COUNT(t1.`bundle_count`)
FROM `table_name` AS t1
INNER JOIN `table_name` AS t2
ON ABS(DATEDIFF(t1.`Production_date` , t2.`Production_date`)) = 1
GROUP BY t1.Production_date;
Its output:
+---------------------+-------------------------------------------------+--------------------------+
| Production_date | MAX(t2.`bundle_count`) - MAX(t1.`bundle_count`) | COUNT(t1.`bundle_count`) |
+---------------------+-------------------------------------------------+--------------------------+
| 2004-12-01 20:37:22 | 4 | 9 |
| 2004-12-02 20:37:22 | 0 | 18 |
| 2004-12-03 20:37:22 | 2 | 15 |
| 2004-12-04 20:37:22 | -2 | 6 |
+---------------------+-------------------------------------------------+--------------------------+
4 rows in set (0.00 sec)
This is PostgreSQL syntax (sorry; it's what I'm familiar with), but it should fundamentally work in either database. Note this doesn't exactly run in PostgreSQL either, because group is not a valid table name (it's a reserved keyword). The approach is a self-join, as others have mentioned, but I've used a view to handle the max-by-day and the difference as separate steps.
create view max_by_day as
select
date_trunc('day', production_date) as production_date,
max(bundle_count) as bundle_count
from
group
group by
date_trunc('day', production_date);
select
today.production_date as production_date,
today.bundle_count,
today.bundle_count - coalesce(yesterday.bundle_count, 0)
from
max_by_day as today
left join max_by_day yesterday on (yesterday.production_date = today.production_date - '1 day'::interval)
order by
production_date;
PostgreSQL also has a construct called window functions, which is useful for this and a bit easier to understand. Just had to stick in a bit of advocacy for a superior database. :-P
select
date_trunc('day', production_date),
max(bundle_count),
max(bundle_count) - lag(max(bundle_count), 1, 0)
over
(order by date_trunc('day', production_date))
from
group
group by
date_trunc('day', production_date);
These two approaches differ in how they handle missing days in the data - the first will treat it as a 0, the second will use the previous day which is present. There wasn't a case like this in your sample so I don't know if this is something you care about.
I have a database with a created_at column containing the datetime in Y-m-d H:i:s format.
The latest datetime entry is 2011-09-28 00:10:02.
I need the query to be relative to the latest datetime entry.
The first value in the query should be the latest datetime entry.
The second value in the query should be the entry closest to 7 days from the first value.
The third value should be the entry closest to 7 days from the second value.
REPEAT #3.
What I mean by "closest to 7 days from":
The following are dates, the interval I desire is a week, in seconds a week is 604800 seconds.
7 days from the first value is equal to 1316578202 (1317183002-604800)
the value closest to 1316578202 (7 days) is... 1316571974
unix timestamp | Y-m-d H:i:s
1317183002 | 2011-09-28 00:10:02 -> appear in query (first value)
1317101233 | 2011-09-27 01:27:13
1317009182 | 2011-09-25 23:53:02
1316916554 | 2011-09-24 22:09:14
1316836656 | 2011-09-23 23:57:36
1316745220 | 2011-09-22 22:33:40
1316659915 | 2011-09-21 22:51:55
1316571974 | 2011-09-20 22:26:14 -> closest to 7 days from 1317183002 (first value)
1316499187 | 2011-09-20 02:13:07
1316064243 | 2011-09-15 01:24:03
1315967707 | 2011-09-13 22:35:07 -> closest to 7 days from 1316571974 (second value)
1315881414 | 2011-09-12 22:36:54
1315794048 | 2011-09-11 22:20:48
1315715786 | 2011-09-11 00:36:26
1315622142 | 2011-09-09 22:35:42
I would really appreciate any help; I have not been able to do this in MySQL, and no online resources seem to deal with relative date manipulation like this. I would like the query to be modular enough that I can change the interval to weekly, monthly, or yearly. Thanks in advance!
Answer #1 Reply:
SELECT
UNIX_TIMESTAMP(created_at)
AS unix_timestamp,
(
SELECT MIN(UNIX_TIMESTAMP(created_at))
FROM my_table
WHERE created_at >=
(
SELECT max(created_at) - 7
FROM my_table
)
)
AS `random_1`,
(
SELECT MIN(UNIX_TIMESTAMP(created_at))
FROM my_table
WHERE created_at >=
(
SELECT MAX(created_at) - 14
FROM my_table
)
)
AS `random_2`
FROM my_table
WHERE created_at =
(
SELECT MAX(created_at)
FROM my_table
)
Returns:
unix_timestamp | random_1 | random_2
1317183002 | 1317183002 | 1317183002
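A hedged aside on why both columns collapse to the latest timestamp: `MAX(created_at) - 7` does numeric arithmetic on the datetime (e.g. 20110928001002 - 7) rather than subtracting seven days. An interval is almost certainly what was intended:
SELECT MIN(UNIX_TIMESTAMP(created_at))
FROM my_table
WHERE created_at >= (SELECT MAX(created_at) - INTERVAL 7 DAY FROM my_table)
The same applies to the `- 14` branch.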
Answer #2 Reply:
RESULT SET:
This is the result set for a yearly interval:
id | created_at | period_index | period_timestamp
267 | 2010-09-27 22:57:05 | 0 | 1317183002
1 | 2009-12-10 15:08:00 | 1 | 1285554786
I desire this result:
id | created_at | period_index | period_timestamp
626 | 2011-09-28 00:10:02 | 0 | 0
267 | 2010-09-27 22:57:05 | 1 | 1317183002
I hope this makes more sense.
It's not exactly what you asked for, but the following example is pretty close....
Example 1:
select
floor(timestampdiff(SECOND, tbl.time, most_recent.time)/604800) as period_index,
unix_timestamp(max(tbl.time)) as period_timestamp
from
tbl
, (select max(time) as time from tbl) most_recent
group by period_index
gives results:
+--------------+------------------+
| period_index | period_timestamp |
+--------------+------------------+
| 0 | 1317183002 |
| 1 | 1316571974 |
| 2 | 1315967707 |
+--------------+------------------+
This breaks the dataset into groups based on "periods", where (in this example) each period is 7-days (604800 seconds) long. The period_timestamp that is returned for each period is the 'latest' (most recent) timestamp that falls within that period.
The period boundaries are all computed based on the most recent timestamp in the database, rather than computing each period's start and end time individually based on the timestamp of the period before it. The difference is subtle - your question requests the latter (iterative approach), but I'm hoping that the former (approach I've described here) will suffice for your needs, since SQL doesn't lend itself well to implementing iterative algorithms.
If you really do need to determine each period based on the timestamp in the previous period, then your best bet is going to be an iterative approach -- either using a programming language of your choice (like php), or by building a stored procedure that uses a cursor.
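For completeness, here is a hedged sketch of that iterative approach as a stored procedure (using a WHILE loop rather than a cursor, which is simpler here; the procedure and temporary-table names are illustrative, and the schema matches the `tbl` definition in Edit #1 just below):
DELIMITER //
CREATE PROCEDURE weekly_anchors()
BEGIN
  DECLARE anchor DATETIME;
  DECLARE nxt DATETIME;

  SELECT MAX(`time`) INTO anchor FROM tbl;

  DROP TEMPORARY TABLE IF EXISTS anchors;
  CREATE TEMPORARY TABLE anchors (`time` DATETIME);

  WHILE anchor IS NOT NULL DO
    INSERT INTO anchors VALUES (anchor);

    -- pick the earlier entry closest to exactly 7 days before the anchor
    SET nxt = NULL;
    SELECT `time` INTO nxt
    FROM tbl
    WHERE `time` < anchor
    ORDER BY ABS(TIMESTAMPDIFF(SECOND, `time`, anchor - INTERVAL 7 DAY))
    LIMIT 1;

    SET anchor = nxt;  -- NULL when no earlier rows remain, ending the loop
  END WHILE;

  SELECT UNIX_TIMESTAMP(`time`) AS period_timestamp
  FROM anchors
  ORDER BY `time` DESC;
END //
DELIMITER ;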
Edit #1
Here's the table structure for the above example.
CREATE TABLE `tbl` (
`id` int(10) unsigned NOT NULL auto_increment PRIMARY KEY,
`time` datetime NOT NULL
)
Edit #2
Ok, first: I've improved the original example query (see revised "Example 1" above). It still works the same way, and gives the same results, but it's cleaner, more efficient, and easier to understand.
Now... the query above is a group-by query, meaning it shows aggregate results for the "period" groups as I described above - not row-by-row results like a "normal" query. With a group-by query, you're limited to using aggregate columns only. Aggregate columns are those columns that are named in the group by clause, or that are computed by an aggregate function like MAX(time). It is not possible to extract meaningful values for non-aggregate columns (like id) from within the projection of a group-by query.
Unfortunately, MySQL doesn't generate an error when you try to do this. Instead, it just picks a value at random from within the grouped rows, and shows that value for the non-aggregate column in the grouped result. This is what's causing the odd behavior the OP reported when trying to use the code from Example #1.
Fortunately, this problem is fairly easy to solve. Just wrap another query around the group query, to select the row-by-row information you're interested in...
Example 2:
SELECT
entries.id,
entries.time,
periods.idx as period_index,
unix_timestamp(periods.time) as period_timestamp
FROM
tbl entries
JOIN
(select
floor(timestampdiff( SECOND, tbl.time, most_recent.time)/31536000) as idx,
max(tbl.time) as time
from
tbl
, (select max(time) as time from tbl) most_recent
group by idx
) periods
ON entries.time = periods.time
Result:
+-----+---------------------+--------------+------------------+
| id | time | period_index | period_timestamp |
+-----+---------------------+--------------+------------------+
| 598 | 2011-09-28 04:10:02 | 0 | 1317183002 |
| 996 | 2010-09-27 22:57:05 | 1 | 1285628225 |
+-----+---------------------+--------------+------------------+
Notes:
Example 2 uses a period length of 31536000 seconds (365-days). While Example 1 (above) uses a period of 604800 seconds (7-days). Other than that, the inner query in Example 2 is the same as the primary query shown in Example 1.
If a matching period_time belongs to more than one entry (i.e. two or more entries have the exact same time, and that time matches one of the selected period_time values), then the above query (Example 2) will include multiple rows for the given period timestamp (one for each match). Whatever code consumes this result set should be prepared to handle such an edge case.
It's also worth noting that these queries will perform much, much better if you define an index on your datetime column. For my example schema, that would look like this:
ALTER TABLE tbl ADD INDEX idx_time ( time )
If you're willing to go for the closest entry after the week is out, then this'll work. You can extend it to find the true closest, but it'll look so disgusting it's probably not worth it.
select unix_timestamp
, ( select min(unix_tstamp)
from my_table
where sql_tstamp >= ( select max(sql_tstamp) - 7
from my_table )
)
, ( select min(unix_tstamp)
from my_table
where sql_tstamp >= ( select max(sql_tstamp) - 14
from my_table )
)
from my_table
where sql_tstamp = ( select max(sql_tstamp)
from my_table )
My query is something like this
SELECT * FROM tbl1
JOIN tbl2 ON something = something
WHERE 1 AND (tbl2.date = '$date' OR ('$date' BETWEEN tbl1.planA AND tbl1.planB ))
When I run this query, it is considerably slower than for example this query
SELECT * FROM tbl1
JOIN tbl2 ON something = something
WHERE 1 AND ('$date' BETWEEN tbl1.planA AND tbl1.planB )
or
SELECT * FROM tbl1
JOIN tbl2 ON something = something
WHERE 1 AND tbl2.date = '$date'
In localhost, the first query takes about 0.7 seconds, the second query about 0.012 seconds, and the third one 0.008 seconds.
My question is: how do you optimize this? If I currently have 1,000 rows in my tables and it takes 0.7 seconds to display the first query, will it take 7 seconds once I have 10,000 rows? That's a massive slowdown compared to the second query (0.012 seconds) and the third (0.008 seconds).
I've tried adding indexes, but the result is no different.
Thanks
Edit : This application will only work locally, so no need to worry about the speed over the web.
Sorry, I didn't include the EXPLAIN because my real query is much more complicated (about 5 joins). But the joins (I think) don't really matter, because I've tried omitting them and still get approximately the same result as above.
The date belongs to tbl1; planA and planB belong to tbl2. I've tried adding indexes to tbl1.date, tbl2.planA and tbl2.planB, but the difference is insignificant.
By schema do you mean MyISAM or InnoDB? It's MyISAM.
Okay, I'll just post my query straight away. Hopefully it's not that confusing.
SELECT *
FROM tb_joborder jo
LEFT JOIN tb_project p ON jo.project_id = p.project_id
LEFT JOIN tb_customer c ON p.customer_id = c.customer_id
LEFT JOIN tb_dispatch d ON jo.joborder_id = d.joborder_id
LEFT JOIN tb_joborderitem ji ON jo.joborder_id = ji.joborder_id
LEFT JOIN tb_mix m ON ji.mix_id = m.mix_id
WHERE dispatch_date = '2011-01-11'
OR '2011-01-11'
BETWEEN planA
AND planB
GROUP BY jo.joborder_id
ORDER BY customer_name ASC
And the describe output
id | select_type | table | type   | possible_keys | key     | key_len | ref                     | rows | Extra
1  | SIMPLE      | jo    | ALL    | NULL          | NULL    | NULL    | NULL                    | 453  | Using temporary; Using filesort
1  | SIMPLE      | p     | eq_ref | PRIMARY       | PRIMARY | 4       | db_dexada.jo.project_id | 1    |
1  | SIMPLE      | c     | eq_ref | PRIMARY       | PRIMARY | 4       | db_dexada.p.customer_id | 1    |
1  | SIMPLE      | d     | ALL    | NULL          | NULL    | NULL    | NULL                    | 2048 | Using where
1  | SIMPLE      | ji    | ALL    | NULL          | NULL    | NULL    | NULL                    | 455  |
1  | SIMPLE      | m     | eq_ref | PRIMARY       | PRIMARY | 4       | db_dexada.ji.mix_id     | 1    |
You can just use UNION to merge the results of the 2nd and 3rd queries.
More about UNION.
First thing that comes to mind is to union the two:
SELECT * FROM tbl1
JOIN tbl2 ON something = something
WHERE 1 AND ('$date' BETWEEN planA AND planB )
UNION ALL
SELECT * FROM tbl1
JOIN tbl2 ON something = something
WHERE 1 AND date = '$date'
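With the OR split apart like this, each UNION branch can at least use its own index. A hedged sketch based on the EXPLAIN posted in the question (which shows full scans on jo, d and ji); the column-to-table mapping is assumed from the posted query, so adjust to the real schema:
ALTER TABLE tb_dispatch     ADD INDEX idx_d_joborder (joborder_id, dispatch_date);
ALTER TABLE tb_joborderitem ADD INDEX idx_ji_joborder (joborder_id);
-- if planA/planB live on tb_joborder, a composite index helps the BETWEEN branch
ALTER TABLE tb_joborder     ADD INDEX idx_jo_plan (planA, planB);
Also note the two branches can both match the same row: UNION ALL keeps such duplicates, while plain UNION removes them.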
You have provided too little information to suggest optimizations. We don't know anything about your data structures.
Even though most slow queries are usually due to the query itself or the index setup of the tables used, you can also try to find out where your bottleneck is by using the MySQL Query Profiler. It has been available in MySQL since version 5.0.37.
Before you start your query, activate the profiler with this statement:
mysql> set profiling=1;
Now execute your long query.
With
mysql> show profiles;
you can now find out what internal number (query number) your long query has.
If you now execute the following query, you'll get a lot of details about what took how long:
mysql> show profile for query (insert query number here);
(example output)
+--------------------+------------+
| Status | Duration |
+--------------------+------------+
| (initialization) | 0.00005000 |
| Opening tables | 0.00006000 |
| System lock | 0.00000500 |
| Table lock | 0.00001200 |
| init | 0.00002500 |
| optimizing | 0.00001000 |
| statistics | 0.00009200 |
| preparing | 0.00003700 |
| executing | 0.00000400 |
| Sending data | 0.00066600 |
| end | 0.00000700 |
| query end | 0.00000400 |
| freeing items | 0.00001800 |
| closing tables | 0.00000400 |
| logging slow query | 0.00000500 |
+--------------------+------------+
This is a more general, administrative approach, but it can help narrow down, or even pinpoint, the cause of slow queries very nicely.
A good tutorial on how to use the MySQL Query Profiler can be found here in the MySQL articles.