How can I get the time difference between the first and last row for each token in my SQL query?
The data is as follows:
|--------|-----------|---------------------|
| EST_Id | EST_Token | EST_Datetime        |
|--------|-----------|---------------------|
| 1      | vexef     | 2020-10-17 16:13:01 |
| 2      | vexef     | 2020-10-17 16:13:21 |
| 3      | vexef     | 2020-10-17 16:14:31 |
| 4      | vexef     | 2020-10-17 16:13:51 |
| 5      | fardd     | 2020-10-17 17:00:11 |
| 6      | fardd     | 2020-10-17 17:00:17 |
| 7      | fardd     | 2020-10-17 17:00:19 |
|--------|-----------|---------------------|
For example, for the token vexef I should get 50 seconds.
This is what I have tried:
SELECT *, TIMEDIFF(MAX(EST_Datetime), MIN(EST_Datetime)) AS diff FROM table GROUP BY EST_Token ORDER BY EST_Id ASC
The query seems to work, but diff returns 0.
Thanks.
You seem to want aggregation:
select est_token, timestampdiff(second, min(est_datetime), max(est_datetime)) diff
from mytable
group by est_token
This gives you one row per est_token, with the difference between the earliest and latest est_datetime, expressed in seconds.
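If you prefer the gap formatted as hh:mm:ss rather than a raw number of seconds, you can wrap the same expression in SEC_TO_TIME (a small optional variant, reusing the mytable name from the query above):
select est_token,
       sec_to_time(timestampdiff(second, min(est_datetime), max(est_datetime))) as diff
from mytable
group by est_token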
Related
I want to find the average of the following data via a MySQL query (assume there are 719 rows).
|        1 |
|        3 |
|        1 |
|        2 |
|        2 |
|        1 |
|        1 |
|        2 |
|        1 |
|        1 |
|        2 |
|        1 |
|        2 |
|        1 |
|        2 |
|        1 |
|        1 |
+----------+
719 rows in set (2.43 sec)
I ran this query to get the data above:
SELECT COUNT(*) FROM osdial_agent_log WHERE DATE(event_time)='2015-11-01' GROUP BY lead_id;
Can someone help me find the average of the above data?
Use
SELECT AVG(total)
FROM (SELECT COUNT(*) AS total
FROM osdial_agent_log
WHERE DATE(event_time)='2015-11-01'
GROUP BY lead_id) t
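For reference, the same derived table can also report how many distinct lead_ids went into the average (same assumed table and date filter as above; note that the derived table must have an alias, t here):
SELECT AVG(total) AS avg_per_lead,
       COUNT(*)   AS lead_count
FROM (SELECT COUNT(*) AS total
      FROM osdial_agent_log
      WHERE DATE(event_time)='2015-11-01'
      GROUP BY lead_id) t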
It's important to know that the date will not be known at query time, so I cannot just hard-code a WHERE clause.
Here's my table:
+-----------+----------+-------------+
| Date_ID   | Customer | Order_Count |
+-----------+----------+-------------+
| 20150101  | Jones    | 6           |
| 20150102  | Jones    | 4           |
| 20150103  | Jones    | 3           |
+-----------+----------+-------------+
Here's the desired output:
+-----------+----------+------------------+
| Date_ID   | Customer | SUM(Order_Count) |
+-----------+----------+------------------+
| 20150101  | Jones    | 6                |
| 20150102  | Jones    | 10               |
| 20150103  | Jones    | 13               |
+-----------+----------+------------------+
My guess is I need to use a variable or perhaps a join.
Edit: I'm still not able to get this fast enough; it is very slow.
Try this query; it's most likely the best you can do without limiting the dataset you operate on. It should benefit from an index on (customer, date_id); see the sketch below the query.
select
t1.date_id, t1.customer, sum(t2.order_count)
from
table1 t1
left join
table1 t2 on t1.customer = t2.customer
and t1.date_id >= t2.date_id
group by
t1.date_id, t1.customer;
Sample SQL Fiddle.
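If you go this route, the suggested index could be created like this (a sketch, assuming the table really is named table1 as in the query above):
create index idx_customer_date on table1 (customer, date_id);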
One way you could go about it is to use a subquery which sums all orders up to and including the current order. It's probably not the fastest way, but it should do the trick.
SELECT `Date_ID`, `Customer`,
       (SELECT SUM(b.`Order_Count`)
        FROM tablename AS b
        WHERE b.`Date_ID` <= a.`Date_ID`
          AND b.`Customer` = a.`Customer`) AS `SUM(Order_Count)`
FROM tablename AS a
Where performance is an issue, consider a solution akin to the following:
SELECT * FROM ints;
+---+
| i |
+---+
| 0 |
| 1 |
| 2 |
| 3 |
| 4 |
| 5 |
| 6 |
| 7 |
| 8 |
| 9 |
+---+
SELECT i, @i:=@i+i FROM ints, (SELECT @i:=0)n ORDER BY i;
+---+----------+
| i | @i:=@i+i |
+---+----------+
| 0 | 0        |
| 1 | 1        |
| 2 | 3        |
| 3 | 6        |
| 4 | 10       |
| 5 | 15       |
| 6 | 21       |
| 7 | 28       |
| 8 | 36       |
| 9 | 45       |
+---+----------+
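Applied to the running-total question above, the same user-variable idea looks roughly like this. This is only a sketch: it assumes the table is named table1 as in the join answer, and it relies on user-variable evaluation order, which MySQL has deprecated in favour of window functions.
select t.Date_ID,
       t.Customer,
       -- reset the running sum whenever the customer changes
       @run := if(@cust = t.Customer, @run + t.Order_Count, t.Order_Count) as running_total,
       @cust := t.Customer as current_customer
from (select * from table1 order by Customer, Date_ID) t,
     (select @run := 0, @cust := null) init;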
You can also consider this solution, which uses a window function (MySQL 8.0+):
select Date_ID,
       Customer,
       sum(Order_Count) over (order by Date_ID, Customer rows unbounded preceding) as `SUM(Order_Count)`
from `table`
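If the table holds more than one customer, partitioning the window by customer keeps each running total separate (a sketch against the same placeholder table name):
select Date_ID,
       Customer,
       sum(Order_Count) over (partition by Customer order by Date_ID rows unbounded preceding) as `SUM(Order_Count)`
from `table`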
I have a table in MySQL like this:
+--------------+---------+---------------+----------+---------+
| service_code | charges | caller_number | duration | minutes |
+--------------+---------+---------------+----------+---------+
| 10           | 15      | 8281490235    | 00:00:00 | 1.0000  |
| 11           | 12      | 9961621709    | 00:00:00 | 0.0000  |
| 10           | 15      | 8281490235    | 01:00:44 | 60.7333 |
| 11           | 2       | 9744944316    | 01:00:44 | 60.7333 |
+--------------+---------+---------------+----------+---------+
From this table I want to get the total of charges*minutes for each separate caller_number.
I have tried this:
SELECT sum(charges*minutes) as cost from t8_m4_bill group by caller_number
but I am not getting the expected output. Please help.
SELECT caller_number,sum(charges*minutes) as cost
from t8_m4_bill
group by caller_number
order by caller_number
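If you also want to round the cost or keep only callers above some threshold, the usual additions are ROUND and HAVING (the 100 below is just an illustrative value):
SELECT caller_number, ROUND(SUM(charges*minutes), 2) AS cost
from t8_m4_bill
group by caller_number
having cost > 100
order by caller_number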
I want to list the top 6 race records with unique holders only. I mean that once a holder is in the list, they shouldn't appear again with another of their records. I currently use the query below to list the top 6 times.
mysql> select * from racerecords order by record_time asc, date asc;
+----+---------+------------+-------------+---------------------+----------+
| id | race_id | holder     | record_time | date                | position |
+----+---------+------------+-------------+---------------------+----------+
| 2  | 10      | Stav       | 15          | 2014-08-11 19:43:49 | 1        |
| 1  | 10      | Jennifer   | 15          | 2014-08-13 19:43:19 | 1        |
| 4  | 10      | Jennifer   | 16          | 2014-08-02 19:44:27 | 1        |
| 5  | 10      | Osman      | 17          | 2014-08-04 19:44:57 | 1        |
| 7  | 10      | Gokhan     | 18          | 2014-08-15 19:45:37 | 1        |
| 3  | 10      | MotherLode | 25          | 2014-08-01 19:44:11 | 1        |
+----+---------+------------+-------------+---------------------+----------+
6 rows in set (0.00 sec)
As you can see, the holder "Jennifer" is listed twice. I want MySQL to skip her once she is already in the list. The result I want generated is:
+----+---------+------------+-------------+---------------------+----------+
| id | race_id | holder     | record_time | date                | position |
+----+---------+------------+-------------+---------------------+----------+
| 2  | 10      | Stav       | 15          | 2014-08-11 19:43:49 | 1        |
| 1  | 10      | Jennifer   | 15          | 2014-08-13 19:43:19 | 1        |
| 5  | 10      | Osman      | 17          | 2014-08-04 19:44:57 | 1        |
| 7  | 10      | Gokhan     | 18          | 2014-08-15 19:45:37 | 1        |
| 3  | 10      | MotherLode | 25          | 2014-08-01 19:44:11 | 1        |
+----+---------+------------+-------------+---------------------+----------+
I tried everything. GROUP BY holder gives the wrong results: it keeps the very first record inserted for each holder, even if it is not their best. With this table it happens to produce the output above only because id 1 is the first record I inserted for Jennifer.
How can I generate a result like the one above?
The desired result can be achieved with this query, but it is performance intensive. I have reproduced the result in SQLFiddle: http://sqlfiddle.com/#!2/f8ee7/3
select * from racerecords
where
(HOLDER, RECORD_TIME) in (
select HOLDER,min(RECORD_TIME) from racerecords
group by HOLDER)
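Because the IN (subquery) version above can be slow, a common rewrite is to join against the per-holder minimum instead (a sketch over the same racerecords columns; if a holder has two rows with the same best time, both are returned):
select r.*
from racerecords r
join (
    -- one row per holder with that holder's best time
    select HOLDER, min(RECORD_TIME) as best_time
    from racerecords
    group by HOLDER
) best on best.HOLDER = r.HOLDER and best.best_time = r.RECORD_TIME
order by r.RECORD_TIME asc, r.date asc
limit 6;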
It seems you missed including the WHERE clause in the subquery. Try this:
select * from racerecords
where (HOLDER, RECORD_TIME) in (
    select HOLDER, min(RECORD_TIME)
    from racerecords
    where race_id = 17
    group by HOLDER)
and race_id = 17
order by RECORD_TIME
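To stop at the six fastest unique holders, as the question asks, you can append a LIMIT; ordering by date as well mirrors the tie-break in the original listing. A sketch building on the query above:
select * from racerecords
where (HOLDER, RECORD_TIME) in (
    select HOLDER, min(RECORD_TIME)
    from racerecords
    where race_id = 17
    group by HOLDER)
and race_id = 17
order by RECORD_TIME asc, date asc
limit 6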
You should use the DISTINCT clause:
SELECT DISTINCT column_name,column_name
FROM table_name;
Have a look at http://www.w3schools.com/sql/sql_distinct.asp
I have some queries which group datasets and count them, e.g.:
SELECT COUNT(*)
FROM `table`
GROUP BY `column`
Now I have the number of rows for which column is the same; so far so good.
The problem is: how do I get the aggregate (min/max/avg/sum) values for those “grouped” counts? Using a subquery is surely the easiest way, but I was wondering if this is possible within this single query.
For the min and max you can ORDER BY the count and fetch just the first row. For sum/avg/other aggregates you would need a subquery.
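As a concrete illustration of the ORDER BY idea for one extreme (the largest count here), using the placeholder table and column names from the question:
SELECT `column`, COUNT(*) AS cnt
FROM `table`
GROUP BY `column`
ORDER BY cnt DESC
LIMIT 1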
In MySQL you should be able to do this all at once. My tests seem to indicate that this works.
+------------+------+
| date       | hits |
+------------+------+
| 2009-10-10 | 3    |
| 2009-10-10 | 6    |
| 2009-10-10 | 1    |
| 2009-10-10 | 3    |
| 2009-10-11 | 12   |
| 2009-10-11 | 4    |
| 2009-10-11 | 8    |
+------------+------+
SELECT COUNT(*), MAX(hits), SUM(hits) FROM table GROUP BY date
+----------+-----------+-----------+
| COUNT(*) | MAX(hits) | SUM(hits) |
+----------+-----------+-----------+
| 4        | 6         | 13        |
| 3        | 12        | 24        |
+----------+-----------+-----------+
SUM, MIN and AVG also work. Is this what you are looking for?
I think knittl was trying to do something like this:
select min(hits), max(hits), avg(hits), sum(hits)
from table
group by date
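For completeness, here is the subquery route mentioned in the question, aggregating over the per-group counts themselves (placeholder table and column names again):
-- inner query: one count per group; outer query: aggregates over those counts
SELECT MIN(cnt), MAX(cnt), AVG(cnt), SUM(cnt)
FROM (SELECT COUNT(*) AS cnt
      FROM `table`
      GROUP BY `column`) AS grouped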