Incorrect ordering on query with group by clause - mysql

So I have the following query:
SELECT sensor.id as `sensor_id`,
sensor_reading.id as `reading_id`,
sensor_reading.reading as `reading`,
from_unixtime(sensor_reading.reading_timestamp) as `reading_timestamp`,
sensor_reading.lower_threshold as `lower_threshold`,
sensor_reading.upper_threshold as `upper_threshold`,
sensor_type.units as `unit`
FROM sensor
LEFT JOIN sensor_reading ON sensor_reading.sensor_id = sensor.id
LEFT JOIN sensor_type ON sensor.sensor_type_id = sensor_type.id
WHERE sensor.company_id = 1
GROUP BY sensor_reading.sensor_id
ORDER BY sensor_reading.reading_timestamp DESC
There are three tables in play here. A sensor_type table, which is just used for a single display field (units), a sensor table, which contains information on a sensor, and a sensor_reading table, which contains the individual readings for a sensor. There are multiple readings which apply to a single sensor, and so each entry in the sensor_reading table has a sensor_id which is linked to the ID field in the sensor table with a foreign key constraint.
In theory, this query should return the most recent sensor_reading for EACH unique sensor. Instead, it's returning the first reading for each sensor. I've seen a few posts on here with similar issues, but haven't been able to resolve this using any of their answers. Ideally, the query needs to be as efficient as possible, as this table has several thousand readings (and continues to grow).
Does anyone know how I might change this query to return the most recent reading? If I remove the GROUP BY clause, it returns the right order, but I then have to sift through the data to get the most recent for each sensor.
Ideally, I don't want to run sub-queries as this slows things down a lot, and speed is a big factor here.
Thanks!

In theory, this query should return the most recent sensor_reading for EACH unique sensor.
This is a fairly common misconception about the MySQL GROUP BY extension, which allows you to select non-aggregated columns that are not contained in the GROUP BY clause. What the documentation states is:
The server is free to choose any value from each group, so unless they are the same, the values chosen are indeterminate. Furthermore, the selection of values from each group cannot be influenced by adding an ORDER BY clause.
So since you are grouping by sensor_reading.sensor_id, MySQL will choose any row from sensor_reading for each sensor_id, and only after choosing one row per sensor_id will it apply the ordering to the rows that were chosen.
Since you only want the latest row for each sensor, the general approach would be:
SELECT *
FROM sensor_reading AS sr
WHERE NOT EXISTS (
    SELECT 1
    FROM sensor_reading AS sr2
    WHERE sr2.sensor_id = sr.sensor_id
      AND sr2.reading_timestamp > sr.reading_timestamp
);
However, MySQL will optimise LEFT JOIN/IS NULL better than NOT EXISTS so a MySQL specific solution would be:
SELECT sr.*
FROM sensor_reading AS sr
LEFT JOIN sensor_reading AS sr2
    ON sr2.sensor_id = sr.sensor_id
   AND sr2.reading_timestamp > sr.reading_timestamp
WHERE sr2.id IS NULL;
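Both anti-join forms are easy to sanity-check on toy data. Here's a minimal sketch using Python's built-in SQLite (table layout and values invented for the demo; SQLite's planner differs from MySQL's, so this verifies the logic only, not the performance claims):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sensor_reading (
    id INTEGER PRIMARY KEY,
    sensor_id INTEGER,
    reading REAL,
    reading_timestamp INTEGER
);
INSERT INTO sensor_reading (sensor_id, reading, reading_timestamp) VALUES
    (1, 20.5, 100), (1, 21.0, 200), (1, 19.8, 150),
    (2, 55.0, 120), (2, 56.2, 300);
""")

# A row survives only if no later row exists for the same sensor.
not_exists = conn.execute("""
    SELECT sensor_id, reading_timestamp
    FROM sensor_reading AS sr
    WHERE NOT EXISTS (
        SELECT 1 FROM sensor_reading AS sr2
        WHERE sr2.sensor_id = sr.sensor_id
          AND sr2.reading_timestamp > sr.reading_timestamp
    )
    ORDER BY sensor_id
""").fetchall()

# Same logic expressed as a LEFT JOIN / IS NULL anti-join.
anti_join = conn.execute("""
    SELECT sr.sensor_id, sr.reading_timestamp
    FROM sensor_reading AS sr
    LEFT JOIN sensor_reading AS sr2
      ON sr2.sensor_id = sr.sensor_id
     AND sr2.reading_timestamp > sr.reading_timestamp
    WHERE sr2.id IS NULL
    ORDER BY sr.sensor_id
""").fetchall()

print(not_exists)  # [(1, 200), (2, 300)]
print(anti_join)   # [(1, 200), (2, 300)]
```

Both forms return exactly one row per sensor: the one with the highest reading_timestamp.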
So incorporating this into your query, you would end up with:
SELECT sensor.id as `sensor_id`,
       sensor_reading.id as `reading_id`,
       sensor_reading.reading as `reading`,
       from_unixtime(sensor_reading.reading_timestamp) as `reading_timestamp`,
       sensor_reading.lower_threshold as `lower_threshold`,
       sensor_reading.upper_threshold as `upper_threshold`,
       sensor_type.units as `unit`
FROM sensor
LEFT JOIN sensor_reading
    ON sensor_reading.sensor_id = sensor.id
LEFT JOIN sensor_type
    ON sensor.sensor_type_id = sensor_type.id
LEFT JOIN sensor_reading AS sr2
    ON sr2.sensor_id = sensor_reading.sensor_id
   AND sr2.reading_timestamp > sensor_reading.reading_timestamp
WHERE sensor.company_id = 1
  AND sr2.id IS NULL
ORDER BY sensor_reading.reading_timestamp DESC;
An alternative method for getting the maximum per group is to inner join back to the latest row, so something like:
SELECT sr.*
FROM sensor_reading AS sr
INNER JOIN (
    SELECT sensor_id, MAX(reading_timestamp) AS reading_timestamp
    FROM sensor_reading
    GROUP BY sensor_id
) AS sr2
    ON sr2.sensor_id = sr.sensor_id
   AND sr2.reading_timestamp = sr.reading_timestamp;
You may find that this is more efficient than the other method, or you may not; YMMV. It depends on your data and indexes and, as you have said, subqueries can be an issue in MySQL because the full result is materialised up front.
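As a quick check that the join-back-to-MAX form agrees with the anti-join forms, here is a small sketch using Python's built-in SQLite (toy data, invented values):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sensor_reading (
    id INTEGER PRIMARY KEY,
    sensor_id INTEGER,
    reading_timestamp INTEGER
);
INSERT INTO sensor_reading (sensor_id, reading_timestamp) VALUES
    (1, 100), (1, 200), (2, 120), (2, 300), (3, 50);
""")

# Join each row back to its group's maximum timestamp; only the
# latest row per sensor matches.
rows = conn.execute("""
    SELECT sr.sensor_id, sr.reading_timestamp
    FROM sensor_reading AS sr
    INNER JOIN (
        SELECT sensor_id, MAX(reading_timestamp) AS reading_timestamp
        FROM sensor_reading
        GROUP BY sensor_id
    ) AS sr2
      ON sr2.sensor_id = sr.sensor_id
     AND sr2.reading_timestamp = sr.reading_timestamp
    ORDER BY sr.sensor_id
""").fetchall()

print(rows)  # [(1, 200), (2, 300), (3, 50)]
```

One caveat worth remembering: unlike the anti-join forms, this variant can return two rows for a sensor if two readings share the exact same maximum timestamp.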

Related

Getting LIMIT value from table in SQL

I am trying to run an insert statement in SQL for a fixed time. So far I have tried this and it works as I wanted, but is there any way to combine these two parts ?
INSERT INTO assigns (AgencyName, ScoutID, RequestID)
SELECT AgencyName, ScoutID, RequestID
FROM employs NATURAL JOIN Scout NATURAL JOIN agency_response NATURAL JOIN Request
WHERE Answer = #option AND AgencyName = #agency_name
LIMIT 1;
This inserts one row into the assigns table. But I have the desired LIMIT value in the table that I obtained from the NATURAL JOINs; in this case it is stored in NumberOfScouts. The query below returns 8, for example, and I want the limit to be 8.
SELECT NumberOfScouts
FROM employs NATURAL JOIN Scout NATURAL JOIN agency_response NATURAL JOIN Request
WHERE Answer = #option AND AgencyName = #agency_name;
Is there any way to get the integer value used in LIMIT from the table I queried? I tried putting the LIMIT value into the upper parts of the query, but it gave a syntax error.
You can use window functions:
WITH asr as (
    SELECT ?.AgencyName, ?.ScoutID, ?.RequestID, ?.NumberOfScouts,
           ROW_NUMBER() OVER (ORDER BY ?.AgencyName) as seqnum
    FROM employs e JOIN
         Scout s
         ON ?? JOIN -- JOIN conditions here
         agency_response ar
         ON ?? JOIN -- JOIN conditions here
         Request r
         ON ?? -- JOIN conditions here
    WHERE ?.Answer = #option AND ?.AgencyName = #agency_name
)
SELECT ?.AgencyName, ?.ScoutID, ?.RequestID
FROM asr
WHERE seqnum <= NumberOfScouts;
Notes:
Use table aliases which are abbreviations of table names.
Qualify all column references so you -- and everyone else -- knows what table columns come from.
Use JOIN with ON/USING so you know what columns are used for the JOINs.
I describe NATURAL JOIN as an abomination because it does not use properly declared foreign key relationships for the JOIN condition. Plus, most of my tables have createdAt and createdBy columns which would confuse the so-called "natural" join.
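The seqnum <= NumberOfScouts trick generalises: a row is kept only while its row number is within the per-row limit value. A minimal sketch using Python's built-in SQLite (a single flattened table and all values invented for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE candidates (
    AgencyName TEXT,
    ScoutID INTEGER,
    NumberOfScouts INTEGER   -- the per-row "limit" value
);
INSERT INTO candidates VALUES
    ('acme', 1, 2), ('acme', 2, 2), ('acme', 3, 2);
""")

# Number the rows, then keep only those whose sequence number is
# within the limit stored in the table itself.
rows = conn.execute("""
    WITH asr AS (
        SELECT AgencyName, ScoutID, NumberOfScouts,
               ROW_NUMBER() OVER (ORDER BY ScoutID) AS seqnum
        FROM candidates
        WHERE AgencyName = 'acme'
    )
    SELECT ScoutID FROM asr WHERE seqnum <= NumberOfScouts
""").fetchall()

print(rows)  # [(1,), (2,)] -- only NumberOfScouts (= 2) rows survive
```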

Cross-Apply bad for a larger database or alternatives perform better?

So, 2 (more like 3) questions: is my query just badly coded or badly thought out? (Be kind, I only just discovered CROSS APPLY and am relatively new.) And is CROSS APPLY even the best sort of join to be using here, or why is it slow?
So I have a database table (test_tble) of around 66 million records. I then have a ##Temp_tble created which has one column called Ordr_nbr (nchar(13)). This basically holds the order numbers I wish to find.
The test_tble has 4 columns (Ordr_nbr, destination, shelf_no, dte_bought).
This is my current query which works the exact way I want it to but it seems to be quite slow performance.
select ##Temp_tble.Ordr_nbr, test_table1.destination, test_table1.shelf_no,test_table1.dte_bought
from ##MyTempTable
cross apply(
select top 1 test_table.destination,Test_Table.shelf_no,Test_Table.dte_bought
from Test_Table
where ##MyTempTable.Order_nbr = Test_Table.order_nbr
order by dte_bought desc)test_table1
If the ##Temp_tble only has 17 orders to search for, it takes around 2 mins. As you can see, I'm trying to get just the most recent dte_bought (i.e. the max(dte_bought)) for each order.
In terms of indexes, I ran the Database Engine Tuning Advisor and it says the query is optimized, and I have all the relevant indexes created, such as a clustered index on test_tble on dte_bought desc including order_nbr, etc.
The execution plan is using a index scan(on non_clustered) and a key lookup(on clustered).
My end result is it to return all the order_nbrs in ##MyTempTble along with columns of destination, shelf_no, dte_bought in relation to that order_nbr, but only the most recent bought ones.
Sorry if I explained this awfully, any info needed that I can provide just ask. I'm not asking for just downright "give me code", more of guidance,advice and learning. Thank you in advance.
UPDATE
I have now tried a sort of left join. It runs noticeably quicker, but still not instant or very fast (about 30 seconds), and it also doesn't return just the most recent dte_bought. Any ideas? See below for the left join code.
select a.Order_Nbr,b.Destination,b.LnePos,b.Dte_bought
from ##MyTempTble a
left join Test_Table b
on a.Order_Nbr = b.Order_Nbr
where b.Destination is not null
UPDATE 2
Attempted another left join with a max dte_bought. It runs very quickly, but only returns the order_nbr; the other columns are NULL. Any suggestions?
select a.Order_nbr,b.Destination,b.Shelf_no,b.Dte_Bought
from ##MyTempTable a
left join
(select * from Test_Table where Dte_bought = (
select max(dte_bought) from Test_Table)
)b on b.Order_nbr = a.Order_nbr
order by Dte_bought asc
K.M
Instead of CROSS APPLY() you can use an INNER JOIN with a subquery. Check the following query (note that the subquery must also select T.Order_nbr so the outer join condition can reference it):
SELECT TempT.Ordr_nbr,
       TestT.destination,
       TestT.shelf_no,
       TestT.dte_bought
FROM ##MyTempTable TempT
INNER JOIN (
    SELECT T.Order_nbr,
           T.destination,
           T.shelf_no,
           T.dte_bought,
           ROW_NUMBER() OVER (PARTITION BY T.Order_nbr ORDER BY T.dte_bought DESC) AS ID
    FROM Test_Table T
) TestT
    ON TestT.ID = 1
   AND TempT.Ordr_nbr = TestT.Order_nbr
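The shape of that ROW_NUMBER approach is portable across databases (the question is SQL Server, but the pattern is standard SQL). Here is a minimal sketch in Python's built-in SQLite with simplified, invented data, showing that the row numbered 1 within each order is its most recent purchase:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE test_table (
    order_nbr TEXT,
    destination TEXT,
    dte_bought INTEGER
);
INSERT INTO test_table VALUES
    ('A', 'x', 10), ('A', 'y', 30), ('B', 'z', 5);
CREATE TABLE temp_orders (order_nbr TEXT);
INSERT INTO temp_orders VALUES ('A'), ('B');
""")

# Rank each order's purchases newest-first, then keep rank 1 only.
rows = conn.execute("""
    SELECT t.order_nbr, tt.destination, tt.dte_bought
    FROM temp_orders t
    INNER JOIN (
        SELECT order_nbr, destination, dte_bought,
               ROW_NUMBER() OVER (PARTITION BY order_nbr
                                  ORDER BY dte_bought DESC) AS rn
        FROM test_table
    ) tt ON tt.rn = 1 AND tt.order_nbr = t.order_nbr
    ORDER BY t.order_nbr
""").fetchall()

print(rows)  # [('A', 'y', 30), ('B', 'z', 5)]
```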

Query with multiple table joins taking too much time despite indexing

Query-
SELECT SUM(sale_data.total_sale) as totalsale, `sale_data_temp`.`customer_type_cy` as `customer_type`, `distributor_list`.`customer_status` FROM `distributor_list` LEFT JOIN `sale_data` ON `sale_data`.`depo_code` = `distributor_list`.`depo_code` and `sale_data`.`customer_code` = `distributor_list`.`customer_code` LEFT JOIN `sale_data_temp` ON `distributor_list`.`address_coordinates` = `sale_data_temp`.`address_coordinates` LEFT JOIN `item_master` ON `sale_data`.`item_code` = `item_master`.`item_code` WHERE `invoice_date` BETWEEN "2017-04-01" and "2017-11-01" AND `item_master`.`id_category` = 1 GROUP BY `distributor_list`.`address_coordinates`
Query, rewritten with formatting.
SELECT SUM(sale_data.total_sale) as totalsale,
sale_data_temp.customer_type_cy as customer_type,
distributor_list.customer_status
FROM distributor_list
LEFT JOIN sale_data
ON sale_data.depo_code = distributor_list.depo_code
and sale_data.customer_code = distributor_list.customer_code
LEFT JOIN sale_data_temp
ON distributor_list.address_coordinates = sale_data_temp.address_coordinates
LEFT JOIN item_master
ON sale_data.item_code = item_master.item_code
WHERE invoice_date BETWEEN "2017-04-01" and "2017-11-01"
AND item_master.id_category = 1
GROUP BY distributor_list.address_coordinates
DESC-
This query is taking 7.5 seconds to run. My application contains 3-4 such queries, so the loading time approaches 1 min on the server.
My sale data table contains 450K records.
Distributor list contains 970 records
Item master contains 7774 records and sale_data_temp contains 324 records.
I am using indexing, but the index is not being used for the sale_data table.
All 400K records are scanned, as is evident from EXPLAIN.
If I reduce the duration of the BETWEEN clause, the sale_data table uses the date index; otherwise it scans all 400K rows.
There are 84,000 rows between 2017-04-01 and 2017-11-01, but it still scans 400K rows.
MYSQL EXPLAIN-
I have modified queries two times with no success.
Modification 1:
SELECT SUM(sale_data.total_sale) as totalsale,
       `sale_data_temp`.`customer_type_cy` as `customer_type`,
       `distributor_list`.`customer_status`
FROM `distributor_list`
LEFT JOIN `sale_data`
    ON `sale_data`.`depo_code` = `distributor_list`.`depo_code`
   AND `sale_data`.`customer_code` = `distributor_list`.`customer_code`
   AND `invoice_date` BETWEEN "2017-04-01" and "2017-11-01"
LEFT JOIN `sale_data_temp`
    ON `distributor_list`.`address_coordinates` = `sale_data_temp`.`address_coordinates`
LEFT JOIN `item_master`
    ON `sale_data`.`item_code` = `item_master`.`item_code`
WHERE `item_master`.`id_category` = 1
GROUP BY `distributor_list`.`address_coordinates`
Modification 2
SELECT SQL_NO_CACHE SUM(sd.total_sale) AS totalsale,
       `sale_data_temp`.`customer_type_cy` AS `customer_type`,
       `distributor_list`.`customer_status`
FROM `distributor_list`
LEFT JOIN (
    SELECT * FROM `sale_data`
    WHERE `invoice_date` BETWEEN "2017-04-01" AND "2017-11-01"
) sd
    ON `sd`.`depo_code` = `distributor_list`.`depo_code`
   AND `sd`.`customer_code` = `distributor_list`.`customer_code`
LEFT JOIN `sale_data_temp`
    ON `distributor_list`.`address_coordinates` = `sale_data_temp`.`address_coordinates`
LEFT JOIN `item_master`
    ON `sd`.`item_code` = `item_master`.`item_code`
WHERE `item_master`.`id_category` = 1
GROUP BY `distributor_list`.`address_coordinates`
HERE ARE MY INDEXES ON SALE DATA TABLE
See the key column of the EXPLAIN results view: no key is being used at the moment, so MySQL is not using any of your indexes to filter out rows and is scanning the whole table on each query. This is why it is taking so long.
I have taken a look at your first query with relation to your sale_data indices. It looks like you will need to create a new composite index on this table that contains the following columns only:
depo_code, customer_code, item_code, invoice_date, total_sale
I recommend that you name this index test1 and experiment with the ordering of its columns, testing with EXPLAIN EXTENDED after each change until the index is selected: you want to see test1 appear in the key column.
See this answer that has helped me before with this, and it will help you understand the importance of correctly ordering your composite indices.
Looking at the cardinality of the single field indices, here is my best attempt at giving you the correct index to apply:
ALTER TABLE `sale_data` ADD INDEX `test1` (`item_code`, `customer_code`, `invoice_date`, `depo_code`, `total_sale`);
Good luck with your mission!
A few things to notice about your query.
You are misusing the notorious MySQL extension to GROUP BY. Read this, then mention the same columns in your GROUP BY clause as you mention in your SELECT clause.
Your LEFT JOIN sale_data and LEFT JOIN item_master operations are actually ordinary JOIN operations. Why? You mention columns from those tables in your WHERE clause.
Your best bet for speedup is doing a date-range scan on an index on sale_data.invoice_date. For some reason known only to the MySQL query planner's feverish machinations, you're not getting it.
Try refactoring your query. Here's one suggestion:
SELECT SUM(sale_data.total_sale) as totalsale,
sale_data_temp.customer_type_cy as customer_type,
distributor_list.customer_status
FROM distributor_list
JOIN sale_data
ON sale_data.invoice_date BETWEEN "2017-04-01" and "2017-11-01"
and sale_data.depo_code = distributor_list.depo_code
and sale_data.customer_code = distributor_list.customer_code
LEFT JOIN sale_data_temp
ON distributor_list.address_coordinates = sale_data_temp.address_coordinates
JOIN item_master
ON sale_data.item_code = item_master.item_code
WHERE item_master.id_category = 1
GROUP BY sale_data_temp.customer_type_cy, distributor_list.customer_status
Try creating a covering index on sale_data for this query. You'll have to mess around a bit to get this right, but this is a starting point. (invoice_date, item_code, depo_code, customer_code, total_sale). The point of a covering index is to allow the query to be satisfied entirely from the index without having to refer back to the table's data. That's why I included total_sale in the index.
Please notice that index I suggested makes your index on invoice_date redundant. You can drop that index.
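The effect of a covering index can be observed directly in the query plan. This sketch uses Python's built-in SQLite (table and index names invented for the demo; in MySQL you would instead look for "Using index" in the Extra column of EXPLAIN):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sale_data (
    invoice_date TEXT,
    item_code INTEGER,
    total_sale REAL
);
-- Every column the query touches is in the index, so the table
-- itself never needs to be read.
CREATE INDEX cover ON sale_data (invoice_date, item_code, total_sale);
""")

plan = conn.execute("""
    EXPLAIN QUERY PLAN
    SELECT item_code, total_sale
    FROM sale_data
    WHERE invoice_date BETWEEN '2017-04-01' AND '2017-11-01'
""").fetchall()

detail = " ".join(row[-1] for row in plan)
print(detail)  # mentions a COVERING INDEX search on 'cover'
```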

MySQL Statement with multi nested JOINs and Distinct Limited Ordering

I'm attempting to build a list of results based on three joins
I have created a table of leads; as my sales team takes action on the leads, they attach event note records to the leads. One lead can have many notes. Each note has a timestamp and also a date/time field where they can set a future date in order to schedule call-backs and appointments.
I have no trouble building the list, with all my leads associated with their respective event notes, but what I want to do in this particular case is query a smaller list of leads that are associated with only the event note containing the "newest"/highest value in the date_time column.
I've been digging around, especially here on Stack Overflow, for the last couple of days, attempting to get the desired result from my statements. I get either all of the lead records with all of their associated event note records, or just one, no matter what I use: (GROUP BY date_time ASC LIMIT 1) or (ORDER BY date_time ASC LIMIT 1). I've even tried to build a view with only the latest scheduled record for each lead.id.
SELECT
rr_leads.id AS 'Lead',
rr_leads.first,
rr_leads.last,
rr_leads.company,
rr_leads.phone,
rr_leads.email,
rr_leads.city,
rr_leads.zip,
rr_leads.status,
z.noteid,
z.taskid,
z.scheduled,
z.event
FROM rr_leads
LEFT JOIN
(
SELECT
rr_lead_notes.lead_id,
rr_lead_notes.id AS 'noteid',
rr_lead_tasks.id AS 'taskid',
rr_lead_notes.date_time AS 'scheduled',
rr_lead_notes.task_note,
rr_lead_tasks.task_step AS 'event'
FROM rr_lead_notes
LEFT JOIN rr_lead_tasks
ON rr_lead_notes.task_note = rr_lead_tasks.task_step
AND rr_lead_notes.id IS NOT NULL
AND rr_lead_notes.task_note IS NOT NULL
GROUP BY rr_lead_notes.id DESC
) z
ON rr_leads.id = z.lead_id
WHERE rr_leads.id IS NOT NULL
AND z.noteid IS NOT NULL
ORDER BY rr_leads.id DESC
Here is the general idea of getting data associated with a most recent event. You can adjust for your particular situation.
select yourfields
from table1 join othertables etc
join (
    select id, max(time_stamp) maxts
    from table1
    where whatever
    group by id
) temp
    on table1.id = temp.id
   and table1.time_stamp = maxts
where whatever
Make sure the where clauses in your main query and subquery are the same.
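Applied to the lead/notes shape from the question, a minimal sketch of that pattern in Python's built-in SQLite (invented toy values) looks like this:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE rr_leads (id INTEGER PRIMARY KEY, company TEXT);
CREATE TABLE rr_lead_notes (
    id INTEGER PRIMARY KEY,
    lead_id INTEGER,
    date_time INTEGER,
    event TEXT
);
INSERT INTO rr_leads (id, company) VALUES (1, 'Acme'), (2, 'Globex');
INSERT INTO rr_lead_notes (lead_id, date_time, event) VALUES
    (1, 100, 'call'), (1, 300, 'meeting'), (2, 200, 'email');
""")

# Join each lead's notes back to the per-lead maximum date_time,
# so only the most recently scheduled note survives.
rows = conn.execute("""
    SELECT l.id, l.company, n.event, n.date_time
    FROM rr_leads l
    JOIN rr_lead_notes n ON n.lead_id = l.id
    JOIN (
        SELECT lead_id, MAX(date_time) AS maxts
        FROM rr_lead_notes
        GROUP BY lead_id
    ) latest ON latest.lead_id = n.lead_id
            AND n.date_time = latest.maxts
    ORDER BY l.id
""").fetchall()

print(rows)  # [(1, 'Acme', 'meeting', 300), (2, 'Globex', 'email', 200)]
```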

Query efficiency (multiple selects)

I have two tables - one called customer_records and another called customer_actions.
customer_records has the following schema:
CustomerID (auto increment, primary key)
CustomerName
...etc...
customer_actions has the following schema:
ActionID (auto increment, primary key)
CustomerID (relates to customer_records)
ActionType
ActionTime (UNIX time stamp that the entry was made)
Note (TEXT type)
Every time a user carries out an action on a customer record, an entry is made in customer_actions, and the user is given the opportunity to enter a note. ActionType can be one of a few values (like 'designatory update' or 'added case info' - can only be one of a list of options).
What I want to be able to do is display a list of records from customer_records where the last ActionType was a certain value.
So far, I've searched the net/SO and come up with this monster:
SELECT * FROM (
SELECT * FROM (
SELECT * FROM `customer_actions` ORDER BY `EntryID` DESC
) list1 GROUP BY `CustomerID`
) list2 WHERE `ActionType`='whatever' LIMIT 0,30
Which is great - it lists each customer ID and their last action. But the query is extremely slow on occasions (note: there are nearly 20,000 records in customer_records). Can anyone offer any tips on how I can sort this monster of a query out or adjust my table to give faster results? I'm using MySQL. Any help is really appreciated, thanks.
Edit: To be clear, I need to see a list of customers who's last action was 'whatever'.
To filter customers by their last action, you could use a correlated sub-query...
SELECT
    *
FROM
    customer_records
INNER JOIN
    customer_actions
        ON  customer_actions.CustomerID = customer_records.CustomerID
        AND customer_actions.ActionTime = (
            SELECT
                MAX(ActionTime)
            FROM
                customer_actions AS lookup
            WHERE
                CustomerID = customer_records.CustomerID
        )
WHERE
    customer_actions.ActionType = 'Whatever'
You may find it more efficient to avoid the correlated sub-query as follows...
SELECT
    *
FROM
    customer_records
INNER JOIN
    (SELECT CustomerID, MAX(ActionTime) AS ActionTime FROM customer_actions GROUP BY CustomerID) AS last_action
        ON customer_records.CustomerID = last_action.CustomerID
INNER JOIN
    customer_actions
        ON  customer_actions.CustomerID = last_action.CustomerID
        AND customer_actions.ActionTime = last_action.ActionTime
WHERE
    customer_actions.ActionType = 'Whatever'
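A quick sanity check of the correlated form on toy data, using Python's built-in SQLite (the schema in the question names the timestamp column ActionTime, so that's used here; all values are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customer_records (CustomerID INTEGER PRIMARY KEY, CustomerName TEXT);
CREATE TABLE customer_actions (
    ActionID INTEGER PRIMARY KEY,
    CustomerID INTEGER,
    ActionType TEXT,
    ActionTime INTEGER
);
INSERT INTO customer_records VALUES (1, 'Ann'), (2, 'Bob');
INSERT INTO customer_actions (CustomerID, ActionType, ActionTime) VALUES
    (1, 'added case info', 10), (1, 'designatory update', 20),
    (2, 'designatory update', 5), (2, 'added case info', 15);
""")

# Keep a customer only if their MOST RECENT action has the wanted type.
correlated = conn.execute("""
    SELECT cr.CustomerID
    FROM customer_records cr
    JOIN customer_actions ca
      ON ca.CustomerID = cr.CustomerID
     AND ca.ActionTime = (SELECT MAX(ActionTime)
                          FROM customer_actions
                          WHERE CustomerID = cr.CustomerID)
    WHERE ca.ActionType = 'added case info'
""").fetchall()

print(correlated)  # [(2,)] -- only Bob's LAST action matches
```

Ann also has an 'added case info' action, but it is not her latest one, which is exactly the distinction the simple-join answers below miss.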
I'm not sure if I understand the requirements but it looks to me like a JOIN would be enough for that.
SELECT cr.CustomerID, cr.CustomerName, ...
FROM customer_records cr
INNER JOIN customer_actions ca ON ca.CustomerID = cr.CustomerID
WHERE `ActionType` = 'whatever'
ORDER BY
ca.EntryID
Note that 20,000 records should not pose a performance problem.
Please note that I've adapted Lieven's answer (I made a separate post as this was too long for a comment). Any credit for the solution itself goes to him, I'm just trying to show you some key points for improving performance.
If speed is a concern then the following should give you some suggestions for improving it:
select
    cr.CustomerID,
    cr.CustomerName,
    cr.MoreDetail1,
    cr.Etc
from customer_records cr
inner join customer_actions ca
    on ca.CustomerID = cr.CustomerID
where ca.ActionType = 'x'
order by cr.CustomerID
limit 100 -- Change as required; MySQL uses LIMIT rather than TOP
A few notes:
In some cases I find left outer joins to be faster than inner joins; it would be worth measuring the performance of both for this query
Avoid returning * wherever possible
You don't have to reference 'cr.x' in the initial select, but it's a good habit to get into for when you start working on large queries that can have multiple joins in them (this will make a lot more sense once you start doing it)
When using joins always join on a primary key
Maybe I'm missing something but what's wrong with a simple join and a where clause?
Select ActionType, ActionTime, Note
FROM Customer_Records CR
INNER JOIN customer_Actions CA
ON CR.CustomerID = CA.CustomerID
Where ActionType = 'added case info'