MySQL query with GROUP BY giving random results

I was here once before and was assisted tremendously:
mysql group by returning incorrect result
I am now stuck with a similar Query which has an added table:
here is the Fiddle http://sqlfiddle.com/#!2/7fe40/5
There are 3 tables: Project, Task and Timesheet
The Task table has a foreign key (project_id) which links back to the parent Project; it also holds a value for "assigned hours".
The timesheet table holds the "actual_hours" and the user's name.
What I am trying to do is SUM how many actual hours have been spent on each project.
This query returns incorrect, seemingly random results:
SELECT DISTINCT
`timesheet`.`name`,
SUM(`timesheet`.`hours`) AS `total_hours`,
`project`.`project_name`,
`task`.`task_name`,
SUM(`task`.`hrs`) AS `assigned_hours`
FROM
`task`
INNER JOIN `project` ON (`task`.`project_id` = `project`.`project_id`)
INNER JOIN `timesheet` ON (`task`.`task_id` = `timesheet`.`task_id`)
GROUP BY
`project`.`project_name`
Any help greatly appreciated.
Thanks

MySQL supports non-standard grouping. The fix is to group by all non-aggregated selected columns:
...
GROUP BY `timesheet`.`name`, `project`.`project_name`, `task`.`task_name`
You should also remove the DISTINCT keyword.
In MySQL only (all other databases will raise an exception), grouping by fewer than all non-aggregated columns returns an arbitrary row for each unique combination of the grouped columns (in practice it is usually the first row encountered, but YMMV if you rely on this).
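Applied to the query above, a corrected version following that advice might look like this (a sketch, dropping DISTINCT and grouping by every non-aggregated selected column):
SELECT
`timesheet`.`name`,
SUM(`timesheet`.`hours`) AS `total_hours`,
`project`.`project_name`,
`task`.`task_name`,
SUM(`task`.`hrs`) AS `assigned_hours`
FROM
`task`
INNER JOIN `project` ON (`task`.`project_id` = `project`.`project_id`)
INNER JOIN `timesheet` ON (`task`.`task_id` = `timesheet`.`task_id`)
GROUP BY
`timesheet`.`name`, `project`.`project_name`, `task`.`task_name`
Note that this returns one row per (name, project, task) combination rather than one row per project; if you only want per-project totals, see the next answer.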

If you just want the sum for the projects, why are you including the tasks?
SELECT SUM(ts.`hours`) AS `total_hours`,
p.`project_name`,
SUM(t.`hrs`) AS `assigned_hours`
FROM `task` t INNER JOIN
`project` p
ON t.`project_id` = p.`project_id` INNER JOIN
`timesheet` ts
ON t.`task_id` = ts.`task_id`
GROUP BY p.`project_name`;
I suspect, though, that this might give inaccurate results, because there may be multiple timesheet records for each task (which would multiply the assigned hours). So you can try:
SELECT SUM(ts.total_hours) AS `total_hours`,
p.`project_name`,
SUM(t.`hrs`) AS `assigned_hours`
FROM `task` t INNER JOIN
`project` p
ON t.`project_id` = p.`project_id` INNER JOIN
(select ts.task_id, sum(ts.hours) as total_hours
from `timesheet` ts
group by ts.task_id
) ts
ON t.`task_id` = ts.`task_id`
GROUP BY p.`project_name`;
Here is a SQL Fiddle.

Related

SQL Temporary Table or Select

I've got a problem with a MySQL SELECT statement.
I have a table with different departments and statuses; there are 4 statuses for every department, but in any given month not every status necessarily occurs, and I would still like the analytics graph to show a '0' for the missing ones.
My problem with the SELECT statement is that it only shows the statuses that exist (of course :D).
Is it possible to create a temporary table with all of the departments and statuses and a status count of 0, and then update it with the values from another SELECT?
Here is the SELECT statement, plus screenshots of how it looks in the perfect situation and in the bad situation:
SELECT utd.Departament, uts.statusDef AS statusoforder, COUNT(uts.statusDef) AS Ilosc_Statusow
FROM ur_tasks_details utd
INNER JOIN ur_tasks_status uts on utd.StatusOfOrder = uts.statusNR
WHERE month = 'Sierpien'
GROUP BY uts.statusDef,utd.Departament
(Screenshots: perfect scenario, then bad scenario.)
I've tried with "union" statements, but I don't know if it is possible to take only "the highest value" for every department.
I've also heard about CTEs (WITH clauses), but I don't really get how to use them. Would love to get some tips on that!
Thanks for your help.
Use a cross join to generate the rows you want. Then use a left join and aggregation to bring in the data:
select d.Departament, uts.statusDef as statusoforder,
Count(utd.StatusOfOrder) as Ilosc_Statusow
from (select distinct utd.Departament
from ur_tasks_details utd
) d cross join
ur_tasks_status uts left join
ur_tasks_details utd
on utd.Departament = d.Departament and
utd.StatusOfOrder = uts.statusNR and
utd.month = 'Sierpien'
group by uts.statusDef, d.Departament;
The first subquery should be your source of all the departments.
I also suspect that month is in the details table, so that condition should be part of the ON clause (as above), not the WHERE clause.
Note that the count is taken on a column from the details table, so department/status combinations with no matching rows come out as 0 rather than 1.
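Since you asked about CTEs: on MySQL 8.0+ the same idea can be written with a WITH clause. A minimal sketch using the same tables and columns (counting the details column so missing combinations show 0):
WITH departments AS (
  SELECT DISTINCT utd.Departament
  FROM ur_tasks_details utd
)
SELECT d.Departament,
       uts.statusDef AS statusoforder,
       COUNT(utd.StatusOfOrder) AS Ilosc_Statusow
FROM departments d
CROSS JOIN ur_tasks_status uts
LEFT JOIN ur_tasks_details utd
       ON utd.Departament = d.Departament
      AND utd.StatusOfOrder = uts.statusNR
      AND utd.month = 'Sierpien'
GROUP BY uts.statusDef, d.Departament;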

Query with multiple table joins taking too much time despite indexing

Query:
SELECT SUM(sale_data.total_sale) as totalsale, `sale_data_temp`.`customer_type_cy` as `customer_type`, `distributor_list`.`customer_status` FROM `distributor_list` LEFT JOIN `sale_data` ON `sale_data`.`depo_code` = `distributor_list`.`depo_code` and `sale_data`.`customer_code` = `distributor_list`.`customer_code` LEFT JOIN `sale_data_temp` ON `distributor_list`.`address_coordinates` = `sale_data_temp`.`address_coordinates` LEFT JOIN `item_master` ON `sale_data`.`item_code` = `item_master`.`item_code` WHERE `invoice_date` BETWEEN "2017-04-01" and "2017-11-01" AND `item_master`.`id_category` = 1 GROUP BY `distributor_list`.`address_coordinates`
Query, rewritten with formatting.
SELECT SUM(sale_data.total_sale) as totalsale,
sale_data_temp.customer_type_cy as customer_type,
distributor_list.customer_status
FROM distributor_list
LEFT JOIN sale_data
ON sale_data.depo_code = distributor_list.depo_code
and sale_data.customer_code = distributor_list.customer_code
LEFT JOIN sale_data_temp
ON distributor_list.address_coordinates = sale_data_temp.address_coordinates
LEFT JOIN item_master
ON sale_data.item_code = item_master.item_code
WHERE invoice_date BETWEEN "2017-04-01" and "2017-11-01"
AND item_master.id_category = 1
GROUP BY distributor_list.address_coordinates
Description:
This query takes 7.5 seconds to run. My application contains 3-4 such queries, so the loading time approaches 1 minute on the server.
My sale_data table contains 450K records.
distributor_list contains 970 records.
item_master contains 7774 records, and sale_data_temp contains 324 records.
I am using indexing, but it is not being used for the sale_data table.
All 400K records are searched, as is evident from the EXPLAIN output.
If I reduce the range of the BETWEEN clause, the sale_data table uses the date index; otherwise it scans all 400K rows.
The rows between 2017-04-01 and 2017-11-01 number about 84,000, but it still scans 400K rows.
MySQL EXPLAIN: (screenshot not included)
I have modified the query twice, with no success.
Modification 1:
SELECT SUM(sale_data.total_sale) as totalsale, `sale_data_temp`.`customer_type_cy` as `customer_type`, `distributor_list`.`customer_status` FROM `distributor_list` LEFT JOIN `sale_data` ON `sale_data`.`depo_code` = `distributor_list`.`depo_code` and `sale_data`.`customer_code` = `distributor_list`.`customer_code` AND `invoice_date` BETWEEN "2017-04-01" and "2017-11-01" LEFT JOIN `sale_data_temp` ON `distributor_list`.`address_coordinates` = `sale_data_temp`.`address_coordinates` LEFT JOIN `item_master` ON `sale_data`.`item_code` = `item_master`.`item_code` WHERE `item_master`.`id_category` = 1 GROUP BY `distributor_list`.`address_coordinates`
Modification 2:
SELECT SQL_NO_CACHE SUM( sd.total_sale ) AS totalsale, `sale_data_temp`.`customer_type_cy` AS `customer_type` , `distributor_list`.`customer_status` FROM `distributor_list` LEFT JOIN (SELECT * FROM `sale_data` WHERE `invoice_date` BETWEEN "2017-04-01" AND "2017-11-01")sd ON `sd`.`depo_code` = `distributor_list`.`depo_code` AND `sd`.`customer_code` = `distributor_list`.`customer_code` LEFT JOIN `sale_data_temp` ON `distributor_list`.`address_coordinates` = `sale_data_temp`.`address_coordinates` LEFT JOIN `item_master` ON `sd`.`item_code` = `item_master`.`item_code` WHERE `item_master`.`id_category` =1 GROUP BY `distributor_list`.`address_coordinates`
Here are my indexes on the sale_data table (screenshot not included):
Look at the key column of the EXPLAIN results: no key is being used at the moment, so MySQL is not using any of your indexes to filter out rows and is scanning the whole table on each query. This is why it is taking so long.
I have taken a look at your first query with relation to your sale_data indices. It looks like you will need to create a new composite index on this table that contains the following columns only:
depo_code, customer_code, item_code, invoice_date, total_sale
I recommend that you name this index test1, experiment with the ordering of the columns, and re-test each time using EXPLAIN EXTENDED until a key is selected; you want to see test1 appear in the key column.
See this answer, which has helped me before with this; it will help you understand the importance of correctly ordering your composite indices.
Looking at the cardinality of the single-field indices, here is my best attempt at giving you the correct index to apply:
ALTER TABLE `sale_data` ADD INDEX `test1` (`item_code`, `customer_code`, `invoice_date`, `depo_code`, `total_sale`);
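For instance, once the index exists, run the problem query under EXPLAIN and check whether test1 shows up in the key column; this is just the query from the question wrapped in EXPLAIN, nothing new:
EXPLAIN
SELECT SUM(sale_data.total_sale) AS totalsale,
       sale_data_temp.customer_type_cy AS customer_type,
       distributor_list.customer_status
FROM distributor_list
LEFT JOIN sale_data
       ON sale_data.depo_code = distributor_list.depo_code
      AND sale_data.customer_code = distributor_list.customer_code
LEFT JOIN sale_data_temp
       ON distributor_list.address_coordinates = sale_data_temp.address_coordinates
LEFT JOIN item_master
       ON sale_data.item_code = item_master.item_code
WHERE invoice_date BETWEEN "2017-04-01" AND "2017-11-01"
  AND item_master.id_category = 1
GROUP BY distributor_list.address_coordinates;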
Good luck with your mission!
A few things to notice about your query.
You are misusing the notorious MySQL extension to GROUP BY. Read this, then mention the same columns in your GROUP BY clause as you mention in your SELECT clause.
Your LEFT JOIN sale_data and LEFT JOIN item_master operations are actually ordinary JOIN operations. Why? You mention columns from those tables in your WHERE clause.
Your best bet for speedup is doing a date-range scan on an index on sale_data.invoice_date. For some reason known only to the MySQL query planner's feverish machinations, you're not getting it.
Try refactoring your query. Here's one suggestion:
SELECT SUM(sale_data.total_sale) as totalsale,
sale_data_temp.customer_type_cy as customer_type,
distributor_list.customer_status
FROM distributor_list
JOIN sale_data
ON sale_data.invoice_date BETWEEN "2017-04-01" and "2017-11-01"
and sale_data.depo_code = distributor_list.depo_code
and sale_data.customer_code = distributor_list.customer_code
LEFT JOIN sale_data_temp
ON distributor_list.address_coordinates = sale_data_temp.address_coordinates
JOIN item_master
ON sale_data.item_code = item_master.item_code
WHERE item_master.id_category = 1
GROUP BY sale_data_temp.customer_type_cy, distributor_list.customer_status
Try creating a covering index on sale_data for this query. You'll have to mess around a bit to get this right, but this is a starting point. (invoice_date, item_code, depo_code, customer_code, total_sale). The point of a covering index is to allow the query to be satisfied entirely from the index without having to refer back to the table's data. That's why I included total_sale in the index.
Please notice that the index I suggested makes your existing index on invoice_date redundant; you can drop that index.
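If it helps, a sketch of that covering index (the name covering_ix is just an example; the column order follows the suggestion above and is a starting point to experiment with):
ALTER TABLE `sale_data`
  ADD INDEX `covering_ix` (`invoice_date`, `item_code`, `depo_code`, `customer_code`, `total_sale`);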

MySQL many-to-many 3 tables query

EDIT: I missed the one main issue I was having. I want to display all the unique 'device_MAC' rows, so I want this query to output 3 rows (as per the original query). The issue I am having is connecting the data table to the remotenodes table via dt_short = rn_short, using the row with the maximum timestamp for each dt_short in the data table.
I am having trouble running a query on 3 tables (2 have many to many relations).
What I am trying to do:
Get each distinct rn_IEEE from the remotenodes table with the maximum timestamp (in the example this will get 3 rows with 3 distinct short addresses rn_short)
Join with the devicenames table on device_IEEE
Get each distinct dt_short from the data table with the maximum timestamp
Join dt_short with rn_short from the query above
The problem I am running into is that I can do each of the above queries individually, and I have even gotten the first 3 of them together into one query, but I cannot seem to properly join the last bit of data to get the result that I want.
I have been going in circles trying to solve this. Here is a link to an SQL Fiddle which contains all the test data and the query as far as I got it; it does what I want for the first row, but the columns from the 'data' table are NULL after the first row:
See this SQL Fiddle
After going through your requirements and the data, it looks like you just need to change your query to include an INNER JOIN on the data table instead of a LEFT JOIN.
See SQL Fiddle with Demo
select rn.*, dn.*, d.*
from remotenodes rn
inner join devicenames dn
on rn.rn_IEEE = dn.device_IEEE
and rn.rn_timestamp = (SELECT MAX(rn_timestamp) FROM remotenodes
WHERE rn.rn_IEEE = rn_IEEE
GROUP BY rn_IEEE)
inner join data d
on rn.rn_short = d.dt_short
AND d.dt_timestamp = (SELECT MAX(d2.dt_timestamp) AS ts
FROM data d2
WHERE d.dt_short = d2.dt_short
GROUP BY d2.dt_short)
The query you have in your SQL Fiddle is right. Instead of using a LEFT JOIN, use an INNER JOIN so that it will give you the first row.
Cheers.
Thanks for all your answers everyone. I managed to solve the problem by using views.
It's not the most efficient way but I think it will do for now.
Here is the SQL Fiddle link:
http://sqlfiddle.com/#!2/4076e/8
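For anyone curious, here is a rough sketch of what a view-based solution could look like; the view names (latest_data_ts, latest_data, latest_remotenodes_ts, latest_remotenodes) are made up for illustration, and the fiddle has the actual definitions. The small helper views exist because older MySQL versions don't allow subqueries in a view's FROM clause:
CREATE VIEW latest_data_ts AS
SELECT dt_short, MAX(dt_timestamp) AS max_ts
FROM `data`
GROUP BY dt_short;

CREATE VIEW latest_data AS
SELECT d.*
FROM `data` d
JOIN latest_data_ts m
  ON d.dt_short = m.dt_short AND d.dt_timestamp = m.max_ts;

CREATE VIEW latest_remotenodes_ts AS
SELECT rn_IEEE, MAX(rn_timestamp) AS max_ts
FROM remotenodes
GROUP BY rn_IEEE;

CREATE VIEW latest_remotenodes AS
SELECT rn.*
FROM remotenodes rn
JOIN latest_remotenodes_ts m
  ON rn.rn_IEEE = m.rn_IEEE AND rn.rn_timestamp = m.max_ts;

SELECT rn.*, dn.*, d.*
FROM latest_remotenodes rn
JOIN devicenames dn ON rn.rn_IEEE = dn.device_IEEE
LEFT JOIN latest_data d ON rn.rn_short = d.dt_short;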
Try this query; for me it's returning one row:
SELECT rn_short, rn_IEEE, device_name
FROM
(SELECT DISTINCTROW dt_short
 FROM (SELECT * FROM `data` ORDER BY `dt_timestamp` DESC) as data) as a
JOIN
(SELECT rn_IEEE, rn_short, device_name
 FROM devicenames dn
 JOIN (SELECT DISTINCTROW rn_IEEE, rn_short
       FROM (SELECT * FROM `remotenodes` ORDER BY `rn_timestamp` DESC) as remotenodes
       GROUP BY rn_IEEE) as rn
   ON dn.device_IEEE = rn.rn_IEEE) as b
ON a.dt_short = b.rn_short

Query efficiency (multiple selects)

I have two tables - one called customer_records and another called customer_actions.
customer_records has the following schema:
CustomerID (auto increment, primary key)
CustomerName
...etc...
customer_actions has the following schema:
ActionID (auto increment, primary key)
CustomerID (relates to customer_records)
ActionType
ActionTime (UNIX time stamp that the entry was made)
Note (TEXT type)
Every time a user carries out an action on a customer record, an entry is made in customer_actions, and the user is given the opportunity to enter a note. ActionType can be one of a few values (like 'designatory update' or 'added case info' - can only be one of a list of options).
What I want to be able to do is display a list of records from customer_records where the last ActionType was a certain value.
So far, I've searched the net/SO and come up with this monster:
SELECT * FROM (
SELECT * FROM (
SELECT * FROM `customer_actions` ORDER BY `EntryID` DESC
) list1 GROUP BY `CustomerID`
) list2 WHERE `ActionType`='whatever' LIMIT 0,30
Which is great - it lists each customer ID and their last action. But the query is extremely slow on occasions (note: there are nearly 20,000 records in customer_records). Can anyone offer any tips on how I can sort this monster of a query out or adjust my table to give faster results? I'm using MySQL. Any help is really appreciated, thanks.
Edit: To be clear, I need to see a list of customers whose last action was 'whatever'.
To filter customers by their last action, you could use a correlated sub-query on ActionTime (the timestamp column from your schema)...
SELECT
*
FROM
customer_records
INNER JOIN
customer_actions
ON customer_actions.CustomerID = customer_records.CustomerID
AND customer_actions.ActionTime = (
SELECT
MAX(ActionTime)
FROM
customer_actions AS lookup
WHERE
CustomerID = customer_records.CustomerID
)
WHERE
customer_actions.ActionType = 'Whatever'
You may find it more efficient to avoid the correlated sub-query as follows...
SELECT
*
FROM
customer_records
INNER JOIN
(SELECT CustomerID, MAX(ActionTime) AS ActionTime FROM customer_actions GROUP BY CustomerID) AS last_action
ON customer_records.CustomerID = last_action.CustomerID
INNER JOIN
customer_actions
ON customer_actions.CustomerID = last_action.CustomerID
AND customer_actions.ActionTime = last_action.ActionTime
WHERE
customer_actions.ActionType = 'Whatever'
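Whichever form you use, an index that lets MySQL find each customer's latest action quickly should help both queries. A hedged sketch (the index name is just an example), assuming ActionTime is what identifies the latest action:
ALTER TABLE customer_actions
  ADD INDEX ix_customer_actiontime (CustomerID, ActionTime);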
I'm not sure if I understand the requirements but it looks to me like a JOIN would be enough for that.
SELECT cr.CustomerID, cr.CustomerName, ...
FROM customer_records cr
INNER JOIN customer_actions ca ON ca.CustomerID = cr.CustomerID
WHERE `ActionType` = 'whatever'
ORDER BY
ca.EntryID
Note that 20,000 records should not pose a performance problem.
Please note that I've adapted Lieven's answer (I made a separate post as this was too long for a comment). Any credit for the solution itself goes to him, I'm just trying to show you some key points for improving performance.
If speed is a concern then the following should give you some suggestions for improving it:
select
cr.CustomerID,
cr.CustomerName,
cr.MoreDetail1,
cr.Etc
from customer_records cr
inner join customer_actions ca
on ca.CustomerID = cr.CustomerID
where ca.ActionType = 'x'
order by cr.CustomerID
limit 100 -- Change as required
A few notes:
In some cases I find left outer joins to be faster than inner joins - it would be worth measuring performance of both for this query
Avoid returning * wherever possible
You don't have to reference 'cr.x' in the initial select, but it's a good habit to get into for when you start working on large queries that can have multiple joins in them (this will make a lot of sense once you start doing it).
When using joins always join on a primary key
Maybe I'm missing something but what's wrong with a simple join and a where clause?
Select ActionType, ActionTime, Note
FROM Customer_Records CR
INNER JOIN customer_Actions CA
ON CR.CustomerID = CA.CustomerID
Where ActionType = 'added case info'

MySQL JOIN tables with WHERE clause

I need to gather posts from two mysql tables that have different columns and provide a WHERE clause to each set of tables. I appreciate the help, thanks in advance.
This is what I have tried...
SELECT
blabbing.id,
blabbing.mem_id,
blabbing.the_blab,
blabbing.blab_date,
blabbing.blab_type,
blabbing.device,
blabbing.fromid,
team_blabbing.team_id
FROM
blabbing
LEFT OUTER JOIN
team_blabbing
ON team_blabbing.id = blabbing.id
WHERE
team_id IN ($team_array) ||
mem_id='$id' ||
fromid='$logOptions_id'
ORDER BY
blab_date DESC
LIMIT 20
I know that this is messy, but I'll admit, I am no MySQL veteran. I'm a beginner at best... Any suggestions?
You could put the where-clauses in subqueries:
select
*
from
(select * from ... where ...) as alias1 -- this is a subquery
left outer join
(select * from ... where ...) as alias2 -- this is also a subquery
on
....
order by
....
Note that you can't use subqueries like this in a view definition.
You could also combine the where-clauses, as in your example. Use table aliases to distinguish between columns of different tables (it's a good idea to use aliases even when you don't have to, just because it makes things easier to read). Example:
select
*
from
<table> as alias1
left outer join
<othertable> as alias2
on
....
where
alias1.id = ... and alias2.id = ... -- aliases distinguish between ids!!
order by
....
Two suggestions for you as a relative newbie in SQL: use "aliases" for your tables to avoid SuperLongTableNameReferencesForColumns, and always qualify the column names in a query. It makes your life easier, and helps anyone AFTER you know which columns come from which table, especially when the same column name exists in different tables; it prevents ambiguity in the query. Your left join, I think, from the sample, may be ambiguous, but can you confirm the join of B.ID to TB.ID? Typically a "Team_ID" would appear once in a teams table, and each blabbing entry would carry the "Team_ID" that the posting was from, in addition to its OWN "ID" as the blabbing table's unique key.
SELECT
B.id,
B.mem_id,
B.the_blab,
B.blab_date,
B.blab_type,
B.device,
B.fromid,
TB.team_id
FROM
blabbing B
LEFT JOIN team_blabbing TB
ON B.ID = TB.ID
WHERE
TB.Team_ID IN ( you can't do a direct $team_array here )
OR B.mem_id = SomeParameter
OR b.FromID = AnotherParameter
ORDER BY
B.blab_date DESC
LIMIT 20
Where you were trying the $team_array, you would have to build out the full list as expected, such as
TB.Team_ID IN ( 1, 4, 18, 23, 58 )
Also, use the SQL "OR" keyword, not the logical "||" operator.
EDIT -- per your comment
This could be done in a variety of ways: building and executing dynamic SQL, calling the query multiple times (once for each ID) and merging the results, or joining to yet another temp table that gets cleaned out, say, daily.
Suppose you have another table such as "TeamJoins" with 3 columns: a date, a session ID and a team_id. You could purge anything more than a day old, and/or clear out the rows each time the same session ID (which appears to be coming from PHP) runs a new query. Give it two indexes: one on the date (to simplify the daily purging), and a second on (sessionID, team_id) for the join.
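A rough sketch of what that helper table might look like (column names and types here are illustrative guesses, not a prescription):
CREATE TABLE TeamJoins (
  created_date DATE NOT NULL,                 -- when the row was written, for daily purging
  session_id   VARCHAR(64) NOT NULL,          -- the PHP session identifier
  team_id      INT NOT NULL,
  KEY ix_created (created_date),              -- simplifies the daily purge
  KEY ix_session_team (session_id, team_id)   -- supports the join
);

-- purge anything more than a day old
DELETE FROM TeamJoins WHERE created_date < CURDATE() - INTERVAL 1 DAY;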
Then, loop through and insert rows into the "TeamJoins" table with those elements.
THEN, instead of a hard-coded IN list, you could change that part to
...
FROM
blabbing B
LEFT JOIN team_blabbing TB
ON B.ID = TB.ID
LEFT JOIN TeamJoins TJ
on TB.Team_ID = TJ.Team_ID
WHERE
TB.Team_ID IS NOT NULL
OR B.mem_id ... rest of query
What I ended up doing is:
I added an extra column called team_id to my blabbing table (set to null), as well as another column called mem_id to my team_blabbing table.
Then I changed the insert script to also insert a value for mem_id in team_blabbing.
After doing this I did a simple UNION ALL in the query:
SELECT
*
FROM
blabbing
WHERE
mem_id='$id' OR
fromid='$logOptions_id'
UNION ALL
SELECT
*
FROM
team_blabbing
WHERE
team_id
IN
($team_array)
ORDER BY
blab_date DESC
LIMIT 20
I am open to any thoughts on what I did. Try not to be too harsh though :) Thanks again for all the info.