MySQL structuring a query

I've been trying to structure a massive query, and I finally managed to finish it. However, when I went from my dev environment (small database) to testing on the live environment (big database), I ran into performance problems.
I think the answer can be found here: https://dba.stackexchange.com/a/16376
But is there really no way around it? The reason I am putting the subqueries in VIEWs at all is that they involve more complex constructs.
Example of the VIEWS / queries:
pjl view:
(SELECT `pj`.`id` AS `id`,`pj`.`globalId` AS `globalId`,`pj`.`date` AS `date`,`pj`.`serverId` AS `serverId`,`pj`.`playerId` AS `playerId`,'playerjoins' AS `origin`
FROM `playerjoins` `pj`)
UNION ALL
(SELECT `pl`.`id` AS `id`,`pl`.`globalId` AS `globalId`,`pl`.`date` AS `date`,`pl`.`serverId` AS `serverId`,`pl`.`playerId` AS `playerId`,'playerleaves' AS `origin`
FROM `playerleaves` `pl`)
ll_below view:
SELECT `ll`.`id` AS `id`,`ll`.`globalId` AS `globalId`,`ll`.`date` AS `date`,`ll`.`serverId` AS `serverId`,`ll`.`gamemodeId` AS `gamemodeId`,`ll`.`mapId` AS `mapId`,`pjl`.`origin` AS `origin`,`pjl`.`date` AS `pjldate`,`pjl`.`playerId` AS `playerId`
FROM `pjl`
JOIN `levelsloaded` `ll`
ON `pjl`.`date` <= `ll`.`date`
The now-simple query:
SELECT * FROM
(
(SELECT * FROM ll_below WHERE playerId = 976) llbelow
INNER JOIN
(SELECT id, MAX(pjldate) AS maxdate FROM ll_below WHERE playerId = 976 GROUP BY id) llbelow_inner
ON llbelow.id = llbelow_inner.id AND llbelow.pjldate = llbelow_inner.maxdate
)
WHERE origin = 'playerjoins'
ORDER BY date DESC
I could put everything into one big query, but in my eyes that becomes a big mess.
I also know why the performance hit is so severe: MySQL cannot use the MERGE algorithm for the pjl view because it contains a UNION ALL. If I push the WHERE playerId = 976 clauses down into the right places (a sketch of what that looks like for pjl is below), the performance hit is gone, but then I'd also have a query of 50 lines or so.
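For illustration, the pushed-down filter for just the pjl part would look roughly like this (an untested sketch based on the view definition above):
(SELECT `id`,`globalId`,`date`,`serverId`,`playerId`,'playerjoins' AS `origin`
FROM `playerjoins`
WHERE `playerId` = 976)
UNION ALL
(SELECT `id`,`globalId`,`date`,`serverId`,`playerId`,'playerleaves' AS `origin`
FROM `playerleaves`
WHERE `playerId` = 976)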
Can someone please suggest what to do if I want performance and a query that is still concise?

This clause:
WHERE origin = 'playerjoins'
Means that you didn't need to do a UNION at all, since you're not using any of the rows from pl by the end of the query.
You're right that the view is likely forcing a temporary table instead of using the merge algorithm.
UNION ALL also creates its own temporary table. This case is optimized in MySQL 5.7.3 (still pre-alpha as of this writing), according to Bug #50674 Do not create temporary tables for UNION ALL.
Also, the GROUP BY is probably creating a third level of temporary table.
I see you're also doing a greatest-n-per-group operation, to match the rows with the max date per id. There are different solutions for this type of operation, which don't use a subquery. See my answers for example:
Retrieving the last record in each group
Fetch the row which has the Max value for a column
Depending on the number of rows and other conditions, I've seen both solutions for greatest-n-per-group queries give better performance. So you should test both solutions and see which is better given the state and size of your data.
I think you should unravel the views and unions and subqueries. See if you can apply the various WHERE conditions (like playerId=976) directly against the base tables before doing joins and aggregates. That should greatly reduce the number of examined rows, and avoid the multiple layers of temp tables caused by the view and union and group by.
Re your comment:
The query you seem to want is the most recent join per level for one specific player.
Something like this:
SELECT ll.id,
ll.globalId,
ll.date AS leveldate,
ll.serverId,
ll.gamemodeId,
ll.mapId,
pj.date AS joindate,
pj.playerId
FROM levelsloaded AS ll
INNER JOIN playerjoins AS pj
ON pj.date <= ll.date
LEFT OUTER JOIN playerjoins AS pj2
ON pj.playerId = pj2.playerId AND pj2.date <= ll.date AND pj.date < pj2.date
WHERE pj.playerId = 976
AND pj2.playerID IS NULL
ORDER BY joindate DESC
(I have not tested this query, but it should get you started.)

Bill is absolutely correct... your views don't really provide any benefit. I've tried to build something for you, but my interpretation may not be exactly right. Start by asking yourself, in simple words, what you are trying to get. Here is what I came up with.
I'm looking for a single player (hence your player ID = 976). I'm also only considering the PLAYERJOINS instance (not the player leaving, which knocks out the union part). For this player, I want the most recent date they joined a game. With that date as the baseline, I want all levels loaded that were created on or after that maximum join date.
So the first query is nothing but the maximum date for player 976 from the playerjoins table; nothing else, and no other user, matters. The ID here is the same as it would be in the levelsloaded table via the join, so getting that player ID and the same levelsloaded ID for the same person is, IMO, pointless. Then get the rest of the details from levelsloaded on/after the max date for the same person, and order by whatever you like.
If my interpretation of your query is incorrect, offer clarification and I'll adjust.
SELECT
ll.id,
ll.globalId,
ll.`date`,
ll.serverId,
ll.gamemodeId,
ll.mapId,
'playerjoins' as origin,
playerMax.MaxDate AS pjldate
FROM
( SELECT MAX( pj.`date` ) as MaxDate
FROM playerjoins pj
where pj.id = 976 ) playerMax
JOIN levelsloaded ll
ON ll.id = 976
AND playerMax.MaxDate <= ll.`date`

Related

Query takes too long to run

I am running the query below to retrieve the unique latest result based on a date field within the same table. But this query takes too much time as the table grows. Any suggestion to improve it is welcome.
select t2.*
from
(
    select
        (
            select id
            from ctc_pre_assets ti
            where ti.ctcassettag = t1.ctcassettag
            order by ti.createddate desc
            limit 1
        ) lid
    from
        (
            select distinct ctcassettag
            from ctc_pre_assets
        ) t1
) ro,
ctc_pre_assets t2
where t2.id = ro.lid
order by id
Our table may contain the same row multiple times, but each row has a different timestamp. My objective is, based on a single column (for example assettag), to retrieve a single row for each assettag with the latest timestamp.
It's simpler, and probably faster, to find the newest date for each ctcassettag and then join back to find the whole row that matches.
This does assume that no ctcassettag has multiple rows with the same createddate, in which case you can get back more than one row per ctcassettag.
SELECT ctc_pre_assets.*
FROM ctc_pre_assets
INNER JOIN
(
    SELECT ctcassettag, MAX(createddate) AS createddate
    FROM ctc_pre_assets
    GROUP BY ctcassettag
) newest
    ON newest.ctcassettag = ctc_pre_assets.ctcassettag
    AND newest.createddate = ctc_pre_assets.createddate
ORDER BY ctc_pre_assets.id
EDIT: To deal with multiple rows with the same date.
You haven't actually said how to pick which row you want in the event that multiple rows are for the same ctcassettag on the same createddate. So, this solution just chooses the row with the lowest id from amongst those duplicates.
SELECT ctc_pre_assets.*
FROM ctc_pre_assets
WHERE ctc_pre_assets.id =
(
    SELECT lookup.id
    FROM ctc_pre_assets lookup
    WHERE lookup.ctcassettag = ctc_pre_assets.ctcassettag
    ORDER BY lookup.createddate DESC, lookup.id ASC
    LIMIT 1
)
This does still use a correlated sub-query, which is slower than a simple nested-sub-query (such as my first answer), but it does deal with the "duplicates".
You can change the rules on which row to pick by changing the ORDER BY in the correlated sub-query.
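For example, to prefer the newest id among rows that share the latest createddate (assuming a higher id means a newer row), the ORDER BY in the correlated sub-query becomes:
ORDER BY
    lookup.createddate DESC,
    lookup.id DESC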
It's also very similar to your own query, but with one less join.
Nested queries are generally known to take longer than a conventional query. Can you prepend EXPLAIN to the query and post the results here? That will help us analyse exactly which query/table is taking longer to respond.
Check whether the table has indexes. Unindexed tables are not advisable (unless there is an obvious reason to leave them unindexed) and are alarmingly slow at executing queries.
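For example (untested; the index name is only a placeholder, and whether it helps depends on your data and queries):
SHOW INDEX FROM ctc_pre_assets;

-- a composite index matching the lookups used in the answers above
CREATE INDEX idx_ctcassettag_createddate
    ON ctc_pre_assets (ctcassettag, createddate);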
On the contrary, I think the best approach is to avoid writing nested queries altogether. Better: run each of the queries separately and then use the results (in array or list format) in the second query.
First some questions that you should at least ask yourself, but maybe also give us an answer to improve the accuracy of our responses:
Is your data normalized? If yes, maybe you should make an exception to avoid this brutal subquery problem
Are you using indexes? If yes, which ones, and are you using them to the fullest?
Some suggestions to improve the readability and maybe performance of the query:
- Use joins
- Use group by
- Use aggregators
Example (untested, so might not work, but should give an impression):
SELECT t2.*
FROM (
SELECT id AS lid
FROM ctc_pre_assets
GROUP BY ctcassettag
HAVING createddate = max(createddate)
ORDER BY ctcassettag DESC
) ro
INNER JOIN ctc_pre_assets t2 ON t2.id = ro.lid
ORDER BY id
Using normalization is great, but there are a few caveats where normalization causes more harm than good. This seems like one of those situations, but without your tables in front of me, I can't tell for sure.
Using distinct the way you are doing, I can't help but get the feeling you might not get all relevant results - maybe someone else can confirm or deny this?
It's not that subqueries are all bad, but they tend to create massive scalability issues if written incorrectly. Make sure you use them the right way (google it?)
Indexes can potentially save you a bunch of time - if you actually use them. It's not enough to set them up; you have to write queries that actually use your indexes. Google this as well.

MySQL Selecting things where a condition on a row is met 2 or more times, but showing the two or more results

If I use GROUP BY then I will get just 1 row per group. For example
Sessions table: SessionId (other things)
Actions table: ActionId, SessionId, (other things)
With:
SELECT S.*, A.* FROM ActionList A JOIN SessionList S ON A.SessionId = S.SessionId
WHERE 1 /*various criteria to filter*/
ORDER BY S.SessionId DESC, ActionId DESC;
Thus showing me the most recent session at the top. Now I want to look at only sessions with 2 or more actions.
If I use GROUP BY A.SessionId then I can get COUNT(ActionId) and use HAVING to look only at groups with the required count, but I won't get both (or more) rows, just the one.
I suspect I can do this by JOINing a table of SessionIds and the count of action IDs, but I'm fairly new to joins (I could do this via a subquery and ANY).
If a view would help, I would create a view of the form:
SELECT SessionId, COUNT(*) FROM Actions GROUP BY SessionId;
Or put this in brackets and JOIN on it (but I confess I'd have to look up 3-table joins).
What is the neatest way to do this?
Also, is this where "foreign keys" come into play? That'd probably stop the "ambiguity errors" I get if I don't qualify SessionId. I've avoided them for fear of TRIGGERs; I also didn't know about JOINs and just used subqueries until recently. I've realised it is stupid to avoid things that were added to help.
Additionally, I'm quite timid with joins because I know what they do - well, in the worst case. If I JOIN a table with m rows to another with n, I end up with m*n rows. That could be VERY large! I'm dealing with large tables (as in: the schema won't fit in RAM), so that is quite scary. I do know MySQL optimises well (able to move stuff from HAVING to WHERE and so forth), but still!
If you want to look at sessions with two or more actions, then use a join:
select sl.*
from SessionList sl join
(select SessionId, count(*) as cnt
from Actions
group by SessionId
) a
on sl.SessionId = a.SessionId and cnt > 1;
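A variant (untested, same table names as above) that filters inside the derived table, so single-action sessions are discarded before the join:
select sl.*
from SessionList sl join
     (select SessionId, count(*) as cnt
      from Actions
      group by SessionId
      having count(*) > 1
     ) a
     on sl.SessionId = a.SessionId;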

Why does a MySQL query take anywhere from 1 millisecond to 7 seconds?

I have an SQL query (see below) that returns exactly what I need, but when run through phpMyAdmin it takes anywhere from 0.0009 seconds to 0.1149 seconds, and occasionally all the way up to 7.4983 seconds.
Query:
SELECT
e.id,
e.title,
e.special_flag,
CASE WHEN a.date >= '2013-03-29' THEN a.date ELSE '9999-99-99' END as date,
CASE WHEN a.date >= '2013-03-29' THEN a.time ELSE '99-99-99' END as time,
cat.lastname
FROM e_table as e
LEFT JOIN a_table as a ON (a.e_id=e.id)
LEFT JOIN c_table as c ON (e.c_id=c.id)
LEFT JOIN cat_table as cat ON (cat.id=e.cat_id)
LEFT JOIN m_table as m ON (cat.name=m.name AND cat.lastname=m.lastname)
JOIN (
SELECT DISTINCT innere.id
FROM e_table as innere
LEFT JOIN a_table as innera ON (innera.e_id=innere.id AND
innera.date >= '2013-03-29')
LEFT JOIN c_table as innerc ON (innere.c_id=innerc.id)
WHERE (
(
innera.date >= '2013-03-29' AND
innera.flag_two=1
) OR
innere.special_flag=1
) AND
innere.flag_three=1 AND
innere.flag_four=1
ORDER BY COALESCE(innera.date, '9999-99-99') ASC,
innera.time ASC,
innere.id DESC LIMIT 0, 10
) AS elist ON (e.id=elist.id)
WHERE (a.flag_two=1 OR e.special_flag) AND e.flag_three=1 AND e.flag_four=1
ORDER BY a.date ASC, a.time ASC, e.id DESC
Explain Plan:
The question is:
Which part of this query could be causing the wide range of difference in performance?
To specifically answer your question: it's not a specific part of the query that's causing the wide range of performance. That's MySQL doing what it's supposed to do - being a Relational Database Management System (RDBMS), not just a dumb SQL wrapper around comma separated files.
When you execute a query, the following things happen:
The query is compiled to a 'parametrized' query, eliminating all variables down to the pure structural SQL.
The compilation cache is checked to find whether a recent usable execution plan is found for the query.
The query is compiled into an execution plan if needed (this is what the 'EXPLAIN' shows)
For each execution plan element, the memory caches are checked whether they contain fresh and usable data, otherwise the intermediate data is assembled from master table data.
The final result is assembled by putting all the intermediate data together.
What you are seeing is that when the query costs 0.0009 seconds, the cache was fresh enough to supply all the data directly; when it peaks at 7.5 seconds, either something changed in the queried tables, other queries pushed the in-memory cache data out, or the DBMS had some other reason to recompile the query or fetch all the data again. Some of the remaining variation probably comes down to whether the indexes being used are still cached in memory.
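If you want a rough picture of how much of this is cache behaviour, you can check the server counters (these statements apply to MySQL versions of that era; the query cache was removed in MySQL 8.0):
-- buffer pool: how often data pages are served from memory vs read from disk
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';

-- query cache configuration and hit/miss counters
SHOW VARIABLES LIKE 'query_cache%';
SHOW GLOBAL STATUS LIKE 'Qcache%';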
Concluding this, the query is ridiculously slow, you're just sometimes lucky that caching makes it appear fast.
To solve this, I'd recommend looking into 2 things:
First and foremost - a query this size should not have a single line in its execution plan reading "No possible keys". Research how indexes work, make sure you realize the impact of MySQL's limitation of using a single index per joined table, and tweak your database so that each line of the plan has an entry under 'key'.
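As a sketch of what that tweaking might look like (the column choices are guesses based on the joins in the query above, and the index names are made up; verify against your own EXPLAIN output):
-- cover the join from e_table to a_table plus the date filter
CREATE INDEX idx_a_e_id_date ON a_table (e_id, date);

-- cover the lookup of the category row
CREATE INDEX idx_e_cat_id ON e_table (cat_id);

-- then re-run EXPLAIN: every row of the plan should show a value under `key`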
Secondly, review the query itself. DBMSs are at their fastest when all they have to do is combine raw data. Programmatic elements like CASE and COALESCE are often useful, but they force the database to evaluate more at runtime than just raw table data. Try to eliminate such statements, or move them to the business logic as post-processing on the retrieved data.
Finally, never forget that MySQL is actually a rather stupid DBMS. It is optimized for performance in simple data fetching queries such as most websites require. As such it is much faster than SQL Server and Oracle for most generic problems. Once you start complicating things with functions, cases, huge join or matching conditions etc., the competitors are frequently much better optimized, and have better optimization in their query compilers. As such, when MySQL starts becoming slow in a specific query, consider splitting it up in 2 or more smaller queries just so it doesn't become confused, and do some postprocessing in PHP or whatever language you are calling with. I've seen many cases where this increased performance a LOT, just by not confusing MySQL, especially in cases where subqueries were involved (as in your case). Especially the fact that your subquery is a derived table, and not just a subquery, is known to complicate stuff for MySQL beyond what it can cope with.
Let's start with the fact that both your outer and inner queries work with the "e" table WITH a minimum requirement of flag_three = 1 AND flag_four = 1 (regardless of your inner query's ((x AND y) OR z) condition). Also, your outer WHERE clause refers explicitly to a.flag_two with no NULL check, which forces your LEFT JOIN to actually become an (INNER) JOIN. Also, it appears every "e" record MUST have a category, as you are looking for cat.lastname with no coalesce() if none is found; this makes sense as it appears to be a "lookup" table reference. As for the "m_table" and "c_table", you are not getting or doing anything with them, so they can be removed completely.
Would the following query get you the same results?
select
e1.id,
e1.Title,
e1.Special_Flag,
e1.cat_id,
coalesce( a1.date, '9999-99-99' ) ADate,
coalesce( a1.time, '99-99-99' ) ATime,
cat.LastName
from
e_table e1
LEFT JOIN a_table as a1
ON e1.id = a1.e_id
AND a1.flag_two = 1
AND a1.date >= '2013-03-29'
JOIN cat_table as cat
ON e1.cat_id = cat.id
where
e1.flag_three = 1
and e1.flag_four = 1
and ( e1.special_flag = 1
OR a1.id IS NOT NULL )
order by
IF( a1.id is null, 2, 1 ),
ADate,
ATime,
e1.ID Desc
limit
0, 10
The main WHERE clause qualifies ONLY those records that have the "three and four" flags set to 1, PLUS EITHER the special flag is set OR there is a valid "a" record on/after the given date in question.
From that, simple order by and limit.
As for getting the date and time, it appears that you only want records on/after the date to be included; otherwise ignore them (they are old and not applicable, and you don't want to see them).
In the ORDER BY, I am testing FIRST for a NULL value of the "a" ID. If so, we know they will all be forced to a date of '9999-99-99' and a time of '99-99-99' and we want them pushed to the bottom (hence 2); otherwise there IS an "a" record and you want those first (hence 1). Then sort by the date/time respectively, and then by ID descending in case there are many within the same date/time.
Finally, to help on the indexes, I would ensure your "e" table has an index on
( id, flag_three, flag_four, special_flag ).
For the "a" table, index on
(e_id, flag_two, date)
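In DDL form (the index names are just examples), that would be roughly:
CREATE INDEX idx_e_flags ON e_table (id, flag_three, flag_four, special_flag);
CREATE INDEX idx_a_e_id_flag_date ON a_table (e_id, flag_two, date);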

How do I join one table onto another where userid = userid but only for that date?

I'm looking to take the total time a user worked on each batch at his workstation, the total estimated work that was completed, the amount the user was paid, and how many failures the user has had for each day this year. If I can join all of this into one query then I can use it in excel and format things nicely in pivot tables and such.
EDIT: I've realized that it is only possible to do this in multiple queries, so I have narrowed my scope down to this:
SELECT batch_log.userid,
batches.operation_id,
SUM(TIME_TO_SEC(ramses.batch_log.time_elapsed)),
SUM(ramses.tasks.estimated_nonrecurring + ramses.tasks.estimated_recurring),
DATE(start_time)
FROM batch_log
JOIN batches ON batch_log.batch_id=batches.id
JOIN ramses.tasks ON ramses.batch_log.batch_id=ramses.tasks.batch_id
JOIN protocase.tblusers on ramses.batch_log.userid = protocase.tblusers.userid
WHERE DATE(ramses.batch_log.start_time) > "2011-01-01"
AND protocase.tblusers.active = 1
GROUP BY userid, batches.operation_id, start_time
ORDER BY start_time, userid ASC
The cross join was causing the problem.
No, in general a Having clause is used to filter the results of your Group by - for example, only reporting those who were paid for more than 24 hours in a day (HAVING SUM(ramses.timesheet_detail.paidTime) > 24). Unless you need to perform filtering of aggregate results, you shouldn't need a having clause at all.
Most of those conditions should be moved into a where clause, or as part of the joins, for two reasons - 1) Filtering should in general be done as soon as possible, to limit the work the query needs to perform. 2) If the filtering is already done, restating it may cause the query to perform additional, unneeded work.
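To illustrate the WHERE/HAVING split with the columns from your query (an untested sketch):
SELECT userid, SUM(paidTime) AS hours_paid
FROM ramses.timesheet_detail
WHERE DATE(for_day) >= '2011-01-01'   -- plain row filter: applied before grouping
GROUP BY userid
HAVING SUM(paidTime) > 24;            -- aggregate filter: applied after grouping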
From what I've seen so far, it appears that you're trying to roll things up by day - try changing the last column in the GROUP BY clause to date(ramses.batch_log.start_time); otherwise you're grouping by (what I assume is) a timestamp.
EDIT:
About schema names - yes, you can name them in the from and join sections. Often, too, the query may be able to resolve the needed schemas based on some default search list (how or if this is set up depends on your database).
Here is how I would have reformatted the query:
SELECT tblusers.userid, operations.name AS name,
SUM(TIME_TO_SEC(batch_log.time_elapsed)) AS time_elapsed,
SUM(tasks.estimated_nonrecurring + tasks.estimated_recurring) AS total_estimated,
SUM(timesheet_detail.paidTime) as hours_paid,
DATE(start_time) as date_paid
FROM tblusers
JOIN batch_log
ON tblusers.userid = batch_log.userid
AND DATE(batch_log.start_time) >= "2011-01-01"
JOIN batches
ON batch_log.batch_id = batches.id
JOIN operations
ON operations.id = batches.operation_id
JOIN tasks
ON batches.id = tasks.batch_id
JOIN timesheet_detail
ON tblusers.userid = timesheet_detail.userid
AND batch_log.start_time = timesheet_detail.for_day
AND DATE(timesheet_detail.for_day) = DATE(start_time)
WHERE tblusers.departmentid = 8
GROUP BY tblusers.userid, name, DATE(batch_log.start_time)
ORDER BY date_paid ASC
Of particular concern is the batch_log.start_time = timesheet_detail.for_day line, which is comparing (what are implied to be) timestamps. Are these really equal? I expect that one or both of these should be wrapped in a date() function.
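In other words, that part of the join might need to look more like this (untested):
JOIN timesheet_detail
    ON tblusers.userid = timesheet_detail.userid
    AND DATE(timesheet_detail.for_day) = DATE(batch_log.start_time)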
As for why you may be getting unexpected data - you appear to have eliminated some of your join conditions. Without knowing the exact setup and use of your database, I cannot give the exact reason for your results (or even able to say they are wrong), but I think the fact that you join to the operations table without any join condition is probably to blame - if there are 2 records in that table, it will double all of your previous results, and it looks like there may be 12. You also removed operations.name from the group by clause, which may or may not give you the results you want. I would look into the rest of your table relationships, and see if there are any further restrictions that need to be made.

How do I optimize this query?

The following query gets the info that I need. However, I noticed that as the tables grow, my code gets slower and slower. I'm guessing it is this query. Can it be written a different way to make it more efficient? I've heard a lot about using joins instead of subqueries; however, I don't "get" how to do it.
SELECT * FROM
(SELECT MAX(T.id) AS MAXid
FROM transactions AS T
GROUP BY T.position
ORDER BY T.position) AS result1,
(SELECT T.id AS id, T.symbol, T.t_type, T.degree, T.position, T.shares, T.price, T.completed, T.t_date,
DATEDIFF(CURRENT_DATE, T.t_date) AS days_past,
IFNULL(SUM(S.shares), 0) AS subtrans_shares,
T.shares - IFNULL(SUM(S.shares),0) AS due_shares,
(SELECT IFNULL(SUM(IF(SO.t_type = 'sell', -SO.shares, SO.shares )), 0)
FROM subtransactions AS SO WHERE SO.symbol = T.symbol) AS owned_shares
FROM transactions AS T
LEFT OUTER JOIN subtransactions AS S
ON T.id = S.transid
GROUP BY T.id
ORDER BY T.position) AS result2
WHERE MAXid = id
Your code:
(SELECT MAX(T.id) AS MAXid
FROM transactions AS T [<--- here ]
GROUP BY T.position
ORDER BY T.position) AS result1,
(SELECT T.id AS id, T.symbol, T.t_type, T.degree, T.position, T.shares, T.price, T.completed, T.t_date,
DATEDIFF(CURRENT_DATE, T.t_date) AS days_past,
IFNULL(SUM(S.shares), 0) AS subtrans_shares,
T.shares - IFNULL(SUM(S.shares),0) AS due_shares,
(SELECT IFNULL(SUM(IF(SO.t_type = 'sell', -SO.shares, SO.shares )), 0)
FROM subtransactions AS SO WHERE SO.symbol = T.symbol) AS owned_shares
FROM transactions AS T [<--- here ]
Notice the [<---- here ] marks I added to your code.
The first T is not in any way related to the second T. They have the same correlation alias, they refer to the same table, but they're entirely independent selects and results.
So what you're doing in the first, uncorrelated, subquery is getting the max id for all positions in transactions.
And then you're joining all the transaction.position.max(id)s to result2 (and result2 happens to be a join of all transaction.positions to subtransactions). (The internal ORDER BY is pointless and costly too, but that's not the main problem.)
You're joining every transaction.position.max(id) to every row that result2 selects.
So you're getting a Cartesian join -- every result1 row joined to every result2 row, unconditionally (nothing tells the database, for example, that they ought to be joined by (max) id or by position).
So if you have ten unique position.max(id)s in transactions, you're getting 100 rows. 1000 unique positions, a million rows. And so on.
On edit, after getting home: OK, you're not actually getting a Cartesian product; the WHERE MAXid = id does join result1 to result2. But you're still rolling up all the rows of transactions in both queries.
When you want to write a complicated query like this, it's a lot easier if you compose it out of simpler views. In particular, you can test each view on its own to make sure you're getting reasonable results, and then just join the views.
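For example (untested; the view name is made up), the "max id per position" piece could live in its own view, be tested on its own, and then be joined back to the base table:
CREATE VIEW latest_per_position AS
    SELECT position, MAX(id) AS maxid
    FROM transactions
    GROUP BY position;

SELECT t.*
FROM latest_per_position lp
JOIN transactions t ON t.id = lp.maxid
ORDER BY t.position;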
I would split the query into smaller chunks, probably using a stored proc. For example, get the max ids from transactions and put them into a temporary table. Then join this with subtransactions. This will make it easier for you and the optimizer to work out what is going on.
Also, without knowing what indexes are on your tables, it is hard to offer more advice.
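A minimal sketch of that idea with a temporary table (untested; the table name is made up):
CREATE TEMPORARY TABLE max_ids AS
    SELECT position, MAX(id) AS maxid
    FROM transactions
    GROUP BY position;

-- the small max_ids table can then be joined to transactions / subtransactions
SELECT t.*
FROM max_ids m
JOIN transactions t ON t.id = m.maxid;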
Put a benchmark function in the code. Then time each section of the code to determine where the slowdown is happening. Often the slowdown happens in a different query than you first guess. Determine the correct query that needs to be optimized before posting to Stack Overflow.