Query optimization for MySQL
I have the following query which takes about 28 seconds on my machine. I would like to optimize it and know if there is any way to make it faster by creating some indexes.
select rr1.person_id as person_id, rr1.t1_value, rr2.t0_value
from (select r1.person_id, avg(r1.avg_normalized_value1) as t1_value
from (select ma1.person_id, mn1.store_name, avg(mn1.normalized_value) as avg_normalized_value1
from matrix_report1 ma1, matrix_normalized_notes mn1
where ma1.final_value = 1
and (mn1.normalized_value != 0.2
and mn1.normalized_value != 0.0 )
and ma1.user_id = mn1.user_id
and ma1.request_id = mn1.request_id
and ma1.request_id = 4 group by ma1.person_id, mn1.store_name) r1
group by r1.person_id) rr1
,(select r2.person_id, avg(r2.avg_normalized_value) as t0_value
from (select ma.person_id, mn.store_name, avg(mn.normalized_value) as avg_normalized_value
from matrix_report1 ma, matrix_normalized_notes mn
where ma.final_value = 0 and (mn.normalized_value != 0.2 and mn.normalized_value != 0.0 )
and ma.user_id = mn.user_id
and ma.request_id = mn.request_id
and ma.request_id = 4
group by ma.person_id, mn.store_name) r2
group by r2.person_id) rr2
where rr1.person_id = rr2.person_id
Basically, it aggregates data depending on the request_id and final_value (0 or 1). Is there a way to simplify it for optimization? It would also be good to know which columns should be indexed. I created an index on user_id and request_id, but it doesn't help much.
There are about 4,907,424 rows in matrix_report1 and 335,740 rows in matrix_normalized_notes. These tables will grow as we get more requests.
First, the others are right: it helps to format your samples and to explain in plain language what you are trying to do. Including sample data and the expected results is even better.
That said, I think the query can be significantly simplified. Your two derived queries are almost completely identical; the only difference is the filter final_value = 1 versus final_value = 0. Since each query produces one record per person_id, you can compute both averages in a single pass using CASE/WHEN and remove the rest.
To help optimize the query, your matrix_report1 table should have an index on ( request_id, final_value, user_id ), and your matrix_normalized_notes table should have an index on ( request_id, user_id, store_name, normalized_value ).
Since your outer query averages the per-store averages, you do need to keep the nesting. The following should help.
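For reference, here is a sketch of the corresponding CREATE INDEX statements. The index names are mine, and the column lists in the CREATE TABLE statements are invented minimal stand-ins for the real schema; the demo runs against SQLite via Python's sqlite3, but the CREATE INDEX syntax is the same in MySQL.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
-- Minimal stand-in tables (real tables have more columns):
CREATE TABLE matrix_report1
    (person_id INT, user_id INT, request_id INT, final_value INT);
CREATE TABLE matrix_normalized_notes
    (user_id INT, request_id INT, store_name TEXT, normalized_value REAL);

-- Composite indexes matching the filter and join columns:
CREATE INDEX ix_report_req_final_user
    ON matrix_report1 (request_id, final_value, user_id);
CREATE INDEX ix_notes_req_user_store_val
    ON matrix_normalized_notes (request_id, user_id, store_name, normalized_value);
""")

# Confirm the indexes exist in the catalog.
names = [r[0] for r in con.execute(
    "SELECT name FROM sqlite_master WHERE type = 'index' ORDER BY name")]
print(names)  # ['ix_notes_req_user_store_val', 'ix_report_req_final_user']
```

The column order matters: equality filters (request_id, final_value) come first so the range of matching rows is contiguous in the index.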
SELECT
r1.person_id,
avg(r1.ANV1) as t1_value,
avg(r1.ANV0) as t0_value
from
( select
ma1.person_id,
mn1.store_name,
avg( case when ma1.final_value = 1
then mn1.normalized_value end ) as ANV1,
avg( case when ma1.final_value = 0
then mn1.normalized_value end ) as ANV0
from
matrix_report1 ma1
JOIN matrix_normalized_notes mn1
ON ma1.request_id = mn1.request_id
AND ma1.user_id = mn1.user_id
AND NOT mn1.normalized_value in ( 0.0, 0.2 )
where
ma1.request_id = 4
AND ma1.final_value in ( 0, 1 )
group by
ma1.person_id,
mn1.store_name) r1
group by
r1.person_id
Notice the inner query pulls all rows where the final value is either zero or one, but each AVG() is computed over a CASE/WHEN on the respective value. When the condition does not match, the CASE yields NULL, and NULL values are not considered when the average is computed.
So at this point the data is already grouped per person and store, with ANV1 and ANV0 computed for each. The outer query then rolls these values up per person, regardless of store. Again, NULL values are excluded from the average, so if store "A" has no value for ANV1 it will not skew the results, and similarly if store "B" has no value for ANV0.
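As a quick illustration of why this works, here is a minimal sketch using SQLite via Python's sqlite3 (a made-up toy table, not the asker's schema): AVG() over a CASE expression skips the NULLs produced when the condition fails, so both conditional averages come out of one pass. MySQL's AVG() ignores NULLs the same way.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE notes (person_id INT, final_value INT, normalized_value REAL);
INSERT INTO notes VALUES
  (1, 1, 0.5), (1, 1, 0.7),   -- final_value = 1 rows for person 1
  (1, 0, 0.9),                -- final_value = 0 row for person 1
  (2, 0, 0.4);                -- person 2 has no final_value = 1 rows
""")
rows = con.execute("""
SELECT person_id,
       AVG(CASE WHEN final_value = 1 THEN normalized_value END) AS t1_value,
       AVG(CASE WHEN final_value = 0 THEN normalized_value END) AS t0_value
FROM notes
GROUP BY person_id
ORDER BY person_id
""").fetchall()
print(rows)  # [(1, 0.6, 0.9), (2, None, 0.4)]
```

Person 2 gets NULL (Python None) for t1_value rather than a skewed zero, which is exactly the behavior the rollup relies on.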
Related
SQL to club records in sequence
I have data in a MySQL table that looks like this:

Key  Value
A    1
A    2
A    3
A    6
A    7
A    8
A    9
B    1
B    2

and I want to group it based on continuous sequences. The data is sorted in the table. The expected result is:

Key  Min  Max
A    1    3
A    6    9
B    1    2

I tried googling it but couldn't find any solution. Can someone please help me with this?
This is way easier with a modern DBMS that supports window functions, but you can find the upper bounds by checking that there is no successor. In the same way you can find the lower bounds via the absence of a predecessor. By combining the lowest upper bound for each lower bound we get the intervals:

select low.keyx, low.valx, min(high.valx)
from ( select t1.keyx, t1.valx
       from t t1
       where not exists ( select 1 from t t2
                          where t1.keyx = t2.keyx
                            and t1.valx = t2.valx + 1 ) ) as low
join ( select t3.keyx, t3.valx
       from t t3
       where not exists ( select 1 from t t4
                          where t3.keyx = t4.keyx
                            and t3.valx = t4.valx - 1 ) ) as high
  on low.keyx = high.keyx and low.valx <= high.valx
group by low.keyx, low.valx;

I changed your identifiers since value is a reserved word.

Using a window function is far more compact and efficient. If at all possible, consider upgrading to MySQL 8+; it is superior to 5.7 in so many aspects. We can create a group by looking at the difference between valx and an enumeration of the values: if there is a gap, the difference increases. Then we simply pick min and max for each group:

select keyx, min(valx), max(valx)
from ( select keyx, valx,
              valx - row_number() over (partition by keyx order by valx) as grp
       from t ) as tt
group by keyx, grp;

Fiddle
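Here is a runnable sketch of the window-function version using the question's sample data. It runs against SQLite (window functions need SQLite >= 3.25) via Python's sqlite3 for convenience; the SQL has the same shape in MySQL 8+.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t (keyx TEXT, valx INT);
INSERT INTO t VALUES
  ('A',1),('A',2),('A',3),('A',6),('A',7),('A',8),('A',9),
  ('B',1),('B',2);
""")
rows = con.execute("""
SELECT keyx, MIN(valx) AS lo, MAX(valx) AS hi
FROM ( SELECT keyx, valx,
              -- within a run of consecutive values this difference is constant;
              -- a gap makes it jump, starting a new group
              valx - ROW_NUMBER() OVER (PARTITION BY keyx ORDER BY valx) AS grp
       FROM t ) AS tt
GROUP BY keyx, grp
ORDER BY keyx, lo
""").fetchall()
print(rows)  # [('A', 1, 3), ('A', 6, 9), ('B', 1, 2)]
```

The output matches the expected Key/min/max result from the question.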
Optimizing Parameterized MySQL Queries
I have a query with a number of parameters which, if I run it from MySQLWorkbench, takes around a second. If I take this query, get rid of the parameters and substitute the values into it, it takes about 22 seconds to run; the same happens if I convert the query to a parameterized stored procedure and run that (it then takes about 22 seconds). I've enabled profiling on MySQL and I can see a few things there. For example, it shows the number of rows examined, and there's an order-of-magnitude difference (20,000 vs 400,000), which I assume is the reason for the 20x increase in processing time. The other difference in the profile is that the parameterized query sent from MySQLWorkbench still has the parameters in it (e.g. where limit < #lim) while in the sproc the values have been substituted (where limit < 300). I've tried this a number of different ways. I'm using JetBrains's DataGrip (as well as MySQLWorkbench), and it behaves like MySQLWorkbench (it sends through the # parameters). I've tried executing the queries and the sproc from MySQLWorkbench, DataGrip, Java (JDBC) and .Net. I've also tried prepared statements in Java, but I can't get anywhere near the performance of sending the 'raw' SQL to MySQL. I feel like I'm missing something obvious here but I don't know what it is. The query is relatively complex (it has a CTE, a couple of sub-selects and a couple of joins), but as I said it runs quickly straight from MySQL. My main question is why the query is 20x faster in one format than another. Does the way the query is sent to MySQL have anything to do with this (the '#' values sent through), and can I replicate this in a stored procedure?
Updated 1st Jan
Thanks for the comments. I didn't post the query originally as I'm more interested in the general concepts around the use of variables/parameters and how I could take advantage of that (or not). Here is the original query:

with tmp_bat as
(select bd.MatchId, bd.matchtype, bd.playerid, bd.teamid, bd.opponentsid,
        bd.inningsnumber, bd.dismissal, bd.dismissaltype, bd.bowlerid,
        bd.fielderid, bd.score, bd.position, bd.notout, bd.balls, bd.minutes,
        bd.fours, bd.sixes, bd.hundred, bd.fifty, bd.duck, bd.captain,
        bd.wicketkeeper, m.hometeamid, m.awayteamid, m.matchdesignator,
        m.matchtitle, m.location, m.tossteamid, m.resultstring, m.whowonid,
        m.howmuch, m.victorytype, m.duration, m.ballsperover, m.daynight,
        m.LocationId
 from (select * from battingdetails
       where matchid in (select id from matches
                         where id in (select matchid from battingdetails)
                           and matchtype = #match_type)) as bd
 join matches m on m.id = bd.matchid
 join extramatchdetails emd1 on emd1.MatchId = m.Id and emd1.TeamId = bd.TeamId
 join extramatchdetails emd2 on emd2.MatchId = m.Id and emd2.TeamId = bd.TeamId)
select players.fullname name,
       teams.teams team,
       '' opponents,
       players.sortnamepart,
       innings.matches,
       innings.innings,
       innings.notouts,
       innings.runs,
       HS.score highestscore,
       HS.NotOut,
       CAST(TRUNCATE(innings.runs / (CAST((Innings.Innings - innings.notOuts) AS DECIMAL)), 2) AS DECIMAL(7, 2)) 'Avg',
       innings.hundreds,
       innings.fifties,
       innings.ducks,
       innings.fours,
       innings.sixes,
       innings.balls,
       CONCAT(grounds.CountryName, ' - ', grounds.KnownAs) Ground,
       '' Year,
       '' CountryName
from (select count(case when inningsnumber = 1 then 1 end) matches,
             count(case when dismissaltype != 11 and dismissaltype != 14 then 1 end) innings,
             LocationId, playerid, MatchType,
             SUM(score) runs, SUM(notout) notouts, SUM(hundred) Hundreds,
             SUM(fifty) Fifties, SUM(duck) Ducks, SUM(fours) Fours,
             SUM(sixes) Sixes, SUM(balls) Balls
      from tmp_bat
      group by MatchType, playerid, LocationId) as innings
JOIN players ON players.id = innings.playerid
join grounds on Grounds.GroundId = LocationId and grounds.MatchType = innings.MatchType
join (select pt.playerid, t.matchtype, GROUP_CONCAT(t.name SEPARATOR ', ') as teams
      from playersteams pt
      join teams t on pt.teamid = t.id
      group by pt.playerid, t.matchtype) as teams
  on teams.playerid = innings.playerid and teams.matchtype = innings.MatchType
JOIN (SELECT playerid, LocationId, MAX(Score) Score, MAX(NotOut) NotOut
      FROM (SELECT battingdetails.playerid, battingdetails.score,
                   battingdetails.notout, battingdetails.LocationId
            FROM tmp_bat as battingdetails
            JOIN (SELECT battingdetails.playerid, battingdetails.LocationId,
                         MAX(battingdetails.Score) AS score
                  FROM tmp_bat as battingdetails
                  GROUP BY battingdetails.playerid, battingdetails.LocationId,
                           battingdetails.playerid) AS maxscore
              ON battingdetails.score = maxscore.score
             AND battingdetails.playerid = maxscore.playerid
             AND battingdetails.LocationId = maxscore.LocationId) AS internal
      GROUP BY internal.playerid, internal.LocationId) AS HS
  ON HS.playerid = innings.playerid and hs.LocationId = innings.LocationId
where innings.runs >= #runs_limit
order by runs desc, KnownAs, SortNamePart
limit 0, 300;

Wherever you see '#match_type' I substitute a value ('t'). This query takes ~1.1 secs to run. The query with the hard-coded values rather than the variables takes ~3.5 secs (see the other note below).
The EXPLAIN for this query gives this:

1,PRIMARY,<derived7>,,ALL,,,,,219291,100,Using temporary; Using filesort
1,PRIMARY,players,,eq_ref,PRIMARY,PRIMARY,4,teams.playerid,1,100,
1,PRIMARY,<derived2>,,ref,<auto_key3>,<auto_key3>,26,"teams.playerid,teams.matchtype",11,100,Using where
1,PRIMARY,grounds,,ref,GroundId,GroundId,4,innings.LocationId,1,10,Using where
1,PRIMARY,<derived8>,,ref,<auto_key0>,<auto_key0>,8,"teams.playerid,innings.LocationId",169,100,
8,DERIVED,<derived3>,,ALL,,,,,349893,100,Using temporary
8,DERIVED,<derived14>,,ref,<auto_key0>,<auto_key0>,13,"battingdetails.PlayerId,battingdetails.LocationId,battingdetails.Score",10,100,Using index
14,DERIVED,<derived3>,,ALL,,,,,349893,100,Using temporary
7,DERIVED,t,,ALL,PRIMARY,,,,3323,100,Using temporary; Using filesort
7,DERIVED,pt,,ref,TeamId,TeamId,4,t.Id,65,100,
2,DERIVED,<derived3>,,ALL,,,,,349893,100,Using temporary
3,DERIVED,matches,,ALL,PRIMARY,,,,114162,10,Using where
3,DERIVED,m,,eq_ref,PRIMARY,PRIMARY,4,matches.Id,1,100,
3,DERIVED,emd1,,ref,"PRIMARY,TeamId",PRIMARY,4,matches.Id,1,100,Using index
3,DERIVED,emd2,,eq_ref,"PRIMARY,TeamId",PRIMARY,8,"matches.Id,emd1.TeamId",1,100,Using index
3,DERIVED,battingdetails,,ref,"TeamId,MatchId,match_team",match_team,8,"emd1.TeamId,matches.Id",15,100,
3,DERIVED,battingdetails,,ref,MatchId,MatchId,4,matches.Id,31,100,Using index; FirstMatch(battingdetails)

and the EXPLAIN for the query with the hardcoded values looks like this:

1,PRIMARY,<derived8>,,ALL,,,,,20097,100,Using temporary; Using filesort
1,PRIMARY,players,,eq_ref,PRIMARY,PRIMARY,4,HS.PlayerId,1,100,
1,PRIMARY,grounds,,ref,GroundId,GroundId,4,HS.LocationId,1,100,Using where
1,PRIMARY,<derived2>,,ref,<auto_key0>,<auto_key0>,30,"HS.LocationId,HS.PlayerId,grounds.MatchType",17,100,Using where
1,PRIMARY,<derived7>,,ref,<auto_key0>,<auto_key0>,46,"HS.PlayerId,innings.MatchType",10,100,Using where
8,DERIVED,matches,,ALL,PRIMARY,,,,114162,10,Using where; Using temporary
8,DERIVED,m,,eq_ref,"PRIMARY,LocationId",PRIMARY,4,matches.Id,1,100,
8,DERIVED,emd1,,ref,"PRIMARY,TeamId",PRIMARY,4,matches.Id,1,100,Using index
8,DERIVED,emd2,,eq_ref,"PRIMARY,TeamId",PRIMARY,8,"matches.Id,emd1.TeamId",1,100,Using index
8,DERIVED,<derived14>,,ref,<auto_key2>,<auto_key2>,4,m.LocationId,17,100,
8,DERIVED,battingdetails,,ref,"PlayerId,TeamId,Score,MatchId,match_team",MatchId,8,"matches.Id,maxscore.PlayerId",1,3.56,Using where
8,DERIVED,battingdetails,,ref,MatchId,MatchId,4,matches.Id,31,100,Using index; FirstMatch(battingdetails)
14,DERIVED,matches,,ALL,PRIMARY,,,,114162,10,Using where; Using temporary
14,DERIVED,m,,eq_ref,PRIMARY,PRIMARY,4,matches.Id,1,100,
14,DERIVED,emd1,,ref,"PRIMARY,TeamId",PRIMARY,4,matches.Id,1,100,Using index
14,DERIVED,emd2,,eq_ref,"PRIMARY,TeamId",PRIMARY,8,"matches.Id,emd1.TeamId",1,100,Using index
14,DERIVED,battingdetails,,ref,"TeamId,MatchId,match_team",match_team,8,"emd1.TeamId,matches.Id",15,100,
14,DERIVED,battingdetails,,ref,MatchId,MatchId,4,matches.Id,31,100,Using index; FirstMatch(battingdetails)
7,DERIVED,t,,ALL,PRIMARY,,,,3323,100,Using temporary; Using filesort
7,DERIVED,pt,,ref,TeamId,TeamId,4,t.Id,65,100,
2,DERIVED,matches,,ALL,PRIMARY,,,,114162,10,Using where; Using temporary
2,DERIVED,m,,eq_ref,PRIMARY,PRIMARY,4,matches.Id,1,100,
2,DERIVED,emd1,,ref,"PRIMARY,TeamId",PRIMARY,4,matches.Id,1,100,Using index
2,DERIVED,emd2,,eq_ref,"PRIMARY,TeamId",PRIMARY,8,"matches.Id,emd1.TeamId",1,100,Using index
2,DERIVED,battingdetails,,ref,"TeamId,MatchId,match_team",match_team,8,"emd1.TeamId,matches.Id",15,100,
2,DERIVED,battingdetails,,ref,MatchId,MatchId,4,matches.Id,31,100,Using index; FirstMatch(battingdetails)

Pointers as to ways to improve my SQL are always welcome (I'm definitely not a database person), but I'd still like to understand whether I can use the SQL with the variables from code and why that improves the performance by so much.
Update 2, 1st Jan
AAArrrggghhh.
My machine rebooted overnight and now the queries are generally running much quicker. It's still 1 sec vs 3 secs, but the 20x slowdown does seem to have disappeared.
In your WITH construct, are you overthinking your select in ( select in ( select ))? It could be simplified to the innings CTE I have in my solution. Also, you were joining to extramatchdetails TWICE on the same match and team conditions, but never utilized either of those tables in the CTE, rendering that component useless, doesn't it? However, the matches table has hometeamid and awayteamid, which is what I THINK your actual intent was. Also, your CTE is pulling many columns not needed or used subsequently, such as captain and wicketkeeper. So, I have restructured: pre-query the batting details once up front, summarized, then join off that. Hopefully this MIGHT be a better fit in function and performance for your needs.

with innings as
( select bd.matchId, bd.matchtype, bd.playerid, m.locationId,
         count(case when bd.inningsnumber = 1 then 1 end) matches,
         count(case when bd.dismissaltype not in (11, 14) then 1 end) innings,
         SUM(bd.score) runs, SUM(bd.notout) notouts, SUM(bd.hundred) Hundreds,
         SUM(bd.fifty) Fifties, SUM(bd.duck) Ducks, SUM(bd.fours) Fours,
         SUM(bd.sixes) Sixes, SUM(bd.balls) Balls
  from battingdetails bd
  join matches m on m.id = bd.matchid
  where bd.matchtype = #match_type
  group by bd.matchId, bd.matchtype, bd.playerid, m.locationId )
select p.fullname playerFullName,
       p.sortnamepart,
       CONCAT(g.CountryName, ' - ', g.KnownAs) Ground,
       t.team,
       i.matches, i.innings, i.runs, i.notouts, i.hundreds, i.fifties,
       i.ducks, i.fours, i.sixes, i.balls,
       CAST(TRUNCATE(i.runs / (CAST((i.Innings - i.notOuts) AS DECIMAL)), 2) AS DECIMAL(7, 2)) 'Avg',
       hs.maxScore,
       hs.maxNotOut,
       '' opponents,
       '' Year,
       '' CountryName
from innings i
JOIN players p ON i.playerid = p.id
join grounds g on i.locationId = g.GroundId and i.matchType = g.MatchType
join (select pt.playerid, t.matchtype, GROUP_CONCAT(t.name SEPARATOR ', ') team
      from playersteams pt
      join teams t on pt.teamid = t.id
      group by pt.playerid, t.matchtype) as t
  on i.playerid = t.playerid and i.MatchType = t.matchtype
join (select i2.playerid, i2.locationid,
             max(i2.score) maxScore, max(i2.notOut) maxNotOut
      from innings i2
      group by i2.playerid, i2.LocationId) HS
  on i.playerid = HS.playerid AND i.locationid = HS.locationid
where i.runs >= #runs_limit
order by i.runs desc, g.KnownAs, p.SortNamePart
limit 0, 300;

Now, I know that you stated that after the server reboot performance is better, but what you DO have really does appear to be an overbloated query.
Not sure this is the correct answer, but I thought I'd post it in case other people have the same issue. The issue seems to be the use of CTEs in a stored procedure. I have a query that creates a CTE and then uses that CTE 8 times. If I run this query using interpolated variables it takes about 0.8 sec; if I turn it into a stored procedure and use the stored procedure parameters, it takes about a minute (between 45 and 63 seconds) to run! I've found a couple of ways of fixing this. One is to use multiple temporary tables (8 in this case), as MySQL cannot re-use a temp table in a query. This gets the query time right down but just doesn't feel like a maintainable or scalable solution. The other fix is to leave the variables in place and assign them from the stored procedure parameters; this also has no real performance issues. So my sproc looks like this:

create procedure bowling_individual_career_records_by_year_for_team_vs_opponent
    (IN team_id INT, IN opponents_id INT)
begin
    set #team_id = team_id;
    set #opponents_id = opponents_id;
    # use these variables in the SQL below
    ...
end

Not sure this is the best solution but it works for me and keeps the structure of the SQL the same as it was previously.
Selecting rows until a column value isn't the same
SELECT product.productID, product.Name, product.date, product.status
FROM product
INNER JOIN shelf ON product.shelfID = shelf.shelfID
WHERE product.weekID = $ID
  AND product.date < '$day'
   OR (product.date = '$day' AND shelf.expire <= '$time')
ORDER BY concat(product.date, shelf.expire)

I am trying to stop the SQL statement at a specific value, e.g. "bad". I have tried using max-date, but am finding it hard as I am building the timestamp in the query (combining date and time). This example table shows that 3 results should be returned, and if the status "bad" were the first result then no results should be returned (they are ordered by date and time):

ProductID  Date        Status
1          2017-03-27  Good
2          2017-03-27  Good
3          2017-03-26  Good
4          2017-03-25  Bad
5          2017-03-25  Good

Think I may have fixed it. I added this to my while loop. The query gives the results ordered from present to past by date and time; this while loop checks whether the status column of the row equals 'bad', and if it does, it does something (might be able to use an array to fill it up with data); if not, the loop is broken. I know it doesn't seem ideal but it works lol

while ($row = mysqli_fetch_assoc($result)) {
    if ($row['status'] == "bad") {
        $counter += 1;
    } else {
        break;
    }
}
I will provide an answer based just on your output, as if it were a single table; it should give you the main idea of how to solve your problem. Basically I created a column called ord that works as a row number (MySQL doesn't support ROW_NUMBER() yet, AFAIK). Then I get the minimum ord value for a 'Bad' status, and select everything from the data where ord is less than that.

select y.*
from (select ProductID, dt, status, @rw:=@rw+1 ord
      from product, (select @rw:=0) a
      order by dt desc) y
where y.ord < (select min(ord) ord
               from (select ProductID, status, @rin:=@rin+1 ord
                     from product, (select @rin:=0) a
                     order by dt desc) x
               where status = 'Bad');

The result will be:

ProductID  dt          status  ord
1          2017-03-27  Good    1
2          2017-03-27  Good    2
3          2017-03-26  Good    3

I also tested the use case where the Bad status is the first result; no results are returned. See it working here: http://sqlfiddle.com/#!9/28dda/1
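On MySQL 8+ (or any engine with window functions) the same idea needs no user variables. Here is a sketch against SQLite via Python's sqlite3, using the question's sample rows; note that if no 'Bad' row exists at all, MIN(ord) is NULL and nothing is returned, the same behavior as the variable-based version.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE product (ProductID INT, dt TEXT, status TEXT);
INSERT INTO product VALUES
  (1,'2017-03-27','Good'),(2,'2017-03-27','Good'),(3,'2017-03-26','Good'),
  (4,'2017-03-25','Bad'),(5,'2017-03-25','Good');
""")
rows = con.execute("""
WITH numbered AS (
  -- enumerate rows from most recent to oldest
  SELECT ProductID, dt, status,
         ROW_NUMBER() OVER (ORDER BY dt DESC, ProductID) AS ord
  FROM product)
SELECT ProductID, dt, status
FROM numbered
WHERE ord < (SELECT MIN(ord) FROM numbered WHERE status = 'Bad')
ORDER BY ord
""").fetchall()
print(rows)  # the three 'Good' rows that precede the first 'Bad' row
```

Products 1, 2 and 3 come back; products 4 and 5 are cut off at the first 'Bad'.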
Mysql Query to retrieve Values using if statement
select IFNULL(sum(invoice0_.INV_AMT),0) as col_0_0_,
       IFNULL(sum(invoice1_.INV_AMT),0) as col_0_0_
from hrmanager.invoice invoice0_, hrmanager.invoice invoice1_
where invoice0_.FROM_LEDGER=1 OR invoice1_.TO_LEDGER=1
  and ( invoice0_.INV_DATE between '1900-12-20' and '2012-01-30' )
  and invoice0_.ACTIVE='Y'
  and invoice0_.COMP_ID=2
  and invoice1_.COMP_ID=2
  and invoice0_.INV_TYPE='CLIENT'
  and invoice1_.INV_TYPE='CLIENT';

Here I want the first column to be the sum of the amounts where FROM_LEDGER = 1, and the second column to be the sum of the amounts where TO_LEDGER = 1. Whether I use AND or OR in the condition, the query returns the same data in both columns. In the database, FROM_LEDGER = 1 should give 7000 and TO_LEDGER = 1 should give 0, but the query above returns the same value (0 or 7000) in both columns.
It looks like your query is a bit of a mess. You are querying the same table twice with no JOIN condition, which produces a Cartesian result. I THINK what you are looking for is that your invoice table has two columns, To_Ledger and From_Ledger, plus the Inv_Amt and some other fields for criteria. This should get you closer to your answer:

select sum( if( inv.From_Ledger = 1, inv.Inv_Amt, 0 )) as FromInvoiceAmounts,
       sum( if( inv.To_Ledger = 1, inv.Inv_Amt, 0 )) as ToInvoiceAmounts
from hrmanager.invoice inv
where 1 in ( inv.From_Ledger, inv.To_Ledger )
  AND inv.inv_date between '1900-12-20' and '2012-01-30'
  and inv.Active = 'Y'
  and inv.Comp_ID = 2
  and inv.Inv_Type = 'CLIENT'
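Here is a runnable sketch of the conditional-aggregation idea (SQLite via Python's sqlite3; SQLite has no IF() function, so CASE WHEN is used instead, and the sample rows are invented to match the asker's expected 7000/0 split):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE invoice (from_ledger INT, to_ledger INT, inv_amt REAL);
INSERT INTO invoice VALUES
  (1, 9, 3000.0),   -- from_ledger = 1
  (1, 9, 4000.0),   -- from_ledger = 1
  (5, 9, 500.0);    -- neither ledger is 1, filtered out
""")
row = con.execute("""
SELECT SUM(CASE WHEN from_ledger = 1 THEN inv_amt ELSE 0 END) AS from_amt,
       SUM(CASE WHEN to_ledger = 1 THEN inv_amt ELSE 0 END) AS to_amt
FROM invoice
WHERE 1 IN (from_ledger, to_ledger)
""").fetchone()
print(row)  # (7000.0, 0)
```

One scan over one copy of the table yields both sums; no self-join and no Cartesian product.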
Combine 2 SELECTs into one SELECT in my RAILS application
I have a table called ORDEREXECUTIONS that stores all orders that have been executed. It's a multi-currency application, hence the table has two columns, CURRENCY1_ID and CURRENCY2_ID. To get the total for all orders for a specific currency pair (e.g. EUR/USD) I need two lines:

v = Orderexecution.where("is_master=1 and currency1_id=? and currency2_id=? and created_at>=?",
                         c1, c2, Time.now() - 24.hours).sum("quantity").to_d
v += Orderexecution.where("is_master=1 and currency1_id=? and currency2_id=? and created_at>=?",
                          c2, c1, Time.now() - 24.hours).sum("unitprice*quantity").to_d

Note that my SUM() formula is different depending on the sequence of the currencies. For example, if I want the total ordered quantities for the currency pair USD/EUR (assuming the currency ID for USD is 1 and EUR is 2), it executes:

v = Orderexecution.where("is_master=1 and currency1_id=? and currency2_id=? and created_at>=?",
                         1, 2, Time.now() - 24.hours).sum("quantity").to_d
v += Orderexecution.where("is_master=1 and currency1_id=? and currency2_id=? and created_at>=?",
                          2, 1, Time.now() - 24.hours).sum("unitprice*quantity").to_d

How do I write this in RoR so that it triggers only one single SQL statement to MySQL?
I guess this would do:

v = Orderexecution.where("is_master=1 and ( (currency1_id, currency2_id) = (?,?) or (currency1_id, currency2_id) = (?,?) ) and created_at>=?",
                         c1, c2, c2, c1, Time.now() - 24.hours)
                  .sum("CASE WHEN currency1_id=? THEN quantity ELSE unitprice*quantity END", c1)
                  .to_d
So you could do:

SELECT SUM(IF(currency1_id = 1 and currency2_id = 2, quantity, 0)) as quantity,
       SUM(IF(currency2_id = 1 and currency1_id = 2, unitprice * quantity, 0)) as unitprice_quantity
from order_expressions
WHERE created_at > ? and (currency1_id = 1 or currency1_id = 2)

If you plug that into find_by_sql you should get one object back with two attributes, quantity and unitprice_quantity (they won't show up in the output of inspect in the console, but they should be there if you inspect the attributes hash or call the accessor methods directly).
But depending on your indexes that might actually be slower, because it might not be able to use indexes as efficiently. The seemingly redundant condition on currency1_id means that this version can use an index on [currency1_id, created_at]. Do benchmark before and after; sometimes two fast queries are better than one slow one!
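The single-query idea can be sketched like this (SQLite via Python's sqlite3; the table and sample rows are invented, and CASE WHEN stands in for MySQL's IF()). A direction-dependent formula inside one conditional aggregate replaces the two separate queries:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE orderexecutions (currency1_id INT, currency2_id INT,
                              quantity REAL, unitprice REAL);
INSERT INTO orderexecutions VALUES
  (1, 2, 100.0, 0.9),   -- USD/EUR direction: contributes quantity
  (2, 1, 50.0,  1.1),   -- EUR/USD direction: contributes unitprice * quantity
  (3, 1, 999.0, 2.0);   -- unrelated pair, ignored by the CASE
""")
(total,) = con.execute("""
SELECT SUM(CASE WHEN currency1_id = 1 AND currency2_id = 2
                  THEN quantity
                WHEN currency1_id = 2 AND currency2_id = 1
                  THEN unitprice * quantity
                ELSE 0 END)
FROM orderexecutions
""").fetchone()
print(total)  # close to 155.0 (100 + 50 * 1.1)
```

In the Rails version this CASE expression is what goes into the single .sum(...) call, with the pair filter in the WHERE clause.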