DBT- Nested_Table Merge - jinja2

I want to do a merge operation in dbt.
My target table is nested, so I wrote the code in two steps.
In step 1, I bring the data to the level of nesting I need for my target table.
In step 2, I merge it. But because I use EXCEPT in the SELECT step, I see 6 joins running behind the scenes.
Actually, there should only be one FULL JOIN from the first temp table and one MERGE. What can I use instead of EXCEPT?
Do you have a suggested solution?
Query:
{{ config(
    materialized='incremental',
    unique_key='customer_id'
) }}
WITH scd_target_array_tmp1 AS (
select ifnull(r.customer_id, l.customer_id) AS customer_id,
       ifnull(r.customerNO, l.customerNO) as customerNO,
       ARRAY_AGG(STRUCT(ifnull(l.name, r.name) as name,
                        ifnull(l.start_date, r.start_date) as start_date,
                        case when r.customer_id is null then coalesce(l.end_date, current_date()) end as end_date,
                        coalesce(r.is_current_record, '0') as is_current_record,
                        ifnull(r.table_key, l.table_key) as table_key)) AS authors
from(
select p.customer_id,
p.customerNO, b.name as name,
b.start_date as start_date,
b.end_date as end_date,
b.is_current_record,
b.table_key
from presales-sandbox-346209.jaffle_shop.scd_target_array p, unnest(authors)b
) L
full join(
select customer_id,
customerNO,Name,
current_date() as start_date,
cast(null as DATE) as end_date,
'1' as is_current_record,
to_hex(sha256(to_json_string(struct(customer_id,customerNO,Name)))) as table_key
from presales-sandbox-346209.stripe.scd_table
) R on l.customer_id = R.customer_id and l.table_key=r.table_key
GROUP BY ifnull(r.customer_id,l.customer_id), ifnull(R.customerNO,l.customerNO)
)
SELECT customer_id,
customerNO,
authors
FROM scd_target_array_tmp1 a
{% if is_incremental() %}
WHERE customer_id IN (SELECT customer_id
FROM scd_target_array_tmp1 a, UNNEST(authors) b
WHERE b.table_key in (select b.table_key from scd_target_array_tmp1, unnest(authors) b
except distinct
select b.table_key from presales-sandbox-346209.jaffle_shop.scd_target_array, unnest(authors) b
)
)
{% endif %}
I used EXCEPT in the SELECT step in the last part of the query. What else could I use instead?
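One alternative worth trying (a minimal sketch only, not a tested rewrite of the model above, assuming BigQuery semantics and the same CTE/table names) is to express the "new or changed table_key" set as an anti-join instead of EXCEPT DISTINCT, which typically shows up as a single join stage in the plan:
-- Hypothetical sketch: replace the EXCEPT DISTINCT subquery with a LEFT JOIN anti-join.
{% if is_incremental() %}
WHERE customer_id IN (
  SELECT a.customer_id
  FROM scd_target_array_tmp1 a
  CROSS JOIN UNNEST(a.authors) b
  LEFT JOIN (
    SELECT bt.table_key
    FROM `presales-sandbox-346209.jaffle_shop.scd_target_array` t
    CROSS JOIN UNNEST(t.authors) bt
  ) existing
    ON existing.table_key = b.table_key
  WHERE existing.table_key IS NULL  -- keep only keys not already present in the target
)
{% endif %}
A NOT EXISTS subquery over the unnested target keys is the other common option; whichever you pick, check the BigQuery execution plan to confirm the extra join stages are gone.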


How to transpose values in rows to columns in MySQL

This image shows what my raw table looks like:
Following are the conditions to get the transposed table from the image below:
Each row has a unique id
We only need columns for groups A,B,C in the group field and not others.
There could be single or multiple id for group A for the same app id, I need to get those rows for which date is minimum.
There could be single or multiple id for group B and C for the same app id, I need to get those rows for which date is maximum
The image below shows how my final table should look:
Each row has a unique id
We only need columns for groups A,B,C in the group field and not others.
add this to your query
WHERE `GROUP` IN ('A','B','C')
There could be single or multiple id for group A for the same app id, I need to get those rows for which date is minimum.
add somewhere after the SELECT:
MIN(date) OVER (PARTITION BY appid)
There could be single or multiple id for group B and C for the same app id, I need to get those rows for which date is maximum
change the added option on point 3 to:
CASE WHEN `group` IN ('B','C')
THEN MAX(date) OVER (PARTITION BY appid)
ELSE MIN(date) OVER (PARTITION BY appid)
END
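Putting those hints together, a rough sketch (untested; MySQL 8+ for window functions, and the table/column names are assumed) could look like the following. It partitions by both appid and `group` so the MIN/MAX is taken within each group, then keeps only the rows whose date matches:
SELECT *
FROM (
    SELECT id,
           appid,
           `group`,
           `date`,
           score1,
           score2,
           CASE WHEN `group` IN ('B','C')
                THEN MAX(`date`) OVER (PARTITION BY appid, `group`)
                ELSE MIN(`date`) OVER (PARTITION BY appid, `group`)
           END AS picked_date   -- earliest date for A, latest for B and C
    FROM raw_table
    WHERE `group` IN ('A','B','C')
) t
WHERE `date` = picked_date;
From there you would still need to pivot the surviving rows into one row per appid, as the other answers show.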
Maybe this helps you take a serious shot at solving this yourself (and learn from it) instead of asking for a solution and then copy/pasting...
BTW: naming fields with reserved words like GROUP and DATE is not a very smart thing to do. A better name for the GROUP column might be CategoryGroup (or whatever this group refers to).
I took a different approach to this. The SQL is longer but I think it's more auditable.
The main logic point is that I broke A and BC into 2 different subqueries, and used QUALIFY ROW_NUMBER() to choose the correct row, based on either ASC or DESC per your requirements.
I know you are using MySQL and this might not work there, since I don't have an instance to test on, but here is the SQL I got from building this logic in Rasgo; I tested it on Snowflake and it worked.
-- This splits the data into group A only
WITH CTE_A AS (
SELECT
*
FROM
{{ your_table }}
WHERE
my_group = 'A'
),
-- This splits the data into group B and C only
CTE_B AS (
SELECT
*
FROM
{{ your_table }}
WHERE
my_group IN('B', 'C')
),
-- Selecting from group A only, keeping the earliest row (ordered ASCENDING)
CTE_A_FIRST AS (
SELECT
*
FROM
CTE_A QUALIFY ROW_NUMBER() OVER (
PARTITION BY APP_ID,
MY_GROUP
ORDER BY
MY_DATE ASC
) = 1
),
-- Selecting from groups B and C only, keeping the latest row (ordered DESCENDING)
CTE_B_LAST AS (
SELECT
*
FROM
CTE_B QUALIFY ROW_NUMBER() OVER (
PARTITION BY APP_ID,
MY_GROUP
ORDER BY
MY_DATE DESC
) = 1
),
-- Here we just union A and BC back to one another
CTE_ABC AS (
SELECT
ID,
APP_ID,
MY_DATE,
MY_GROUP,
SCORE1,
SCORE2
FROM
CTE_B_LAST
UNION ALL
SELECT
ID,
APP_ID,
MY_DATE,
MY_GROUP,
SCORE1,
SCORE2
FROM
CTE_A_FIRST
),
-- We pivot the date horizontally so we get a date for A B C
-- the MIN does not matter, since at this point, we only have 1
CTE_PVT_DATE AS (
SELECT
APP_ID,
B,
C,
A
FROM
(
SELECT
APP_ID,
MY_DATE,
MY_GROUP
FROM
CTE_ABC
) PIVOT (
MIN (MY_DATE) FOR MY_GROUP IN ('B', 'C', 'A')
) as p (APP_ID, B, C, A)
),
-- We pivot SCORE1 horizontally so we get a SCORE1 for A B C
-- the MIN does not matter, since at this point, we only have 1
CTE_PVT_SCORE1 AS (
SELECT
APP_ID,
B,
C,
A
FROM
(
SELECT
APP_ID,
SCORE1,
MY_GROUP
FROM
CTE_ABC
) PIVOT (
MIN (SCORE1) FOR MY_GROUP IN ('B', 'C', 'A')
) as p (APP_ID, B, C, A)
),
-- We pivot SCORE2 horizontally so we get a SCORE2 for A B C
-- the MIN does not matter, since at this point, we only have 1
CTE_PVT_SCORE2 AS (
SELECT
APP_ID,
B,
C,
A
FROM
(
SELECT
APP_ID,
SCORE2,
MY_GROUP
FROM
CTE_ABC
) PIVOT (
MIN (SCORE2) FOR MY_GROUP IN ('B', 'C', 'A')
) as p (APP_ID, B, C, A)
),
-- We join the subqueries above together on the APP_IDs
CTE_JOINED AS (
SELECT
t0.*,
t1.APP_ID as SCORE1_APP_ID,
t1.B as SCORE1_B,
t1.C as SCORE1_C,
t1.A as SCORE1_A,
t2.APP_ID as SCORE2_APP_ID,
t2.B as SCORE2_B,
t2.C as SCORE2_C,
t2.A as SCORE2_A
FROM
CTE_PVT_DATE t0
INNER JOIN CTE_PVT_SCORE1 t1 ON t0.APP_ID = t1.APP_ID
INNER JOIN CTE_PVT_SCORE2 t2 ON t0.APP_ID = t2.APP_ID
)
-- The final select is really just renaming ...
-- the magic has already happened
SELECT
A AS DATE_A,
B AS DATE_B,
C AS DATE_C,
APP_ID,
SCORE1_B,
SCORE1_C,
SCORE1_A,
SCORE2_B,
SCORE2_C,
SCORE2_A
FROM
CTE_JOINED
I'll walk through my attempt in several steps and then show you the full solution made up of these steps, so you can understand it piece by piece. I assume the following definition of your input table:
CREATE TABLE tab(
    id INT,
    app_id INT,
    `date` VARCHAR(20),
    `group` VARCHAR(20),
    score1 INT,
    score2 INT
);
STEP 1. Formatting date using a proper DATE format ("YYYY-MM-DD"). For this purpose the function STR_TO_DATE can come in handy.
WITH formatted_tab AS (
SELECT id,
app_id,
STR_TO_DATE(date, '%m/%d/%Y') AS date,
group,
score1,
score2
FROM tab
)
STEP 2. Extracting the useful dates according to the group field. Since you treat group "A" differently from groups "B" and "C", the idea here is to address each case with a separate query, where
in the former case the MIN aggregation function is applied,
in the latter case the MAX aggregation function is applied.
Then the two result sets are combined with a UNION operation.
(
SELECT app_id,
MIN(date) AS date,
group
FROM formatted_tab
WHERE group IN ('A')
GROUP BY app_id,
group
UNION
SELECT app_id,
MAX(date) AS date,
group
FROM formatted_tab
WHERE group IN ('B', 'C')
GROUP BY app_id,
group
) needed_dates
STEP 3. Getting back the scores corresponding to the group and date fields. This is done with a simple INNER JOIN between the last generated table and the formatted table.
(
SELECT needed_dates.*,
formatted_tab.score1,
formatted_tab.score2
FROM needed_dates
INNER JOIN formatted_tab
ON needed_dates.app_id = formatted_tab.app_id
AND needed_dates.date = formatted_tab.date
AND needed_dates.group = formatted_tab.group
) needed_infos
STEP 4. Pivoting the table using MySQL tools such as:
the IF function, to retrieve the values corresponding to a specific group
the MAX aggregation function, to aggregate rows of the same group
These tools are applied for each group you specified ('A', 'B' and 'C').
SELECT app_id,
MAX(IF(group='A', date , NULL)) AS date_groupA,
MAX(IF(group='B', date , NULL)) AS date_groupB,
MAX(IF(group='C', date , NULL)) AS date_groupC,
MAX(IF(group='A', score1, NULL)) AS score1_groupA,
MAX(IF(group='A', score2, NULL)) AS score2_groupA,
MAX(IF(group='B', score1, NULL)) AS score1_groupB,
MAX(IF(group='B', score2, NULL)) AS score2_groupB,
MAX(IF(group='C', score1, NULL)) AS score1_groupC,
MAX(IF(group='C', score2, NULL)) AS score2_groupC
FROM needed_infos
GROUP BY app_id
Full attempt. This is the combination of the previous snippets. The only difference is the presence of backticks around the field names, which prevents MySQL from confusing them with reserved keywords such as "date" (the DATE type) or "group" (used in the GROUP BY clause).
WITH `formatted_tab` AS (
SELECT `id`,
`app_id`,
STR_TO_DATE(`date`, '%m/%d/%Y') AS `date`,
`group`,
`score1`,
`score2`
FROM `tab`
)
SELECT `app_id`,
MAX(IF(`group`='A', `date` , NULL)) AS date_groupA,
MAX(IF(`group`='B', `date` , NULL)) AS date_groupB,
MAX(IF(`group`='C', `date` , NULL)) AS date_groupC,
MAX(IF(`group`='A', `score1`, NULL)) AS score1_groupA,
MAX(IF(`group`='A', `score2`, NULL)) AS score2_groupA,
MAX(IF(`group`='B', `score1`, NULL)) AS score1_groupB,
MAX(IF(`group`='B', `score2`, NULL)) AS score2_groupB,
MAX(IF(`group`='C', `score1`, NULL)) AS score1_groupC,
MAX(IF(`group`='C', `score2`, NULL)) AS score2_groupC
FROM ( SELECT needed_dates.*,
formatted_tab.score1,
formatted_tab.score2
FROM ( SELECT `app_id`,
MIN(`date`) AS `date`,
`group`
FROM `formatted_tab`
WHERE `group` IN ('A')
GROUP BY `app_id`,
`group`
UNION
SELECT `app_id`,
MAX(`date`) AS `date`,
`group`
FROM `formatted_tab`
WHERE `group` IN ('B', 'C')
GROUP BY `app_id`,
`group`
) needed_dates
INNER JOIN formatted_tab
ON needed_dates.app_id = formatted_tab.app_id
AND needed_dates.date = formatted_tab.date
AND needed_dates.group = formatted_tab.group
) needed_infos
GROUP BY `app_id`
You'll find a tested SQL Fiddle here.

Minimum number of Meeting Rooms required to Accommodate all Meetings in MySQL

I have the following columns in a table called meetings: meeting_id - int, start_time - time, end_time - time. Assuming that this table has data for one calendar day only, what is the minimum number of rooms I need to accommodate all the meetings? Room size/number of people attending the meetings don't matter.
Here's the solution:
select * from
(select t.start_time,
t.end_time,
count(*) - 1 overlapping_meetings,
count(*) minimum_rooms_required,
group_concat(distinct concat(y.start_time,' to ',t.end_time)
separator ' // ') meeting_details from
(select 1 meeting_id, '08:00' start_time, '09:15' end_time union all
select 2, '13:20', '15:20' union all
select 3, '10:00', '14:00' union all
select 4, '13:55', '16:25' union all
select 5, '14:00', '17:45' union all
select 6, '14:05', '17:45') t left join
(select 1 meeting_id, '08:00' start_time, '09:15' end_time union all
select 2, '13:20', '15:20' union all
select 3, '10:00', '14:00' union all
select 4, '13:55', '16:25' union all
select 5, '14:00', '17:45' union all
select 6, '14:05', '17:45') y
on t.start_time between y.start_time and y.end_time
group by start_time, end_time) z;
My question - is there anything wrong with this answer? Even if there's nothing wrong with this, can someone share a better answer?
Let's say you have a table called 'meeting' like this -
Then you can use this query to get the minimum number of meeting rooms required to accommodate all meetings.
select max(minimum_rooms_required)
from (select count(*) minimum_rooms_required
from meetings t
left join meetings y on t.start_time >= y.start_time and t.start_time < y.end_time group by t.id
) z;
This looks clearer and simpler, and works fine.
Meetings can "overlap". So, GROUP BY start_time, end_time can't figure this out.
Not every algorithm can be done in SQL. Or, at least, it may be grossly inefficient.
I would use a real programming language for the computation, leaving the database for what it is good at -- being a data repository.
Build an array of 1440 entries (minutes in a day); initialize to 0.
Foreach meeting:
Foreach minute in the meeting (excluding last minute):
increment element in array.
Find the largest element in the array -- the number of rooms needed.
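If you did want to stay in SQL, the same minute-counting idea could be sketched with a recursive CTE (MySQL 8+, untested; assumes TIME columns and that the recursion limit is raised, since the default cte_max_recursion_depth of 1000 is below 1440):
-- Hypothetical sketch of the minute-array idea in SQL rather than application code.
-- SET SESSION cte_max_recursion_depth = 2000;
WITH RECURSIVE minute_of_day AS (
    SELECT 0 AS m
    UNION ALL
    SELECT m + 1 FROM minute_of_day WHERE m < 1439
)
SELECT MAX(rooms_in_use) AS min_rooms_needed
FROM (
    SELECT d.m, COUNT(mt.meeting_id) AS rooms_in_use
    FROM minute_of_day d
    LEFT JOIN meetings mt
           ON SEC_TO_TIME(d.m * 60) >= mt.start_time
          AND SEC_TO_TIME(d.m * 60) <  mt.end_time   -- last minute excluded, as above
    GROUP BY d.m
) per_minute;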
CREATE TABLE [dbo].[Meetings](
    [id] [int] NOT NULL,
    [Starttime] [time](7) NOT NULL,
    [EndTime] [time](7) NOT NULL
) ON [PRIMARY]
GO
sample data set:
INSERT INTO Meetings VALUES (1,'8:00','09:00')
INSERT INTO Meetings VALUES (2,'8:00','10:00')
INSERT INTO Meetings VALUES (3,'10:00','11:00')
INSERT INTO Meetings VALUES (4,'11:00','12:00')
INSERT INTO Meetings VALUES (5,'11:00','13:00')
INSERT INTO Meetings VALUES (6,'13:00','14:00')
INSERT INTO Meetings VALUES (7,'13:00','15:00')
To Find Minimum number of rooms required run the below query:
create table #TempMeeting
(
    id int, Starttime time, EndTime time, MeetingRoomNo int, Rownumber int
)
insert into #TempMeeting
select id, Starttime, EndTime, 0 as MeetingRoomNo,
       ROW_NUMBER() over (order by Starttime asc) as Rownumber
from Meetings
declare @RowCounter int
select top 1 @RowCounter = Rownumber from #TempMeeting order by Rownumber
WHILE @RowCounter <= (select count(*) from #TempMeeting)
BEGIN
    update #TempMeeting set MeetingRoomNo = 1
    where Rownumber = (select top 1 Rownumber from #TempMeeting
                       where Rownumber > @RowCounter
                         and Starttime >= (select top 1 EndTime from #TempMeeting
                                           where Rownumber = @RowCounter)
                         and MeetingRoomNo = 0)
    set @RowCounter = @RowCounter + 1
END
select count(*) from #TempMeeting where MeetingRoomNo = 0
Consider a table meetings with columns id, start_time and end_time. Then the following query should give correct answer.
with mod_meetings as (select id, to_timestamp(start_time, 'HH24:MI')::TIME as start_time,
to_timestamp(end_time, 'HH24:MI')::TIME as end_time from meetings)
select CASE when max(a_cnt)>1 then max(a_cnt)+1
when max(a_cnt)=1 and max(b_cnt)=1 then 2 else 1 end as rooms
from
(select count(*) as a_cnt, a.id, count(b.id) as b_cnt from mod_meetings a left join mod_meetings b
on a.start_time>b.start_time and a.start_time<b.end_time group by a.id) join_table;
Sample DATA:
DROP TABLE IF EXISTS meeting;
CREATE TABLE "meeting" (
"meeting_id" INTEGER NOT NULL UNIQUE,
"start_time" TEXT NOT NULL,
"end_time" TEXT NOT NULL,
PRIMARY KEY("meeting_id")
);
INSERT INTO meeting values (1,'08:00','14:00');
INSERT INTO meeting values (2,'09:00','10:30');
INSERT INTO meeting values (3,'11:00','12:00');
INSERT INTO meeting values (4,'12:00','13:00');
INSERT INTO meeting values (5,'10:15','11:00');
INSERT INTO meeting values (6,'12:00','13:00');
INSERT INTO meeting values (7,'10:00','10:30');
INSERT INTO meeting values (8,'11:00','13:00');
INSERT INTO meeting values (9,'11:00','14:00');
INSERT INTO meeting values (10,'12:00','14:00');
INSERT INTO meeting values (11,'10:00','14:00');
INSERT INTO meeting values (12,'12:00','14:00');
INSERT INTO meeting values (13,'10:00','14:00');
INSERT INTO meeting values (14,'13:00','14:00');
Solution:
DROP VIEW IF EXISTS Final;
CREATE VIEW Final AS SELECT time, group_concat(event), sum(num) num from (
select start_time time, 's' event, 1 num from meeting
union all
select end_time time, 'e' event, -1 num from meeting)
group by 1
order by 1;
select max(room) AS Min_Rooms_Required FROM (
select
a.time,
sum(b.num) as room
from
Final a
, Final b
where a.time >= b.time
group by a.time
order by a.time
);
Here's an explanation of gashu's nicely working code (or, put another way, a non-code explanation of how to solve this in any language).
Firstly, if the column 'minimum_rooms_required' were renamed to 'overlap', the whole thing would be much easier to understand, because for each of the start/end times we want to know the number of overlapping, ongoing meetings. Once we have found the maximum, there is no way to get by with fewer rooms than that overlapping amount, because, well, they overlap.
By the way, I think there might be a mistake in the code. It should check for t.start_time or t.end_time between y.start_time and y.end_time. Counterexample: meeting 1 starts at 8:00, ends at 11:00 and meeting 2 starts at 10:00, ends at 12:00.
(I'd post this as a comment on gashu's answer but I don't have enough reputation.)
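Spelled out as a full query against the same meetings table (an untested sketch of the suggested change only), the check being proposed would be:
-- Count, for each meeting t, the meetings y whose interval contains
-- either t's start or t's end, as proposed in the comment above.
select t.start_time,
       t.end_time,
       count(*) - 1 as overlapping_meetings   -- minus 1 to exclude the meeting itself
from meetings t
left join meetings y
       on t.start_time between y.start_time and y.end_time
       or t.end_time   between y.start_time and y.end_time
group by t.start_time, t.end_time;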
I'd go for the LEAD() analytic function:
select
sum(needs_room_ind) as min_rooms
from (
select
id,
start_time,
end_time,
case when lead(start_time,1) over (order by start_time asc) between start_time
and end_time then 1 else 0 end as needs_room_ind
from
meetings
) a
IMO, I want to take the difference between how many meetings have started and how many have ended at the time each meeting_id starts (assuming meetings start and end on time).
My code was like this:
with alpha as
(
select a.meeting_id,a.start_time,
count(distinct b.meeting_id) ttl_meeting_start_before,
count(distinct c.meeting_id) ttl_meeting_end_before
from meeting a
left join
(
select meeting_id,start_time from meeting
) b
on a.start_time > b.start_time
left join
(
select meeting_id,end_time from meeting
) c
on a.start_time > c.end_time
group by a.meeting_id,a.start_time
)
select max(ttl_meeting_start_before-ttl_meeting_end_before) max_meeting_room
from alpha

Get a query to list the records that are on and in between the start and the end values of a particular column for the same Id

There is a table with the columns :
USE 'table';
insert into person values
('11','xxx','1976-05-10','p1'),
('11','xxx ','1976-06-11','p1'),
('11','xxx ','1976-07-21','p2'),
('11','xxx ','1976-08-31','p2');
Can anyone suggest a query to get the start and end date for each place the person was at, in the chronological order in which he changed places?
The query I wrote
SELECT PId,Name,min(Start_Date) as sdt, max(Start_Date) as edt, place
from **
group by Place;
only gives me the first two rows of my expected answer. Can anyone suggest the right query?
This isn't pretty, and performance might be horrible, but at least it works:
select min(sdt), edt, place
from (
select A.Start_Date sdt, max(B.Start_Date) edt, A.place
from person A
inner join person B on A.place = B.place
and A.Start_Date <= B.Start_Date
left join person C on A.place != C.place
and A.Start_Date < C.Start_Date
and C.Start_Date < B.Start_Date
where C.place is null
group by A.Start_Date, A.place
) X
group by edt, place
The idea is that A and B represent all pairs of rows. C will be any row in between these two which has a different place. So after the C.place is null restriction, we know that A and B belong to the same range, i.e. a group of rows for one place with no other place in between them in chronological order. From all these pairs, we want to identify those with maximal range, those which encompass all others. We do so using two nested group by queries. The inner one will choose the maximal end date for every possible start date, whereas the outer one will choose the minimal start date for every possible end date. The result are maximal ranges of chronologically subsequent rows describing the same place.
This can be achieved by:
SELECT Id, PId,
MIN(Start_Date) AS sdt,
MAX(Start_Date) as edt,
IF(`place` <> @var_place_prev, (@var_rank := @var_rank + 1), @var_rank) AS rank,
(@var_place_prev := `place`) AS `place`
FROM person, (SELECT @var_rank := 0, @var_place_prev := "") dummy
GROUP BY rank, Place;
Example: SQLFiddle
If you want records to be ordered by ID then:
SELECT Id, PId,
MIN(Start_Date) AS sdt,
MAX(Start_Date) as edt,
`place`
FROM(
SELECT Id, PId,
Start_Date,
IF(`place` <> @var_place_prev, (@var_rank := @var_rank + 1), @var_rank) AS rank,
(@var_place_prev := `place`) AS `place`
FROM person, (SELECT @var_rank := 0, @var_place_prev := "") dummy
ORDER BY ID ASC
) a
GROUP BY rank, Place;

Balancing out MYSQL select statements

I inserted 'vanity_name' and 'name' into the first and second SELECT statements respectively.
I get a mismatched number of columns error, which I'm confused about because I added a column to both select statements to maintain a balance.
SQL Statement:
SELECT id,
vanity_name,
Date_format(DATE, '%M %e, %Y') AS DATE,
TYPE
FROM (SELECT resume_id AS id,
date_mod AS DATE,
'resume' AS TYPE
FROM resumes
WHERE user_id = '1'
UNION ALL
SELECT profile_id,
name,
date_mod AS DATE,
'profile'
FROM profiles
WHERE user_id = '1'
ORDER BY DATE DESC
LIMIT
5) AS d1
ORDER BY DATE DESC
Erm, you have four columns in your outer select, three in the inner select.
id, vanity_name, date, type
vs.
id, date, TYPE
Based on the parentheses, you're trying to union:
(SELECT resume_id AS id, date_mod AS date, 'resume' AS TYPE FROM resumes WHERE user_id = '1'
with
SELECT profile_id,name,date_mod AS date, 'profile' FROM profiles ... LIMIT 5)
and they obviously don't match. Reposition your parens.
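In other words, both branches of the UNION need the same number of columns. Assuming resumes really does have a vanity_name column (an untested sketch), balancing the first branch would look like:
SELECT id,
       vanity_name,
       DATE_FORMAT(date, '%M %e, %Y') AS date,
       type
FROM (SELECT resume_id AS id,
             vanity_name,          -- this column was missing, causing the mismatch
             date_mod AS date,
             'resume' AS type
      FROM resumes
      WHERE user_id = '1'
      UNION ALL
      SELECT profile_id,
             name,
             date_mod,
             'profile'
      FROM profiles
      WHERE user_id = '1'
      ORDER BY date DESC
      LIMIT 5) AS d1
ORDER BY date DESC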

mysql self join

I have a table called receiving with 4 columns:
id, date, volume, volume_units
The volume units are always stored as a value of either "Lbs" or "Gals".
I am trying to write an SQL query to get the sum of the volumes in Lbs and Gals for a specific date range. Something along the lines of the following (which doesn't work):
SELECT sum(p1.volume) as lbs,
p1.volume_units,
sum(p2.volume) as gals,
p2.volume_units
FROM receiving as p1, receiving as p2
where p1.volume_units = 'Lbs'
and p2.volume_units = 'Gals'
and p1.date between "2012-01-01" and "2012-03-07"
and p2.date between "2012-01-01" and "2012-03-07"
When I run these queries separately the results are way off. I know the join is wrong here, but I don't know what I am doing wrong to fix it.
SELECT SUM(volume) AS total_sum,
volume_units
FROM receiving
WHERE `date` BETWEEN '2012-01-01'
AND '2012-03-07'
GROUP BY volume_units
You can achieve this in one query by using IF(condition,then,else) within the SUM:
SELECT SUM(IF(volume_units="Lbs",volume,0)) as lbs,
SUM(IF(volume_units="Gals",volume,0)) as gals,
FROM receiving
WHERE `date` between "2012-01-01" and "2012-03-07"
This only adds volume if it is of the right unit.
This query will display the totals for each ID.
SELECT s.`id`,
CONCAT(s.TotalLbsVolume, ' ', 'lbs') as TotalLBS,
CONCAT(s.TotalGalVolume, ' ', 'gals') as TotalGAL
FROM
(
SELECT a.`id`, SUM(a.`volume`) as TotalLbsVolume, b.TotalGalVolume
FROM Receiving a INNER JOIN
(
SELECT `id`, SUM(`volume`) as TotalGalVolume
FROM Receiving
WHERE (volume_units = 'Gals') AND
(`date` between '2012-01-01' and '2012-03-07')
GROUP BY `id`
) b ON a.`id` = b.`id`
WHERE (a.volume_units = 'Lbs') AND
(a.`date` between '2012-01-01' and '2012-03-07')
GROUP BY a.`id`, b.TotalGalVolume
) s
This is a cross join with no visible condition on the join; I don't think you meant that.
If you want to sum quantities you don't need to join at all, just group as zerkms did.
You can simply group by date and volume_units without self-join.
SELECT date, volume_units, sum(volume) sum_vol
FROM receiving
WHERE date between "2012-01-01" and "2012-03-07"
GROUP BY date, volume_units
Sample test:
select d, vol_units, sum(vol) sum_vol
from
(
select 1 id, '2012-03-07' d, 1 vol, 'lbs' vol_units
union
select 2 id, '2012-03-07' d, 2 vol, 'Gals' vol_units
union
select 3 id, '2012-03-08' d, 1 vol, 'lbs' vol_units
union
select 4 id, '2012-03-08' d, 2 vol, 'Gals' vol_units
union
select 5 id, '2012-03-07' d, 10 vol, 'lbs' vol_units
) t
group by d, vol_units