MySQL: Finding Aggregate Between Two Other Values

I'm working with a pretty nasty table schema which unfortunately I can't change as it's defined by our SCADA program. There's one analog float value (power usage), and one digital int value (machine setting). I need to be able to find the Min, Max, and Avg of the power usage for each machine setting.
So basically each time a new machine setting (intvalue) is recorded, I need the aggregate power usage (floatvalue) until the next machine setting. I'd like to be able to group by intvalue as well, so I could get these aggregate numbers for a whole month, for example.
So far, I've tried playing around with joins and nested queries, but I can't get anything to work. I can't really find any examples like this either, since it's such a poor table design.
Table schema found here: http://www.sqlfiddle.com/#!9/29164/1
Data:
| tagid | intvalue | floatvalue | t_stamp              |
|-------|----------|------------|----------------------|
| 2     | 9        | (null)     | 2019-07-01T00:01:58Z |
| 1     | (null)   | 120.2      | 2019-07-01T00:02:00Z |
| 1     | (null)   | 120.1      | 2019-07-01T00:02:31Z |
| 2     | 11       | (null)     | 2019-07-01T00:07:58Z |
| 1     | (null)   | 155.9      | 2019-07-01T00:08:00Z |
| 1     | (null)   | 175.5      | 2019-07-01T00:10:12Z |
| 1     | (null)   | 185.5      | 2019-07-01T00:10:58Z |
| 2     | 2        | (null)     | 2019-07-01T00:11:22Z |
| 1     | (null)   | 10.1       | 2019-07-01T00:11:22Z |
| 1     | (null)   | 12         | 2019-07-01T00:13:58Z |
| 1     | (null)   | 9.9        | 2019-07-01T00:14:21Z |
| 2     | 9        | (null)     | 2019-07-01T00:15:38Z |
| 1     | (null)   | 120.9      | 2019-07-01T00:15:39Z |
| 1     | (null)   | 119.2      | 2019-07-01T00:16:22Z |
Desired output:
| intvalue | min   | avg   | max   |
|----------|-------|-------|-------|
| 2        | 9.9   | 10.7  | 12    |
| 9        | 119.2 | 120.1 | 120.9 |
| 11       | 155.9 | 172.3 | 185.5 |
Is this possible?

You can fill the missing intvalues with a subquery in the SELECT clause:
select t.*,
       (select t1.intvalue
        from sqlt_data_1_2019_07 t1
        where t1.t_stamp <= t.t_stamp
          and t1.intvalue is not null
        order by t1.t_stamp desc
        limit 1) as group_int
from sqlt_data_1_2019_07 t
order by t.t_stamp;
The result will be
| tagid | intvalue | floatvalue | t_stamp             | group_int |
| ----- | -------- | ---------- | ------------------- | --------- |
| 2     | 9        |            | 2019-07-01 00:01:58 | 9         |
| 1     |          | 120.2      | 2019-07-01 00:02:00 | 9         |
| 1     |          | 120.1      | 2019-07-01 00:02:31 | 9         |
| 2     | 11       |            | 2019-07-01 00:07:58 | 11        |
| 1     |          | 155.9      | 2019-07-01 00:08:00 | 11        |
| 1     |          | 175.5      | 2019-07-01 00:10:12 | 11        |
| 1     |          | 185.5      | 2019-07-01 00:10:58 | 11        |
| 2     | 2        |            | 2019-07-01 00:11:22 | 2         |
| 1     |          | 10.1       | 2019-07-01 00:11:22 | 2         |
| 1     |          | 12         | 2019-07-01 00:13:58 | 2         |
| 1     |          | 9.9        | 2019-07-01 00:14:21 | 2         |
| 2     | 9        |            | 2019-07-01 00:15:38 | 9         |
| 1     |          | 120.9      | 2019-07-01 00:15:39 | 9         |
| 1     |          | 119.2      | 2019-07-01 00:16:22 | 9         |
Now you can simply group by the result of the subquery:
select (select t1.intvalue
        from sqlt_data_1_2019_07 t1
        where t1.t_stamp <= t.t_stamp
          and t1.intvalue is not null
        order by t1.t_stamp desc
        limit 1) as group_int,
       min(floatvalue) as min,
       avg(floatvalue) as avg,
       max(floatvalue) as max
from sqlt_data_1_2019_07 t
group by group_int
order by group_int;
And you get:
| group_int | min   | avg                | max   |
| --------- | ----- | ------------------ | ----- |
| 2         | 9.9   | 10.666666666666666 | 12    |
| 9         | 119.2 | 120.10000000000001 | 120.9 |
| 11        | 155.9 | 172.29999999999998 | 185.5 |
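If you're on MySQL 8.0 or later, window functions can replace the correlated subquery. The following is an untested sketch against the same table; it assumes that when a setting row and a power reading share a timestamp, the setting row (tagid 2) should sort first, which mirrors the <= comparison above:
with flagged as (
    select t.*,
           -- running count of non-null settings: it increments on every setting row,
           -- so every reading that follows gets the same group number
           count(intvalue) over (order by t_stamp, tagid desc) as grp
    from sqlt_data_1_2019_07 t
),
keyed as (
    select f.*,
           max(intvalue) over (partition by grp) as group_int
    from flagged f
)
select group_int,
       min(floatvalue) as min,
       avg(floatvalue) as avg,
       max(floatvalue) as max
from keyed
group by group_int
order by group_int;
On a large SCADA history table this usually scans the data once instead of running the correlated subquery for every row.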

Related

SQL: For each ID, only display the highest value from another column (can't group by)

I have this table here :
+---------+----------+------------+------------+
| idStep | idProj | dateStart | dateEnd |
+---------+----------+------------+------------+
| 1 | 1 | 2011-07-01 | 2011-09-01 |
| 1 | 2 | 2012-05-01 | 2012-05-10 |
| 1 | 3 | 2011-11-01 | 2012-01-20 |
| 2 | 1 | 2011-09-02 | 2011-11-30 |
| 2 | 2 | 2012-05-11 | 2012-06-01 |
| 2 | 3 | 2012-01-21 | 2012-04-01 |
| 3 | 1 | 2011-12-01 | 2012-07-07 |
| 3 | 2 | 2012-06-02 | 2012-07-01 |
| 3 | 3 | 2012-04-02 | NULL |
| 4 | 1 | 2012-07-08 | NULL |
| 4 | 2 | 2012-07-01 | 2012-07-21 |
| 5 | 2 | 2012-07-22 | 2012-07-23 |
+---------+----------+------------+------------+
I need to find the current step of each project by searching for the highest idStep of each idProj without using GROUP BY, which is where I'm completely stuck.
Basically, the output should be this :
+---------+----------+------------+------------+
| idStep | idProj | dateStart | dateEnd |
+---------+----------+------------+------------+
| 3 | 3 | 2012-04-02 | NULL |
| 4 | 1 | 2012-07-08 | NULL |
| 5 | 2 | 2012-07-22 | 2012-07-23 |
+---------+----------+------------+------------+
I want to use a query built like this:
SELECT idProj,idStep
FROM table
WHERE idStep = (SELECT max(idStep) FOR EACH idProj)
I know that FOR EACH isn't valid SQL; I'm only trying to make my desired query structure readable.
You want a correlated subquery:
SELECT idProj, idStep
FROM table t
WHERE t.idStep = (SELECT max(idStep)
                  FROM table t2
                  WHERE t2.idProj = t.idProj);
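If you also need the date columns, just add them to the select list. An alternative that likewise avoids GROUP BY in the outer query is an anti-join; a sketch, assuming the table is called project_steps (swap in your real table name):
SELECT t.idStep, t.idProj, t.dateStart, t.dateEnd
FROM project_steps t
LEFT JOIN project_steps later
       ON later.idProj = t.idProj
      AND later.idStep > t.idStep
WHERE later.idProj IS NULL;  -- keep rows for which no higher idStep exists in the same project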

Is there a mySQL procedure that can merge duplicate rows of data into one, then allow me to manipulate that data as if it were one row?

I'm trying to come up with a stored procedure that takes multiple rows that are exactly identical, and combines them into one row while summing one column, which can then be run through more stored procedures based on the sum of that one column.
I've tried GROUP BY, but that doesn't permanently combine the rows: if I run the table through another procedure, it still acts on the original rows, and a SELECT * FROM mytable query shows that each row was not actually combined into one.
Is there any way to permanently combine multiple rows into one singular row?
To start, I've got a table like this:
+-------+-----+--------+---------+------+-----+-----------+
| RowID | pID | Name | Date | Code | QTY | Purchased |
+-------+-----+--------+---------+------+-----+-----------+
| 1 | 1 | bob | 9/29/20 | 123 | 1 | |
| 2 | 1 | bob | 8/10/20 | 456 | 1 | |
| 3 | 2 | rob | 9/15/20 | 123 | 1 | |
| 4 | 2 | rob | 9/15/20 | 123 | 1 | |
| 5 | 2 | rob | 9/15/20 | 123 | 1 | |
| 6 | 2 | rob | 9/15/20 | 123 | 1 | |
| 7 | 2 | rob | 9/15/20 | 123 | 1 | |
| 8 | 3 | john | 7/12/20 | 987 | 1 | |
| 9 | 3 | john | 7/12/20 | 987 | 1 | |
| 10 | 4 | george | 9/12/20 | 684 | 1 | |
| 11 | 5 | paul | 2/2/20 | 454 | 1 | |
| 12 | 6 | amy | 1/12/20 | 252 | 1 | |
| 13 | 7 | susan | 5/30/20 | 131 | 1 | |
| 14 | 7 | susan | 6/6/20 | 252 | 1 | |
| 15 | 7 | susan | 5/30/20 | 131 | 1 | |
+-------+-----+--------+---------+------+-----+-----------+
By the end, I'd like to have a table like this:
+-------+-----+--------+---------+------+-----+-----------+
| RowID | pID | Name | Date | Code | QTY | Purchased |
+-------+-----+--------+---------+------+-----+-----------+
| 1 | 1 | bob | 9/29/20 | 123 | 1 | |
| 2 | 1 | bob | 8/10/20 | 456 | 1 | |
| 3 | 2 | rob | 9/15/20 | 123 | 5 | |
| 4 | 3 | john | 7/12/20 | 987 | 2 | |
| 5 | 4 | george | 9/12/20 | 684 | 1 | |
| 6 | 5 | paul | 2/2/20 | 454 | 1 | |
| 7 | 6 | amy | 1/12/20 | 252 | 1 | |
| 8 | 7 | susan | 5/30/20 | 131 | 2 | |
| 9 | 7 | susan | 6/6/20 | 252 | 1 | |
+-------+-----+--------+---------+------+-----+-----------+
Exactly identical rows are combined into one row and the QTY field is summed, giving a total quantity that I can then add purchases to or make deductions from. GROUP BY statements can produce this result, but when I go to alter the quantity or add purchases for each person, it treats it like the first table, as if nothing was actually grouped.
So you have the table shown above.
The best way, as has been suggested, is to create a new table from the result of your query, rename the old table out of the way, rename the new table to the original table's name, check that everything is all right, and drop the original table if it is.
CREATE TABLE indata_new AS
WITH grp AS (
    SELECT
        MIN(rowid) AS orowid
      , pid
      , name
      , MAX(date) AS date
      , code
      , SUM(qty) AS qty
    FROM indata
    GROUP BY
        pid
      , name
      , code
)
SELECT
    ROW_NUMBER() OVER (ORDER BY orowid ASC) AS rowid
  , grp.*
FROM grp;
ALTER TABLE indata RENAME TO indata_old;
ALTER TABLE indata_new RENAME TO indata;
-- if "indata" now contains the data you want ...
SELECT * FROM indata;
-- out rowid | orowid | pid | name | date | code | qty
-- out -------+--------+-----+--------+------------+------+-----
-- out 1 | 1 | 1 | bob | 2020-09-29 | 123 | 1
-- out 2 | 2 | 1 | bob | 2020-08-10 | 456 | 1
-- out 3 | 3 | 2 | rob | 2020-09-15 | 123 | 5
-- out 4 | 8 | 3 | john | 2020-07-12 | 987 | 2
-- out 5 | 10 | 4 | george | 2020-09-12 | 684 | 1
-- out 6 | 11 | 5 | paul | 2020-02-02 | 454 | 1
-- out 7 | 12 | 6 | amy | 2020-01-12 | 252 | 1
-- out 8 | 13 | 7 | susan | 2020-05-30 | 131 | 2
-- out 9 | 14 | 7 | susan | 2020-06-06 | 252 | 1
-- you can ...
DROP TABLE indata_old;
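If you would rather keep the original table and combine the rows in place, here is a sketch under the assumption that the table really is called indata with exactly the columns shown (RowID, pID, Name, Date, Code, QTY, Purchased). It rolls the summed QTY into the lowest RowID of each identical group and then deletes the leftover rows; note that RowID values keep their gaps instead of being renumbered, and you should test it on a copy first.
-- Step 1: give the surviving row (lowest RowID per identical group) the summed quantity.
UPDATE indata i
JOIN (
    SELECT MIN(RowID) AS keep_id, SUM(QTY) AS total_qty
    FROM indata
    GROUP BY pID, Name, Date, Code
) g ON g.keep_id = i.RowID
SET i.QTY = g.total_qty;

-- Step 2: delete every row of each group except the surviving one.
DELETE i
FROM indata i
JOIN (
    SELECT MIN(RowID) AS keep_id, pID, Name, Date, Code
    FROM indata
    GROUP BY pID, Name, Date, Code
) g ON g.pID = i.pID
   AND g.Name = i.Name
   AND g.Date = i.Date
   AND g.Code = i.Code
WHERE i.RowID <> g.keep_id;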

Grouping, yet concatenating values from within the group in MySQL

I have a simple MySQL table as such:
| CUST_ID | VISIT | PROD_ID |
|---------|-------|---------|
| 1       | 1     | 3473    |
| 1       | 2     | 324     |
| 1       | 2     | 324     |
| 2       | 1     | 426     |
| 2       | 2     | 4418    |
| 3       | 1     | 4523    |
| 4       | 1     | 976     |
| 4       | 1     | 86      |
| 4       | 2     | 3140    |
| 4       | 3     | 1013    |
And I would like to transform it to this:
| CUST_ID | VISIT | PROD_IDs |
|---------|-------|----------|
| 1       | 1     | 3473     |
| 1       | 2     | 324, 324 |
| 2       | 1     | 426      |
| 2       | 2     | 4418     |
| 3       | 1     | 4523     |
| 4       | 1     | 976, 86  |
| 4       | 2     | 3140     |
| 4       | 3     | 1013     |
This is kind of an ugly hack, I get it.
I have no idea how to cleanly create such a thing. I've tried a variety of unsuccessful grouping strategies. Even a clue or hint in the right direction would be great. Thanks.
If you're trying to group by CUST_ID and VISIT, then you can do that and use GROUP_CONCAT() on the PROD_ID field, for example:
SELECT
    CUST_ID,
    VISIT,
    GROUP_CONCAT(PROD_ID) PROD_IDS
FROM
    table
GROUP BY
    CUST_ID,
    VISIT
Reference: GROUP_CONCAT()
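GROUP_CONCAT() also accepts ORDER BY and SEPARATOR (and DISTINCT) if you need more control over the list. For example, to match the comma-plus-space separator in your desired output (the table name here is a placeholder):
SELECT
    CUST_ID,
    VISIT,
    GROUP_CONCAT(PROD_ID ORDER BY PROD_ID SEPARATOR ', ') PROD_IDS
FROM
    your_table
GROUP BY
    CUST_ID,
    VISIT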

Creating a log having the date of purchase

I need to create a log having the purchase date of an item.
Items can be owned by only one buyer at a time. So, for example, if item1 was purchased by buyer2 in 2009 and then by buyer1 in 2015, it was owned by buyer2 between 2009 and 2015.
Here is my table:
+--------+------------+-----------+----------+
| id_doc | date | id_item | id_buyer |
+--------+------------+-----------+----------+
| 11 | 2016-06-07 | 1 | 4 |
| 10 | 2016-06-06 | 1 | 4 |
| 1 | 2015-11-30 | 1 | 1 |
| 9 | 2009-01-01 | 1 | 2 |
| 4 | 2001-01-12 | 1 | 2 |
| 8 | 1996-06-06 | 1 | 2 |
| 3 | 1995-05-29 | 1 | 1 |
| 2 | 1998-05-23 | 2 | 2 |
| 7 | 2014-10-10 | 3 | 2 |
| 6 | 2003-12-12 | 3 | 3 |
| 5 | 1991-01-12 | 3 | 2 |
+--------+------------+-----------+----------+
Here is a kind of table/view I need:
+------------+------------+-----------+----------+--------+
| date_from | date_to | id_item | id_buyer | id_doc |
+------------+------------+-----------+----------+--------+
| 2016-06-07 | - | 1 | 4 | 11 |
| 2016-06-06 | 2016-06-07 | 1 | 4 | 10 |
| 2015-11-30 | 2016-06-06 | 1 | 1 | 1 |
| 2009-01-01 | 2015-11-30 | 1 | 2 | 9 |
| 2001-01-12 | 2009-01-01 | 1 | 2 | 4 |
| 1996-06-06 | 2001-01-12 | 1 | 2 | 8 |
| 1995-05-29 | 1996-06-06 | 1 | 1 | 3 |
| 1998-05-23 | - | 2 | 2 | 2 |
| 2014-10-10 | - | 3 | 2 | 7 |
| 2003-12-12 | 2014-10-10 | 3 | 3 | 6 |
| 1991-01-12 | 2003-12-12 | 3 | 2 | 5 |
+------------+------------+-----------+----------+--------+
I've tried a lot with GROUP BY, GROUP_CONCAT, trying to access the next record's date, etc., but I can't figure out how to solve the problem.
Thanks in advance.
I finally worked out a solution, but it only covers past purchases.
SELECT
    main.id_doc, main.id_item, main.date AS "date_from", bi.date AS "date_to", main.id_buyer
FROM
    MyTable main, MyTable bi
WHERE
    bi.id_doc = (
        SELECT sub.id_doc
        FROM MyTable sub
        WHERE sub.id_item = main.id_item AND sub.date > main.date
        ORDER BY sub.date ASC
        LIMIT 1
    );
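If your server is MySQL 8.0 or later, LEAD() also covers the current owners: it returns NULL when there is no later purchase of the same item, which corresponds to the '-' rows in the desired output. A sketch against the table above:
SELECT
    date AS date_from,
    LEAD(date) OVER (PARTITION BY id_item ORDER BY date) AS date_to,  -- next purchase date of the same item, or NULL
    id_item,
    id_buyer,
    id_doc
FROM MyTable
ORDER BY id_item, date_from DESC;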

MySQL SUM() returning spurious (?) values

Despite spending an hour on this, the solution is still eluding me. I have a complex-ish query that is returning incorrect data for the SUM(). Yet, when I strip it down to the barest form, it outputs the correct data. I can't figure out why, or how to fix it.
The Problem
SELECT po.*, SUM(poo.material_qty) AS total_items_ordered, suppliers.supplier_name
FROM `purchase_orders` po
LEFT JOIN purchase_orders_items poo ON poo.poid = po.poid
LEFT JOIN suppliers ON suppliers.supplier_id = po.supplier_id
LEFT JOIN materials_batch mb ON mb.purchase_order_no = po.poid
WHERE po_status NOT IN ('Fulfilled', 'Cancelled')
  AND batch_status NOT IN ('Arrived', 'Cancelled', 'Refused', 'Missing', 'Damaged', 'Completed')
GROUP BY po.poid
ORDER BY date_expected ASC
Provides wildly incorrect data for 'total_items_ordered'.
+-------+---------------------+---------------------+-------------+--------+-------------+--------+-----------+---------+----------+--------+----+--------+-----------+---------------------+-----------------------+
| poid | date_raised | date_expected | supplier_id | job_id | job_item_id | ref_no | sub_total | VAT | total | userid | DN | manual | po_status | total_items_ordered | supplier_name |
+-------+---------------------+---------------------+-------------+--------+-------------+--------+-----------+---------+----------+--------+----+--------+-----------+---------------------+-----------------------+
| 15571 | 2014-06-24 13:32:55 | 2014-06-25 00:00:00 | 1 | 0 | 0 | | 14850.10 | 2970.02 | 17820.12 | 1 | | N | Raised | 545 | John Parker & Son Ltd |
| 15572 | 2014-06-24 13:33:26 | 2014-06-25 00:00:00 | 1 | 0 | 0 | | 997.80 | 199.56 | 1197.36 | 1 | | N | Raised | 80 | John Parker & Son Ltd |
+-------+---------------------+---------------------+-------------+--------+-------------+--------+-----------+---------+----------+--------+----+--------+-----------+---------------------+-----------------------+
2 rows in set (0.00 sec)
And yet, when I strip all the complexities out of the query and run the raw SUM(), the value is correct:
mysql> SELECT poid, SUM(material_qty) AS total_items_ordered FROM `purchase_orders_items` GROUP BY poid;
+-------+---------------------+
| poid | total_items_ordered |
+-------+---------------------+
| 15571 | 109 |
| 15572 | 20 |
+-------+---------------------+
2 rows in set (0.00 sec)
Can anyone shed any light on where I'm going wrong here? I've included all the test table content below just in case you can spot something I've missed. Thank you!
Data Example
mysql> SELECT * FROM purchase_orders;
+-------+---------------------+---------------------+-------------+--------+-------------+--------+-----------+---------+----------+--------+----+--------+-----------+
| poid | date_raised | date_expected | supplier_id | job_id | job_item_id | ref_no | sub_total | VAT | total | userid | DN | manual | po_status |
+-------+---------------------+---------------------+-------------+--------+-------------+--------+-----------+---------+----------+--------+----+--------+-----------+
| 15571 | 2014-06-24 13:32:55 | 2014-06-25 00:00:00 | 1 | 0 | 0 | | 14850.10 | 2970.02 | 17820.12 | 1 | | N | Raised |
| 15572 | 2014-06-24 13:33:26 | 2014-06-25 00:00:00 | 1 | 0 | 0 | | 997.80 | 199.56 | 1197.36 | 1 | | N | Raised |
+-------+---------------------+---------------------+-------------+--------+-------------+--------+-----------+---------+----------+--------+----+--------+-----------+
2 rows in set (0.00 sec)
mysql> SELECT * FROM purchase_orders_items;
+--------+-------+-------------+--------------+----------------+--------------+--------------------------------------------------+
| poi_id | poid | material_id | material_qty | material_price | material_sku | material_name |
+--------+-------+-------------+--------------+----------------+--------------+--------------------------------------------------+
| 1 | 15571 | 2 | 3 | 100.00 | PKS275282 | 406x140 White Universal Beam (S355) |
| 2 | 15571 | 5 | 10 | 17.40 | 118-64-44 | Test Item (S275) |
| 3 | 15571 | 8 | 1 | 9984.50 | 113-64-21 | A really really really big universal beam (S355) |
| 4 | 15571 | 9 | 77 | 10.00 | 12345 | A thing |
| 5 | 15571 | 10 | 18 | 201.20 | 12-34-56 | 102x230 Narrow Beam (S355) |
| 6 | 15572 | 2 | 6 | 100.00 | PKS275282 | 406x140 White Universal Beam (S355) |
| 7 | 15572 | 5 | 9 | 17.40 | 118-64-44 | Test Item (S275) |
| 8 | 15572 | 9 | 4 | 10.00 | 12345 | A thing |
| 9 | 15572 | 10 | 1 | 201.20 | 12-34-56 | 102x230 Narrow Beam (S355) |
+--------+-------+-------------+--------------+----------------+--------------+--------------------------------------------------+
9 rows in set (0.00 sec)
mysql> SELECT * FROM suppliers;
+-------------+-----------------------+--------------------+--------------+---------------------+-------------------+-----------------------+--------------------------+---------------------+----------------------+
| supplier_id | supplier_name | supplier_telephone | supplier_fax | supplier_added_date | supplier_added_by | supplier_last_updated | supplier_last_updated_by | supplier_assessed | supplier_approved_by |
+-------------+-----------------------+--------------------+--------------+---------------------+-------------------+-----------------------+--------------------------+---------------------+----------------------+
| 1 | John Parker & Son Ltd | 01227 783333 | 0800 521932 | 2014-05-04 15:57:43 | 1 | 2014-06-05 16:38:23 | 1 | 2014-05-04 15:57:43 | 2 |
| 2 | Superior Glass Ltd. | 01825 764766 | 01825 767699 | 2014-05-04 17:48:38 | 1 | 2014-06-04 20:14:16 | 1 | 2014-05-04 17:48:38 | 3 |
| 3 | DTS Origins Ltd. | 01283 3283029 | 01928 303494 | 2014-05-04 17:51:57 | 1 | 2014-05-04 17:53:08 | 1 | 2014-05-04 17:51:57 | 2 |
+-------------+-----------------------+--------------------+--------------+---------------------+-------------------+-----------------------+--------------------------+---------------------+----------------------+
3 rows in set (0.00 sec)
mysql> SELECT * FROM materials_batch;
+-------------------+-------+---------------------+-------------------+------------------+-----+---------+------------+-------------+-------------+--------------+
| material_batch_id | poiid | rcvd_date | purchase_order_no | delivery_note_no | qty | rcvd_by | dn_scanned | material_id | supplier_id | batch_status |
+-------------------+-------+---------------------+-------------------+------------------+-----+---------+------------+-------------+-------------+--------------+
| 1 | 1 | 0000-00-00 00:00:00 | 15571 | | 3 | 0 | No | 2 | 1 | Ordered |
| 2 | 2 | 0000-00-00 00:00:00 | 15571 | | 10 | 0 | No | 5 | 1 | Ordered |
| 3 | 3 | 0000-00-00 00:00:00 | 15571 | | 1 | 0 | No | 8 | 1 | Ordered |
| 4 | 4 | 0000-00-00 00:00:00 | 15571 | | 77 | 0 | No | 9 | 1 | Ordered |
| 5 | 5 | 0000-00-00 00:00:00 | 15571 | | 18 | 0 | No | 10 | 1 | Ordered |
| 6 | 6 | 0000-00-00 00:00:00 | 15572 | | 6 | 0 | No | 2 | 1 | Ordered |
| 7 | 7 | 0000-00-00 00:00:00 | 15572 | | 9 | 0 | No | 5 | 1 | Ordered |
| 8 | 8 | 0000-00-00 00:00:00 | 15572 | | 4 | 0 | No | 9 | 1 | Ordered |
| 9 | 9 | 0000-00-00 00:00:00 | 15572 | | 1 | 0 | No | 10 | 1 | Ordered |
+-------------------+-------+---------------------+-------------------+------------------+-----+---------+------------+-------------+-------------+--------------+
The reason for the wrong results becomes clear when you leave out the GROUP BY from your query: for each table you JOIN, every row is repeated once per matching row in the joined table.
As the materials_batch table contains multiple entries per order, total_items_ordered is multiplied by 5 for order number 15571 (5 × 109 = 545) and by 4 for order number 15572 (4 × 20 = 80).
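You can see the fan-out directly by dropping the aggregation; this diagnostic query uses only the tables you posted:
-- Each purchase_orders_items row shows up once per matching materials_batch
-- row of the same order, which is exactly what inflates the SUM().
SELECT po.poid, poo.poi_id, poo.material_qty, mb.material_batch_id
FROM `purchase_orders` po
LEFT JOIN purchase_orders_items poo ON poo.poid = po.poid
LEFT JOIN materials_batch mb ON mb.purchase_order_no = po.poid
ORDER BY po.poid, poo.poi_id, mb.material_batch_id;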
Try the following:
SELECT
    po.*,
    (
        SELECT SUM(poo.material_qty)
        FROM purchase_orders_items poo
        WHERE poo.poid = po.poid
    ) AS total_items_ordered,
    suppliers.supplier_name
FROM `purchase_orders` po
LEFT JOIN suppliers ON suppliers.supplier_id = po.supplier_id
LEFT JOIN materials_batch mb ON mb.purchase_order_no = po.poid
WHERE po_status NOT IN ('Fulfilled', 'Cancelled')
  AND batch_status NOT IN ('Arrived', 'Cancelled', 'Refused', 'Missing', 'Damaged', 'Completed')
GROUP BY po.poid
ORDER BY date_expected ASC
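An equivalent alternative, if you prefer to keep everything as joins, is to pre-aggregate the items in a derived table so each order contributes one total before the other joins can multiply rows. A sketch; like your original query, it relies on po.poid being the primary key so the remaining selected columns are single-valued per group:
SELECT po.*, item_totals.total_items_ordered, suppliers.supplier_name
FROM `purchase_orders` po
LEFT JOIN (
    -- one row per order, summed before any other join touches it
    SELECT poid, SUM(material_qty) AS total_items_ordered
    FROM purchase_orders_items
    GROUP BY poid
) item_totals ON item_totals.poid = po.poid
LEFT JOIN suppliers ON suppliers.supplier_id = po.supplier_id
LEFT JOIN materials_batch mb ON mb.purchase_order_no = po.poid
WHERE po_status NOT IN ('Fulfilled', 'Cancelled')
  AND batch_status NOT IN ('Arrived', 'Cancelled', 'Refused', 'Missing', 'Damaged', 'Completed')
GROUP BY po.poid
ORDER BY date_expected ASC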