I need to sum two columns, grouped by id, into a single field of another table that has the same id.
For example, I want the Balance, which is income - expenses.
Table Transactions:
+----+----------+----------+--------+
| id | idBudget | expenses | income |
+----+----------+----------+--------+
|  1 |        2 |       10 |      0 |
|  2 |        3 |      200 |      0 |
|  3 |        2 |        1 |    100 |
|  4 |        2 |        0 |   1000 |
+----+----------+----------+--------+
Table Budget:
+----------+---------+
| idBudget | Balance |
+----------+---------+
|        2 |    1090 |
|        3 |    -200 |
+----------+---------+
I tried to use triggers, but I don't think I know how to implement them.
CREATE TABLE starting_balance AS (SELECT * FROM budget);
DROP TABLE budget;
CREATE VIEW budget AS
SELECT
    sb.idBudget,
    sb.balance + COALESCE(SUM(t.income - t.expenses), 0) AS balance
FROM starting_balance sb
LEFT JOIN transactions t ON t.idBudget = sb.idBudget
GROUP BY
    sb.idBudget,
    sb.balance;
In other words, use a view instead of a table. You can update the starting balance from time to time (and either delete or flag the transactions you no longer need), and in a database that supports them you could even use a materialized view.
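A quick way to check the view against the sample data above, assuming both tables are populated as shown:
SELECT idBudget, balance
FROM budget
ORDER BY idBudget;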
Something like this?
INSERT INTO Budget (idBudget, Balance)
SELECT idBudget, SUM(income) - SUM(expenses) AS Balance
FROM Transactions
GROUP BY idBudget;
delimiter //
CREATE TRIGGER balance AFTER INSERT ON Transactions
FOR EACH ROW
BEGIN
    -- Requires a PRIMARY KEY or UNIQUE index on Budget.idBudget
    INSERT INTO Budget (idBudget, Balance)
    SELECT NEW.idBudget, SUM(income) - SUM(expenses)
    FROM Transactions
    WHERE idBudget = NEW.idBudget
    ON DUPLICATE KEY UPDATE Balance = VALUES(Balance);
END//
delimiter ;
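A quick way to exercise the trigger (a sketch, assuming Transactions.id is auto-incremented):
INSERT INTO Transactions (idBudget, expenses, income) VALUES (2, 5, 0);
-- The trigger recomputes the balance for idBudget 2:
SELECT * FROM Budget WHERE idBudget = 2;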
After searching a lot on this forum and the web, I have an issue that I cannot solve without your help.
The requirement looks simple, but the code is not :-(
Basically I need to make a report on cumulative sales by product by week.
I have a table with the calendar (including all the weeks) and a view which gives me all the cumulative values by product, sorted by week. What I need the query to do is give me all the weeks for each product and then add, in a column, the cumulative value from the view. If this value does not exist for a week, it should give me the last known value.
Can you help?
Thanks,
The principle is to establish all the weeks in which a product could have had sales, sum the amounts grouping by week, add the missing weeks, and use SUM as a window function to get a cumulative sum.
DROP TABLE IF EXISTS T;
CREATE TABLE T
(PROD INT, DT DATE, AMOUNT INT);
INSERT INTO T VALUES
(1,'2022-01-01', 10),(1,'2022-01-01', 10),(1,'2022-01-20', 10),
(2,'2022-01-10', 10);
WITH CTE AS
(SELECT MIN(YEARWEEK(DT)) MINYW, MAX(YEARWEEK(DT)) MAXYW FROM T),
CTE1 AS
(SELECT DISTINCT YEARWEEK(DTE) YW ,PROD
FROM DATES
JOIN CTE ON YEARWEEK(DTE) BETWEEN MINYW AND MAXYW
CROSS JOIN (SELECT DISTINCT PROD FROM T) C
)
SELECT CTE1.YW,CTE1.PROD
,SUMAMT,
SUM(SUMAMT) OVER(PARTITION BY CTE1.PROD ORDER BY CTE1.YW) CUMSUM
FROM CTE1
LEFT JOIN
(SELECT YEARWEEK(DT) YW,PROD ,SUM(AMOUNT) SUMAMT
FROM T
GROUP BY YEARWEEK(DT),PROD
) S ON S.PROD = CTE1.PROD AND S.YW = CTE1.YW
ORDER BY CTE1.PROD,CTE1.YW
;
+--------+------+--------+--------+
| YW | PROD | SUMAMT | CUMSUM |
+--------+------+--------+--------+
| 202152 | 1 | 20 | 20 |
| 202201 | 1 | NULL | 20 |
| 202202 | 1 | NULL | 20 |
| 202203 | 1 | 10 | 30 |
| 202152 | 2 | NULL | NULL |
| 202201 | 2 | NULL | NULL |
| 202202 | 2 | 10 | 10 |
| 202203 | 2 | NULL | 10 |
+--------+------+--------+--------+
8 rows in set (0.021 sec)
Your calendar table may be slightly different from mine, but you should get the general idea.
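For completeness: the query above assumes a calendar table DATES with a date column DTE. A minimal sketch of one way to build it in MySQL 8+ (the date range here is only an assumption wide enough to cover the sample data):
CREATE TABLE DATES (DTE DATE PRIMARY KEY);

INSERT INTO DATES (DTE)
WITH RECURSIVE d AS (
    SELECT DATE '2021-12-01' AS DTE
    UNION ALL
    SELECT DTE + INTERVAL 1 DAY FROM d WHERE DTE < '2022-02-28'
)
SELECT DTE FROM d;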
Hi, I want to reduce my table by updating it in place (grouping and summing some columns, and deleting rows).
Source table "table_test":
+----+-----+-------+----------------+
| id | qty | user | isNeedGrouping |
+----+-----+-------+----------------+
| 1 | 2 | userA | 1 | <- row to group + user A
| 2 | 3 | userB | 0 |
| 3 | 5 | userA | 0 |
| 4 | 30 | userA | 1 | <- row to group + user A
| 5 | 8 | userA | 1 | <- row to group + user A
| 6 | 6 | userA | 0 |
+----+-----+-------+----------------+
Wanted table (obtained by):
DROP TABLE table_test_grouped;
SET @increment = 2;
CREATE TABLE table_test_grouped
SELECT id, SUM(qty) AS qty, user, isNeedGrouping
FROM table_test
GROUP BY user, IF(isNeedGrouping = 1, isNeedGrouping, @increment := @increment + 1);
SELECT * FROM table_test_grouped;
+----+------+-------+----------------+
| id | qty | user | isNeedGrouping |
+----+------+-------+----------------+
| 1 | 40 | userA | 1 | <- rows grouped + user A
| 3 | 5 | userA | 0 |
| 6 | 6 | userA | 0 |
| 2 | 3 | userB | 0 |
+----+------+-------+----------------+
Problem: I can use another (temporary) table, but I want to update the initial table, in order to:
group by user and sum qty
replace/merge each group's rows into a single row
The result must be a reduced version of the initial table, grouped by user, with qty summed.
This is a minimal example; I don't want to fully replace the initial table from table_test_grouped, because in my case I have another column (isNeedGrouping) that decides whether to group or not.
Only rows flagged with isNeedGrouping need to be grouped.
For this example, one way to do it is sequentially:
CREATE TABLE table_test_grouped SELECT id, SUM(qty) AS qty, user, isNeedGrouping FROM table_test WHERE isNeedGrouping = 1 GROUP BY user ;
DELETE FROM table_test WHERE isNeedGrouping = 1 ;
INSERT INTO table_test SELECT * FROM table_test_grouped ;
Any suggestion for a simpler way?
It is probably simpler to empty and refill the table than to try to update/delete in place. Also, the aggregation logic can be simplified to avoid the use of a user variable.
You could write this as:
create table table_test_tmp as
select min(id) id, sum(qty) qty, user, isneedgrouping
from table_test tt
group by tt.user, case when tt.isneedgrouping = 0 then tt.id end;
truncate table table_test; -- back it up first!
insert into table_test (id, qty, user, isneedgrouping)
select id, qty, user, isneedgrouping
from table_test_tmp;
drop table table_test_tmp;
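A quick sanity check after the refill, using the sample data from the question:
SELECT id, qty, user, isneedgrouping
FROM table_test
ORDER BY user, id;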
I have read a lot of answers here, but I couldn't adapt them to my needs.
I have this table below where I would like to update the BALANCE column:
balance = old.balance + new.amount
+----+----------------+---------+------------+-------------+---------------------+------------------+--------+----------+---------+
| ID | TRANSACTION_ID | BANK_ID | ACCOUNT_ID | CUSTOMER_ID | CREATED | DESCRIPTION | AMOUNT | CURRENCY | BALANCE |
+----+----------------+---------+------------+-------------+---------------------+------------------+--------+----------+---------+
| 1 | T1 | 2 | 2 | 1 | 2018-04-22 00:00:00 | TRANSACTION TEST | 100.00 | GBP | NULL |
| 2 | T2 | 2 | 2 | 1 | 2018-04-22 00:00:00 | TRANSACTION TEST | 125.00 | GBP | NULL |
| 3 | T3 | 2 | 2 | 1 | 2018-04-22 00:00:00 | TRANSACTION TEST | -73.00 | GBP | NULL |
+----+----------------+---------+------------+-------------+---------------------+------------------+--------+----------+---------+
The result I would like is shown below.
I got it by executing:
SET @balance := 0;
UPDATE TRANSACTIONS SET BALANCE = (@balance := @balance + AMOUNT) WHERE ID > 0;
Is there no way to fire the statement above automatically after a new row is inserted?
+----+----------------+---------+------------+-------------+---------------------+------------------+--------+----------+---------+
| ID | TRANSACTION_ID | BANK_ID | ACCOUNT_ID | CUSTOMER_ID | CREATED | DESCRIPTION | AMOUNT | CURRENCY | BALANCE |
+----+----------------+---------+------------+-------------+---------------------+------------------+--------+----------+---------+
| 1 | T1 | 2 | 2 | 1 | 2018-04-22 00:00:00 | TRANSACTION TEST | 100.00 | GBP | 100.00 |
| 2 | T2 | 2 | 2 | 1 | 2018-04-22 00:00:00 | TRANSACTION TEST | 125.00 | GBP | 225.00 |
| 3 | T3 | 2 | 2 | 1 | 2018-04-22 00:00:00 | TRANSACTION TEST | -73.00 | GBP | 152.00 |
+----+----------------+---------+------------+-------------+---------------------+------------------+--------+----------+---------+
I tried using a trigger:
DELIMITER $$
CREATE TRIGGER updateBalance AFTER INSERT ON TRANSACTIONS
FOR EACH ROW
BEGIN
SET NEW.BALANCE = BALANCE + NEW.AMOUNT;
END $$
DELIMITER ;
And I got the error:
Error Code: 1362. Updating of NEW row is not allowed in after trigger
I am new to SQL and MySQL, and I believe this is a common task for advanced users.
Values that can be calculated from other (materialized) values shouldn't be materialized themselves, as this can lead to inconsistencies.
Remove the column altogether.
ALTER TABLE transactions
DROP balance;
And create a view instead:
CREATE VIEW transactions_with_balance
AS
SELECT t1.*,
(SELECT sum(t2.amount)
FROM transactions t2
WHERE t2.bank_id = t1.bank_id
AND t2.account_id = t1.account_id
AND t2.id <= t1.id) balance
FROM transactions t1;
db<>fiddle
If you're using MySQL version 8 or higher, you can also replace the subquery with the windowed version of SUM():
CREATE VIEW transactions_with_balance
AS
SELECT t1.*,
sum(amount) OVER (PARTITION BY t1.bank_id,
t1.account_id
ORDER BY t1.id) balance
FROM transactions t1;
db<>fiddle
The column customer_id also seems misplaced in this table: presumably there is an account table where the customer an account belongs to is stored as a foreign key to the customer table, so you can get the customer via the account_id.
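A rough sketch of that layout, with hypothetical customer and account tables (names and columns here are illustrative, not taken from the original schema):
CREATE TABLE customer (
    id INT PRIMARY KEY
);

CREATE TABLE account (
    id          INT PRIMARY KEY,
    bank_id     INT NOT NULL,
    customer_id INT NOT NULL,
    FOREIGN KEY (customer_id) REFERENCES customer (id)
);

-- transactions then only needs account_id; the customer is reachable with a join:
-- SELECT t.*, a.customer_id
-- FROM transactions t
-- JOIN account a ON a.id = t.account_id;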
You can do this with a BEFORE INSERT trigger, summing all the transaction amounts for the given CUSTOMER_ID and adding the new AMOUNT value to get the balance:
DELIMITER $$
CREATE TRIGGER updateBalance BEFORE INSERT ON transactions
FOR EACH ROW
BEGIN
SET NEW.BALANCE = NEW.AMOUNT +
COALESCE((SELECT SUM(AMOUNT)
FROM transactions
WHERE CUSTOMER_ID = NEW.CUSTOMER_ID), 0);
END $$
DELIMITER ;
Demo on dbfiddle
Note you may want to further qualify the sum with
AND BANK_ID = NEW.BANK_ID
and/or
AND ACCOUNT_ID = NEW.ACCOUNT_ID
as necessary to control exactly which previous transactions are included in the sum.
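For example, a per-account variant of the same trigger (a sketch, assuming the balance should be tracked per customer, bank and account):
DELIMITER $$
CREATE TRIGGER updateBalancePerAccount BEFORE INSERT ON transactions
FOR EACH ROW
BEGIN
    SET NEW.BALANCE = NEW.AMOUNT +
        COALESCE((SELECT SUM(AMOUNT)
                  FROM transactions
                  WHERE CUSTOMER_ID = NEW.CUSTOMER_ID
                    AND BANK_ID     = NEW.BANK_ID
                    AND ACCOUNT_ID  = NEW.ACCOUNT_ID), 0);
END $$
DELIMITER ;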
What I'm trying to do is bucket my customers based on their transaction frequency. I have the date recorded for every time they transact, but I can't work out how to get the average delta between each date. What I effectively want is a table showing me:
| User | Average Frequency
| 1 | 15
| 2 | 15
| 3 | 35
...
The data I currently have is formatted like this:
| User | Transaction Date
| 1 | 2018-01-01
| 1 | 2018-01-15
| 1 | 2018-02-01
| 2 | 2018-06-01
| 2 | 2018-06-18
| 2 | 2018-07-01
| 3 | 2019-01-01
| 3 | 2019-02-05
...
So basically, each customer will have multiple transactions, and I want to understand how to get the delta between each pair of consecutive dates and then the average of those deltas.
I know the DATEDIFF function and how it works, but I can't work out how to split the transactions up. I also know that an offset function is available in tools like Looker, but I don't know the syntax behind it.
Thanks
In MySQL 8+ you can use LAG to get a delayed Transaction Date and then use DATEDIFF to get the difference between two consecutive dates. You can then take the average of those values:
SELECT User, AVG(delta) AS `Average Frequency`
FROM (SELECT User,
DATEDIFF(`Transaction Date`, LAG(`Transaction Date`) OVER (PARTITION BY User ORDER BY `Transaction Date`)) AS delta
FROM transactions) t
GROUP BY User
Output:
User Average Frequency
1 15.5
2 15
3 35
Demo on dbfiddle.com
DROP TABLE IF EXISTS my_table;
CREATE TABLE my_table
(user INT NOT NULL
,transaction_date DATE
,PRIMARY KEY(user,transaction_date)
);
INSERT INTO my_table VALUES
(1,'2018-01-01'),
(1,'2018-01-15'),
(1,'2018-02-01'),
(2,'2018-06-01'),
(2,'2018-06-18'),
(2,'2018-07-01'),
(3,'2019-01-01'),
(3,'2019-02-05');
SELECT user
, AVG(delta) avg_delta
FROM
( SELECT x.*
, DATEDIFF(x.transaction_date,MAX(y.transaction_date)) delta
FROM my_table x
JOIN my_table y
ON y.user = x.user
AND y.transaction_date < x.transaction_date
GROUP
BY x.user
, x.transaction_date
) a
GROUP
BY user;
+------+-----------+
| user | avg_delta |
+------+-----------+
| 1 | 15.5000 |
| 2 | 15.0000 |
| 3 | 35.0000 |
+------+-----------+
I don't know what to say other than use a GROUP BY.
-- Average gap = (last date - first date) / (number of gaps)
SELECT user,
       DATEDIFF(MAX(transaction_date), MIN(transaction_date)) / (COUNT(*) - 1) AS avg_delta
FROM my_table
GROUP BY user
Let's say I have a MySQL table whose structure is like this:
mysql> select * from things_with_stuff;
+----+---------+----------+
| id | counter | quantity |
+----+---------+----------+
| 1 | 101 | 1 |
| 2 | 102 | 2 |
| 3 | 103 | 3 |
+----+---------+----------+
My goal is to "expand" the table so I end up with something like:
mysql> select * from stuff;
+----+---------+
| id | counter |
+----+---------+
| 1 | 101 |
| 2 | 102 |
| 3 | 102 |
| 4 | 103 |
| 5 | 103 |
| 6 | 103 |
+----+---------+
And I want to do this "expansion" using only MySQL. Note that I end up with one row per unit of quantity for each counter. Any suggestions? A stored procedure is not an option here (I know they offer WHILE loops).
Thanks!
The following will do the trick as long as some_large_table has at least as many rows as the largest quantity in things_with_stuff. For example, let some_large_table be a big fact table in a data warehouse.
SELECT @kn := @kn + 1 AS id, counter
FROM (SELECT @kn := 0) k, things_with_stuff ts
INNER JOIN (
    SELECT @rn := @rn + 1 AS num
    FROM (SELECT @rn := 0) t, some_large_table
) nums ON num <= ts.quantity;
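If the goal is to actually fill the stuff table rather than just select the rows, the same query can feed an INSERT (a sketch, assuming stuff(id, counter) already exists; with an AUTO_INCREMENT id you could insert only counter instead):
INSERT INTO stuff (id, counter)
SELECT @kn := @kn + 1 AS id, counter
FROM (SELECT @kn := 0) k, things_with_stuff ts
INNER JOIN (
    SELECT @rn := @rn + 1 AS num
    FROM (SELECT @rn := 0) t, some_large_table
) nums ON num <= ts.quantity;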
Assuming there is a maximum value for quantity, you could do:
-- Assuming stuff.id is auto-incremented:
INSERT INTO stuff (counter) SELECT counter FROM things_with_stuff WHERE quantity > 0;
INSERT INTO stuff (counter) SELECT counter FROM things_with_stuff WHERE quantity > 1;
INSERT INTO stuff (counter) SELECT counter FROM things_with_stuff WHERE quantity > 2;
-- ... and so on until the max.
It's a bit of a hack but it should do the job.
If the ordering is important you could do a clean up afterwards.
I sometimes keep a table named num (numbers) in my databases, with a single column i filled with all the integers from 1 to 1000000. It's not hard to make such a table and populate it; see the sketch below.
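For instance, one way to build and fill it in MySQL 8+ (a sketch; the cte_max_recursion_depth override is only needed to reach 1000000 rows with a recursive CTE):
CREATE TABLE num (i INT PRIMARY KEY);

SET SESSION cte_max_recursion_depth = 1000000;

INSERT INTO num (i)
WITH RECURSIVE seq AS (
    SELECT 1 AS i
    UNION ALL
    SELECT i + 1 FROM seq WHERE i < 1000000
)
SELECT i FROM seq;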
Then you could use this, if stuff.id is auto-incremented:
INSERT INTO stuff (counter)
SELECT ts.counter
FROM things_with_stuff AS ts
JOIN num ON num.i <= ts.quantity;