Ledgers table

ledger_id | ledger_name | dep_id | dr_bal | cr_bal
--------- | ----------- | ------ | ------ | ------
1         | Purchase    | 2      | NULL   | NULL

Transactions table

trans_id | trans_date | ledger_id | ledger_name  | amount | trans_type
-------- | ---------- | --------- | ------------ | ------ | ----------
1        | 3/2/2004   | 1         | Purchase A/C | 84500  | dr
2        | 3/12/2004  | 6         | Cash A/C     | 20000  | cr
These are my tables, Ledgers and Transactions. I want to update ledgers.dr_bal from transactions.amount, based on ledger_id, which is the primary key of the ledgers table. That is, I want to copy the values from transactions.amount into dr_bal for the rows where trans_type = 'dr'.
So far I have tried:

UPDATE ledgers
SET dr_bal = (select sum(If(tbl_transactions.trans_type = 'dr' AND transactions.ledger_id = 1), amount, 0) FROM transactions)
where ledgers.ledger_id = 1;
But I am unable to run the above query; it throws an error at the WHERE clause at the end. I have looked through various questions here about updating tables, but I am really stuck.
Try this query!
UPDATE ledgers
LEFT JOIN (
    SELECT SUM(amount) soa, ledger_id
    FROM tbl_transactions
    WHERE tbl_transactions.trans_type = 'dr'
      AND tbl_transactions.ledger_id = 1
) t ON (ledgers.ledger_id = t.ledger_id)
SET ledgers.dr_bal = COALESCE(t.soa, 0);
If you would like to update all ledgers with their transaction amounts, remove the condition tbl_transactions.ledger_id = 1 and introduce GROUP BY tbl_transactions.ledger_id in the sub-query, as sketched below.
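A minimal sketch of that all-ledgers variant (same table names as above, untested):

UPDATE ledgers
LEFT JOIN (
    SELECT SUM(amount) soa, ledger_id
    FROM tbl_transactions
    WHERE tbl_transactions.trans_type = 'dr'
    GROUP BY tbl_transactions.ledger_id
) t ON (ledgers.ledger_id = t.ledger_id)
SET ledgers.dr_bal = COALESCE(t.soa, 0);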
I have two tables:

TABLE A

Unique_id | id | price
--------- | -- | -----
1         | 1  | 10.50
2         | 3  | 14.70
3         | 1  | 12.44

TABLE B

Unique_id | Date       | Category | Store | Cost
--------- | ---------- | -------- | ----- | -----
1         | 2022/03/12 | Shoes    | A     | 13.24
2         | 2022/04/15 | Hats     | A     | 15.24
3         | 2021/11/03 | Shoes    | B     | 22.31
4         | 2000/12/14 | Shoes    | A     | 15.33
I need to filter TABLE A on a known id to get the Unique_id and average price, and join that to TABLE B. Using this information I need to know which stores this item was sold in. I then need to create a results table displaying the stores and the number of days on which sales were recorded in those stores, regardless of whether the sales are associated with the id, along with the average cost.
To put it more simply, I can break the task down into 2 separate commands:
SELECT AVG(price)
FROM table_a
WHERE id = 1
GROUP BY unique_id;
SELECT store, COUNT(date), AVG(cost)
FROM table_b
WHERE category = 'Shoes'
GROUP BY store;
The unique_id should inform the join, but when I join the tables it breaks my COUNT function: it only counts the days on which the id is connected, not the total store sales days.
The results should look something like this:

Store | AVG price | COUNT days | AVG cost
----- | --------- | ---------- | --------
A     | 10.50     | 3          | 14.60
B     | 12.44     | 1          | 22.31
It was hard to grasp what you wanted, but after some thinking and your clarification it can be solved as the code below shows:
CREATE TABLE TableA
(`Unique_id` int, `id` int, `price` DECIMAL(10,2))
;
INSERT INTO TableA
(`Unique_id`, `id`, `price`)
VALUES
(1, 1, 10.50),
(2, 3, 14.70),
(3, 1, 12.44)
;
CREATE TABLE TableB
(`Unique_id` int, `Date` datetime, `Category` varchar(5), `Store` varchar(1), `Cost` DECIMAL(10,2))
;
INSERT INTO TableB
(`Unique_id`, `Date`, `Category`, `Store`, `Cost`)
VALUES
(1, '2022-03-12 01:00:00', 'Shoes', 'A', 13.24),
(2, '2022-04-15 02:00:00', 'Hats', 'A', 15.24),
(3, '2021-11-03 01:00:00', 'Shoes', 'B', 22.31),
(4, '2000-12-14 01:00:00', 'Shoes', 'A', 15.33)
;
SELECT
    B.`Store`
    , AVG(A.`price`) price
    , (SELECT COUNT(*) FROM TableB WHERE `Store` = B.`Store`) count_
    , (SELECT AVG(`cost`) FROM TableB WHERE `Store` = B.`Store`) avg_cost
FROM TableA A
JOIN TableB B ON A.`Unique_id` = B.`Unique_id`
WHERE B.`Category` = 'Shoes'
GROUP BY B.`Store`;

Store | price     | count_ | avg_cost
:---- | --------: | -----: | --------:
A     | 10.500000 | 3      | 14.603333
B     | 12.440000 | 1      | 22.310000
This should be the query you are after. Mainly you simply join the rows using an outer join, because not every table_b row has a match in table_a.
Then, the only hindrance is that you only want to consider shoes in your average price. For this to happen you use conditional aggregation (a CASE expression inside the aggregation function).
select
b.store,
avg(case when b.category = 'Shoes' then a.price end) as avg_shoe_price,
count(b.unique_id) as count_b_rows,
avg(b.cost) as avg_cost
from table_b b
left outer join table_a a on a.unique_id = b.unique_id
group by b.store
order by b.store;
I must admit it took me ages to understand what you want and where these numbers come from. The main reason is that you have WHERE table_a.id = 1 in your query, but that filter must not be applied to get the result you are showing. Next time please see to it that your description, queries and sample data match.
(And names like table_a, table_b and unique_id don't help either. If table_a were called prices, table_b costs, and unique_id cost_id, I wouldn't have had to wonder how the tables are related (by id? by unique_id?), nor look again and again at which table holds the cost, which holds a price, and which one is outer joined while studying the problem, the requested result, and writing my query.)
Hello, I have an issue I am working on for a theoretical problem. Assume I have these two tables:
Order table

Entry | Order# | DatePlaced | Type
----- | ------ | ---------- | ----
2001  | 5      | 2021-05-03 | C

Status table

Entry | Order# | Status | Date       | Deleted
----- | ------ | ------ | ---------- | -------
2001  | 5      | S      | 2021-05-04 | 0
2002  | 5      | D      | 2021-05-05 | 0
So I need to be able to get this:

Expected table

Entry | Order# | DatePlaced | Type | Status | Date       | Deleted
----- | ------ | ---------- | ---- | ------ | ---------- | -------
2002  | 5      | 2021-05-03 | C    | D      | 2021-05-05 | 0
This would be fairly easy if I could just left join the data. The issue is that the SQL in the code is already written as shown below; the tables are joined on the entry. Every time a new status occurs for an order#, the entry in the Order table is updated, EXCEPT when it is delivered. Due to how dependent the code is on it, I cannot simply update the initial query below. I was wondering if there is a join, or a way without using SET, to get the last status for the order. I was thinking we could check the order and then the entry, but I am not sure how to join that with the Current table (the data we get from the query).
SELECT * FROM orders o
LEFT JOIN status st ON o.entry = st.entry
WHERE st.deleted = 0;
This results in this:

Current table

Entry | Order# | DatePlaced | Type | Status | Date       | Deleted
----- | ------ | ---------- | ---- | ------ | ---------- | -------
2001  | 5      | 2021-05-03 | C    | S      | 2021-05-04 | 0
Is there a way to JOIN the status table with the Current Table so that the status columns become what I expect?
This will work just fine:
SELECT s.entry, s.order_no, o.date_placed, o.type, s.status, s.date, s.deleted
FROM `orders` o
INNER JOIN `status` s ON (
    s.order_no = o.order_no
    AND s.entry = (SELECT MAX(entry) FROM status WHERE order_no = o.order_no)
);
Live Demo
https://www.db-fiddle.com/f/twz1TT9VH7YNTY1KrpRAjx/3
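The same latest-status-per-order result can also be had without the correlated subquery, using the classic self-anti-join (a sketch with the same column names as above):

SELECT s.entry, s.order_no, o.date_placed, o.type, s.status, s.date, s.deleted
FROM `orders` o
INNER JOIN `status` s ON s.order_no = o.order_no
LEFT JOIN `status` s2 ON s2.order_no = s.order_no AND s2.entry > s.entry
WHERE s2.entry IS NULL;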
Does the last status have a higher entry number or a later created date? Perhaps include MAX(st.Entry) AS last_entry in your SELECT clause, select your fields explicitly instead of SELECT *, and add a GROUP BY after your WHERE clause and a HAVING after the GROUP BY:
create table orders (
    entry INT,
    order_number INT,
    date_placed date,
    order_type VARCHAR(1)
);

create table order_status (
    entry INT,
    order_number INT,
    order_status VARCHAR(1),
    date_created date,
    deleted INT
);
INSERT INTO orders (entry, order_number, date_placed, order_type) VALUES (2001, 5, '2021-05-03', 'C');
INSERT INTO order_status (entry, order_number, order_status, date_created, deleted)
VALUES
(2001, 5, 'S', '2021-05-04', 0),
(2002, 5, 'D', '2021-05-05', 0);
SELECT os.entry, o.order_number, o.date_placed, o.order_type,
       os.order_status, os.date_created, os.deleted,
       MAX(os.entry) AS last_entry
FROM orders o
LEFT JOIN order_status os ON o.order_number = os.order_number
GROUP BY o.order_number
HAVING os.entry = last_entry;

Note that this leans on MySQL's permissive handling of non-aggregated columns in GROUP BY; with ONLY_FULL_GROUP_BY enabled (the default since MySQL 5.7.5) the query will be rejected.
I have a table with a list of customers:

customer

c_id | c_name | c_email    | c_role
---- | ------ | ---------- | ------
1    | abc1   | a1@abc.com | Dev
2    | abc2   | a2@abc.com | Dev
3    | abc3   | a3@abc.com | Dev
4    | abc4   | a4@abc.com | Dev
5    | abc5   | a5@abc.com | Dev
6    | abc6   | a6@abc.com | Dev
7    | abc7   | a7@abc.com | Dev
8    | abc8   | a8@abc.com | Dev
9    | abc9   | a9@abc.com | Dev
I query the table in the following way:
select * from customer where c_role = 'Dev' order by c_id limit 2;
So, I get the results with:
c_id | c_name | c_email    | c_role
---- | ------ | ---------- | ------
1    | abc1   | a1@abc.com | Dev
2    | abc2   | a2@abc.com | Dev
The business requirements say that if any records were accessed by a set of users within the last 3 days, those records should not be returned in subsequent query output.
So, if the user runs the query again within the next 3 days:
select * from customer where c_role = 'Dev' order by c_id limit 2;
The result should be:
c_id | c_name | c_email    | c_role
---- | ------ | ---------- | ------
3    | abc3   | a3@abc.com | Dev
4    | abc4   | a4@abc.com | Dev
Can anyone help me how to create this kind of rule in MySQL?
Adding a new column to the current table is not going to help you.
You will have to create another table where you store each c_id a user has accessed and the datetime when the query was executed.
CREATE TABLE IF NOT EXISTS `access_record` (
    `id` INT(11) NOT NULL AUTO_INCREMENT,
    `c_id` INT(11) NOT NULL,     -- id of the record the user accessed
    `user_id` INT(11) NOT NULL,  -- id of the user who accessed the record
    `accessed_at` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
    PRIMARY KEY (`id`)
);
So whenever the user runs the next query you can use this table to know if user has already accessed a record or not and then use those c_ids to exclude them from next result set.
SELECT
    c.c_id, c.c_role, c.c_name, c.c_email
FROM
    customer AS c
WHERE
    c.c_role = 'Dev'
    AND c.c_id NOT IN (
        SELECT ar.c_id
        FROM access_record AS ar
        WHERE ar.user_id = 1  -- of course this will change with each user (the current user, I assume)
          AND ar.accessed_at > DATE_SUB(NOW(), INTERVAL 3 DAY)
    )
ORDER BY c.c_id
LIMIT 2;
This will give you all records which were not accessed by specific user within last 3 days.
I hope this helps.
Answering @dang's question in the comments:
How do I populate access_record when a query runs?
When you have fetched all the records, extract the c_ids from them and insert those c_ids into the access_record table.
In MySQL, this query should do the trick:

INSERT INTO access_record (c_id, user_id)
SELECT
    c.c_id, 1  -- user_id of the user who is fetching records
FROM
    customer AS c
WHERE
    c.c_role = 'Dev'
    AND c.c_id NOT IN (
        SELECT ar.c_id
        FROM access_record AS ar
        WHERE ar.user_id = 1  -- again, this changes with each user (the current user, I assume)
          AND ar.accessed_at > DATE_SUB(NOW(), INTERVAL 3 DAY)
    )
ORDER BY c.c_id
LIMIT 2;
You can also fetch those c_ids with one query then use second query to insert those c_ids into the access_record table.
If you have all your records fetched in $records, then:

$c_ids = array_column($records, 'c_id'); // get all c_ids from the fetched record array
Now run a query to insert all those c_ids.
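For example, if the fetched c_ids were 1 and 2 for user 1, the insert would be (a plain-SQL sketch; in PHP you would build the VALUES list from $c_ids):

INSERT INTO access_record (c_id, user_id)
VALUES (1, 1), (2, 1);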
I would add an extra table with users and access dates, and make the business logic update it on access. For example:

user | accessdate | c_id
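A minimal DDL sketch of that table (names taken from the example row, types assumed):

create table user_access (
    `user` int not null,       -- the accessing user
    accessdate datetime not null,
    c_id int not null          -- the customer row that was served
);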
Your customer table is data about customers; that is all it should hold.
However, your selection criteria are really not what they appear to be. What the business requirements want you to implement is a feed, or pipeline, with the selection acting as a consumer being fed by un-accessed customers.
Each user (or group of users, i.e. 'set of users') needs its own feed, but that can be managed by a single table with a distinguishing field. So we need a user_group table to group your 'set of users'.
user_group

g_id | g_data
---- | ------
201  | abc1
202  | abc2
203  | abc3
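The question never defines this table, so here is a minimal DDL sketch for it (column types are assumptions):

create table user_group (
    g_id int(11) not null primary key,
    g_data varchar(50) not null   -- whatever identifies the group
);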
We will need to populate customer_feed with the access timestamps for each customer. We can add foreign keys so that feed rows are removed when customers and user_groups are deleted, but we will still need to update the customer feed when we use it.
create table customer_feed (
    c_id int(11) not null,
    g_id int(11) not null,
    at timestamp not null,
    primary key (c_id, g_id),
    constraint customer_fk foreign key (c_id) references customer (c_id) on delete cascade,
    constraint user_group_fk foreign key (g_id) references user_group (g_id) on delete cascade
);
customer_feed

c_id | g_id | at
---- | ---- | -------------------
101  | 201  | 2018-11-26 07:40:21
102  | 201  | 2018-11-26 07:40:21
103  | 201  | 2018-11-26 07:40:22
When we want to read the customer data, we must do three queries:

1. Update the feed for the current user-group.
2. Get the users from the feed.
3. Mark the users in the feed as consumed.
So, let's say we are using user_group 201.
When we update the feed, any users newly added to it are available to be read straight away, so we give them a very early timestamp. We can commemorate the Battle of Hastings...
-- 1. update the customer_feed for user_group 201
insert into customer_feed
select c.c_id, 201, timestamp('1066-10-14')
from customer c
left join customer_feed f on c.c_id = f.c_id and f.g_id = 201
where f.c_id is null;
We select from the customer table and the feed, accepting only records whose access dates are more than three days old, i.e. not accessed within the last three days. This is the query you originally had, but with the feed restrictions.
-- 2. read from the feed for user_group 201
select c.*
from customer c, customer_feed f
where c.c_role = 'Dev'
  and f.g_id = 201
  and f.c_id = c.c_id
  and f.at < date_sub(now(), interval 3 day)
limit 2;
...and now we need to mark the values from the feed as consumed. So we gather the c_ids we have selected into a list, e.g. 102 and 103, and mark them as consumed.
-- 3. mark the feed as consumed for user_group 201
update customer_feed set at = now() where g_id = 201 and c_id in (102, 103);
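One design note: if several clients can consume the same feed concurrently, steps 2 and 3 should run in a single transaction with the read locking the feed rows, so two readers cannot consume the same entries. A sketch, assuming InnoDB:

start transaction;
-- step 2, locking the matched feed rows until commit
select c.* from customer c, customer_feed f
where c.c_role = 'Dev' and f.g_id = 201 and f.c_id = c.c_id
  and f.at < date_sub(now(), interval 3 day)
limit 2
for update;
-- step 3, using the c_ids just read
update customer_feed set at = now() where g_id = 201 and c_id in (102, 103);
commit;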
Add a new column in your customer table like start_accessing and then you can run the query:
SELECT *
FROM customer
WHERE c_role = 'Dev'
AND Date_add(start_accessing, INTERVAL 3 day) >= Curdate()
ORDER BY c_id
LIMIT 2;
start_accessing will be the column that records when the user started accessing the resource.
Add a datetime stamp to the table and query from that.
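In minimal form that could look like this (a sketch; the column name last_accessed is an assumption, and the application must set it whenever rows are served):

ALTER TABLE customer ADD COLUMN last_accessed DATETIME NULL;

SELECT *
FROM customer
WHERE c_role = 'Dev'
  AND (last_accessed IS NULL OR last_accessed < DATE_SUB(NOW(), INTERVAL 3 DAY))
ORDER BY c_id
LIMIT 2;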
There might be a way to get a 3-day rotation without having to change the tables: by calculating batches of devs and deriving the current batch from the current date.
The example below is for MySQL 5.x (no window functions).
set @date = current_date;
-- set @date = date('2020-07-04');
set @dayslimit = 3;
set @grouplimit = 2;
set @devcnt = (select count(*) cnt
               from test_customer
               where c_role = 'Dev');
set @curr_rnk = ((floor(datediff(@date, date('2000-01-01')) / @dayslimit) % floor(@devcnt / @dayslimit)) + 1);

select c_id, c_name, c_email, c_role
-- , rnk
from
(
    select t.*,
           case when @rn := @rn + 1
                then @rnk := ceil((@rn / @grouplimit) % (@devcnt + 1 / @grouplimit))
           end as rnk
    from test_customer t
    cross join (select @rn := 0, @rnk := null) vars
    where c_role = 'Dev'
    order by c_id
) q
where rnk = @curr_rnk
order by c_id
limit 2;
I am using a MySQL query to fetch data from 2 tables. I have a status Transfer Out in table2. I need to fetch all the details with status Transfer Out for which no Transfer In status was added after the Transfer Out, so that I do not get the details that were transferred back in after a Transfer Out.
Right now I am using a subquery for this. But when the data count gets higher, it causes timeout issues. Is there a better way to rewrite the query and get the same result?
My query sample is
SELECT sq.etid
FROM (
SELECT og.etid, pt.timestamp
FROM og_membership og
INNER JOIN table1 n ON(n.nid=og.etid)
INNER JOIN table2 pt ON(og.etid=pt.animal_nid)
WHERE og.entity_type='node'
AND pt.partner_gid = :gid
AND pt.shelter_gid = :our_gid
AND pt.type = 'Transfer Out'
AND (
SELECT count(id)
FROM table2
WHERE timestamp > pt.timestamp
AND type = 'Transfer In'
AND partner_gid = :gid
AND shelter_gid = :our_gid
) = 0
) AS sq
You could possibly do this with a GROUP BY, for example:

select something
from somewhere
group by something
having ifnull(max(out_date), 0) > ifnull(max(in_date), 0)
For example
DROP TABLE IF EXISTS LOANS;
CREATE TABLE LOANS (ID INT AUTO_INCREMENT PRIMARY KEY, ISBN INT,DIRECTION VARCHAR(3), DT DATETIME);
INSERT INTO LOANS(ISBN,DIRECTION,DT) VALUES
(1,'OUT','2017-10-01 09:00:00'),
(2,'OUT','2017-10-02 10:00:00'),
(2,'IN', '2017-10-02 10:10:00'),
(3,'REC','2017-10-02 10:00:00'),
(4,'REC','2017-10-02 10:00:00'),
(4,'OUT','2017-10-03 10:00:00'),
(4,'IN', '2017-10-04 10:00:00'),
(4,'OUT','2017-10-05 10:00:00')
;
SELECT ISBN
FROM LOANS
WHERE DIRECTION IN('OUT','IN')
GROUP BY ISBN HAVING
MAX(CASE WHEN DIRECTION = 'OUT' THEN DT ELSE 0 END) >
MAX(CASE WHEN DIRECTION = 'IN' THEN DT ELSE 0 END) ;
result
+------+
| ISBN |
+------+
| 1 |
| 4 |
+------+
2 rows in set (0.00 sec)
In case of a tie on DT you could substitute id.
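That variant might look like this (same table, comparing the auto-increment ID instead of the datetime):

SELECT ISBN
FROM LOANS
WHERE DIRECTION IN('OUT','IN')
GROUP BY ISBN HAVING
MAX(CASE WHEN DIRECTION = 'OUT' THEN ID ELSE 0 END) >
MAX(CASE WHEN DIRECTION = 'IN' THEN ID ELSE 0 END);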
Change
( SELECT COUNT(id) FROM ... ) = 0
to
NOT EXISTS ( SELECT 1 FROM ... )
Get rid of the outermost query and the first pt.timestamp. (Or is there some obscure reason for pt.timestamp?)
table2 needs INDEX(type, partner_gid, shelter_gid, timestamp). timestamp must be last, but the others can be in any order.
table2 needs INDEX(type, partner_gid, shelter_gid, animal_nid); the columns can be in any order. (This index cannot be combined with the previous one.)
og_membership needs INDEX(etid, entity_type) (in either order).
Why do you have INNER JOIN table1 n ON(n.nid=og.etid) in the query? table1 does not seem to be used at all -- except maybe for verifying the existence of a row. Remove it if possible.
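Putting those suggestions together, the reworked query and indexes might look like this (a sketch, untested against your real schema, assuming table1 really is removable):

SELECT og.etid
FROM og_membership og
INNER JOIN table2 pt ON (og.etid = pt.animal_nid)
WHERE og.entity_type = 'node'
  AND pt.partner_gid = :gid
  AND pt.shelter_gid = :our_gid
  AND pt.type = 'Transfer Out'
  AND NOT EXISTS (
      SELECT 1
      FROM table2
      WHERE timestamp > pt.timestamp
        AND type = 'Transfer In'
        AND partner_gid = :gid
        AND shelter_gid = :our_gid
  );

ALTER TABLE table2
    ADD INDEX idx_type_time (type, partner_gid, shelter_gid, timestamp),
    ADD INDEX idx_type_animal (type, partner_gid, shelter_gid, animal_nid);
ALTER TABLE og_membership
    ADD INDEX idx_etid_type (etid, entity_type);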
After making changes, please provide EXPLAIN SELECT ... and SHOW CREATE TABLE for the 2 (or 3??) tables.
I have a table which has three columns: an ID, a foreign key, and a status. I need to find the number of active (value = 1) statuses in the table and the total number of rows in the table. How do I do it in a single query?
The output of this query is joined with another table.
select FK, Count(1) as active_count, <missing> as total_count from table where status = 1;
Try:

select
    FK,
    count(1) total_count,
    sum(case when status = 1 then 1 else 0 end) active_count
from table
-- group by FK

Uncomment the GROUP BY if you want one row per FK; without it, the whole table is aggregated into a single row.
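Since the output is joined with another table, usage might look like this (a sketch; parent is a hypothetical table keyed by FK):

select p.*, t.total_count, t.active_count
from parent p
join (
    select FK,
           count(1) total_count,
           sum(case when status = 1 then 1 else 0 end) active_count
    from table
    group by FK
) t on t.FK = p.id;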