I am stuck in the following situation: I receive some value; let's suppose I get 1000.
And I have a table as shown below.
When the value comes in, I need a single query that updates this whole table,
deducting from each row until the full amount is paid or the remaining value becomes zero.
Once a row's value is fully paid, its status becomes 1.
After the update the table looks like this.
If any value is left over at the end, it should be stored in a variable.
This is what I tried:
$userval = 1000;
$sql = "
update challan_1 t1,
(
SELECT x.id , SUM(y.bal) balance
FROM (
SELECT *,amount - '$userval' bal FROM challan_1
) x
JOIN
(
SELECT *,paid - amount bal FROM challan_1
) y
ON
y.id <= x.id GROUP BY x.id
)t2 set t1.paid =balance WHERE t1.id = t2.id
";
echo $sql;
But it gives the following result.
I also used this query, and it gives the paid field as per the requirement:
SELECT x.id , x.amount , x.amount as paid_amount , SUM(y.bal) as total, x.reciept_no
FROM
(
SELECT *,paid bal FROM challan_1
) x
JOIN
(
SELECT *,amount bal FROM challan_1
)y
ON y.id <= x.id
GROUP BY x.id
HAVING total <= '500'
I am trying to work through a problem where there are multiple accounts of the same scheme under the same customer id. For a given txn date I want to retrieve the total sanctioned limit and the total utilized amount from these accounts. Below is the SQL query I have constructed.
SELECT
cust_id,
tran_date,
rollover_date,
next_rollover,
(
SELECT
acc_num as kcc_ac
FROM
dbzsubvention.acc_disb_amt a
WHERE
(a.tran_date <= AB.tran_date)
AND a.sch_code = 'xxx'
AND a.cust_id = AB.cust_id
ORDER BY
a.tran_date desc
LIMIT
1
) KCC_ACC,
(
SELECT
SUM(kcc_prod)
FROM
(
SELECT
prod_limit as kcc_prod,
acc_num,
s.acc_status
FROM
dbzsubvention.acc_disb_amt a
inner join dbzsubvention.acc_rollover_all_sub_status s using (acc_num)
left join dbzsubvention.acc_close_date c using (acc_num)
WHERE
a.cust_id = AB.cust_id
AND a.tran_date <= AB.tran_date
AND (
ac_close > AB.tran_date || ac_close is null
)
AND a.sch_code = 'xxx'
AND s.acc_status = 'R'
AND s.rollover_date <= AB.tran_date
AND (
AB.tran_date < s.next_rollover || s.next_rollover is null
)
GROUP BY
acc_num
order by
a.tran_date
) t
) kcc_prod,
(
SELECT
sum(disb_amt)
FROM
(
SELECT
disb_amt,
acc_num,
tran_date
FROM
(
SELECT
disb_amt,
a.acc_num,
a.tran_date
FROM
dbzsubvention.acc_disb_amt a
inner join dbzsubvention.acc_rollover_all_sub_status s using (acc_num)
left join dbzsubvention.acc_close_date c using (acc_num)
WHERE
a.tran_date <= AB.tran_date
AND (
c.ac_close > AB.tran_date || c.ac_close is null
)
AND a.sch_code = 'xxx'
AND a.cust_id = AB.cust_id
AND s.acc_status = 'R'
AND s.rollover_date <= AB.tran_date
AND (
AB.tran_date < s.next_rollover || s.next_rollover is null
)
GROUP BY
acc_num,
a.tran_date
order by
a.tran_date desc
) t
GROUP BY
acc_num
) tt
) kcc_disb
FROM
dbzsubvention.acc_disb_amt AB
WHERE
AB.cust_id = 'abcdef'
group by
cust_id,
tran_date
order by
tran_date asc;
This query isn't working. Upon research I have found that a correlated subquery can reference the outer query only one nesting level down, but I couldn't find a workaround for this problem.
I have tried searching for a solution but couldn't find the desired one. Using the SUM function in the inner query will not give the desired results, because:
In the second subquery it would sum all the values in the column before the GROUP BY clause is applied.
In the third subquery the sorting has to be done first, then the grouping, and finally the sum.
I am therefore reaching out to the community for help with a workaround to this issue.
You're correct: an outer query's column cannot be referenced more than one nesting level down directly.
Try this workaround:
SELECT ... -- outer query
( -- correlated subquery nesting level 1
SELECT ...
( -- correlated subquery nesting level 2
SELECT ...
...
WHERE table0_level1.column0_1 ... -- moved value
)
FROM table1
-- move through nesting level making it a source of current level
CROSS JOIN ( SELECT table0.column0 AS column0_1 ) AS table0_level1
) AS ...,
...
FROM table0
...
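Often the same goal can also be reached by decorrelating the subquery entirely: pre-aggregate per key in a derived table and join it back, so no outer column has to cross a nesting level at all. Below is a minimal, hedged sketch of that alternative (hypothetical table and column names, not the poster's schema), using Python's sqlite3 only because it is easy to run; the SQL has the same shape in MySQL.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE disb (cust_id TEXT, tran_date TEXT, amt INT);
INSERT INTO disb VALUES
  ('a', '2018-01-01', 100),
  ('a', '2018-02-01', 200),
  ('b', '2018-01-15', 50);
""")

# Instead of a correlated subquery evaluated per outer row, aggregate
# per customer once in a derived table, then join it back on the key.
rows = con.execute("""
SELECT d.cust_id, d.tran_date, t.total_amt
FROM disb d
JOIN (SELECT cust_id, SUM(amt) AS total_amt
      FROM disb
      GROUP BY cust_id) t
  ON t.cust_id = d.cust_id
ORDER BY d.cust_id, d.tran_date
""").fetchall()
print(rows)
# [('a', '2018-01-01', 300), ('a', '2018-02-01', 300), ('b', '2018-01-15', 50)]
```

Whether this is feasible here depends on whether each subquery's filters (rollover window, close date) can be expressed as join conditions; the CROSS JOIN trick above remains the fallback when they cannot.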
I am trying to get the min and max prices over the last 6 months from my table, grouped by month. My query does not return the corresponding row values, such as the datetime at which the max or min price occurred.
I want to select the min and max prices, the datetime each of them occurred, and the rest of the data for those rows.
(The reason I CONCAT report_term is that I need to print it with the data set when displaying results, e.g. February 2018 -> ..., January 2018 -> ...)
SELECT metal_price_id, CONCAT(MONTHNAME(metal_price_datetime), ' ', YEAR(metal_price_datetime)) AS report_term, max(metal_price) as highest_gold_price, metal_price_datetime FROM metal_prices_v2
WHERE metal_id = 1
AND DATEDIFF(NOW(), metal_price_datetime) BETWEEN 0 AND 180
GROUP BY report_term
ORDER BY metal_price_datetime DESC
I have made an example with an extract from my DB:
http://sqlfiddle.com/#!9/617bcb2/4/0
My desired result would be the min and max prices grouped by month, along with the date of the min and the date of the max, all within the last 6 months.
Thanks.
UPDATE.
The code below works, but it returns rows from beyond the 180 days specified. I have just checked, and that is because it joins on the price, which may be duplicated a number of times over the years... see: http://sqlfiddle.com/#!9/5f501b/1
You could inner join twice on the subselects for min and max:
select a.metal_price_datetime
, t1.highest_gold_price
, t1.report_term
, t2.lowest_gold_price
,t2.metal_price_datetime
from metal_prices_v2 a
inner join (
SELECT CONCAT(MONTHNAME(metal_price_datetime), ' ', YEAR(metal_price_datetime)) AS report_term
, max(metal_price) as highest_gold_price
from metal_prices_v2
WHERE metal_id = 1
AND DATEDIFF(NOW(), metal_price_datetime) BETWEEN 0 AND 180
GROUP BY report_term
) t1 on t1.highest_gold_price = a.metal_price
inner join (
select a.metal_price_datetime
, t.lowest_gold_price
, t.report_term
from metal_prices_v2 a
inner join (
SELECT CONCAT(MONTHNAME(metal_price_datetime), ' ', YEAR(metal_price_datetime)) AS report_term
, min(metal_price) as lowest_gold_price
from metal_prices_v2
WHERE metal_id = 1
AND DATEDIFF(NOW(), metal_price_datetime) BETWEEN 0 AND 180
GROUP BY report_term
) t on t.lowest_gold_price = a.metal_price
) t2 on t2.report_term = t1.report_term
Here is a simplified version of what you should do, so you can learn the working process.
You need to calculate the min() and max() of the periods you need. That is the first brick of this building.
You have tableA; you calculate the min() and call it R1:
SELECT group_field, min() as min_value
FROM TableA
GROUP BY group_field
Do the same for max() and call it R2:
SELECT group_field, max() as max_value
FROM TableA
GROUP BY group_field
Now you need to bring in all the data from the original rows, so you join each result with your original table.
We call those T1 and T2:
SELECT tableA.group_field, tableA.value, tableA.date
FROM tableA
JOIN ( ... .. ) as R1
ON tableA.group_field = R1.group_field
AND tableA.value = R1.min_value
SELECT tableA.group_field, tableA.value, tableA.date
FROM tableA
JOIN ( ... .. ) as R2
ON tableA.group_field = R2.group_field
AND tableA.value = R2.max_value
Now we join T1 and T2.
SELECT *
FROM ( .... ) as T1
JOIN ( .... ) as T2
ON t1.group_field = t2.group_field
So the idea is: once you can build one brick, you build the next one. Then you can also add filters such as "last 6 months" or anything else you need.
In this case the group_field is the CONCAT() value.
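The bricks above can be assembled into one runnable sketch (hypothetical prices table, month as the group field; shown with Python's sqlite3 only because it is easy to run — in MySQL the group field would be the CONCAT() value):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE prices (dt TEXT, price REAL);
INSERT INTO prices VALUES
  ('2018-01-05', 10.0), ('2018-01-20', 30.0),
  ('2018-02-03', 15.0), ('2018-02-27', 5.0);
""")

# Brick 1: min per month (R1) and max per month (R2).
# Brick 2: join each back to the table to recover the row's date.
# Brick 3: join the two results on the group field.
rows = con.execute("""
SELECT t1.mth, t1.min_price, a.dt AS min_dt,
       t2.max_price, b.dt AS max_dt
FROM (SELECT substr(dt, 1, 7) AS mth, MIN(price) AS min_price
      FROM prices GROUP BY mth) t1
JOIN prices a ON substr(a.dt, 1, 7) = t1.mth AND a.price = t1.min_price
JOIN (SELECT substr(dt, 1, 7) AS mth, MAX(price) AS max_price
      FROM prices GROUP BY mth) t2 ON t2.mth = t1.mth
JOIN prices b ON substr(b.dt, 1, 7) = t2.mth AND b.price = t2.max_price
ORDER BY t1.mth
""").fetchall()
print(rows)
```

Note the join-back conditions include the month as well as the price, which is exactly what avoids the duplicated-price problem mentioned in the question's UPDATE.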
I have the following query:
SELECT x.id , x.amount , x.amount as paid_amount , SUM(y.bal) as total, x.reciept_no
FROM (SELECT *, paid bal
FROM challan_1 ) x
JOIN (SELECT *, amount bal
FROM challan_1 ) y
ON y.id <= x.id
GROUP BY x.id
HAVING total <= '500'
It works quite fine, with output like this.
Then I made a new query, shown below:
SELECT *, (CASE WHEN 500-sum(amount) >= 0
THEN '0'
ELSE 500-SUM(paid) END) as pending_amt
FROM challan_1
Its output is:
This query returns a pending amount, and I need that pending amount in the first query. How can I combine both queries?
This is my SQL Fiddle.
The desired result: the user has 500 currency and there are 3 payments, so the output should look like this,
where 100 is the pending amount and 200 of the user's value is debited.
I don't completely understand, but here is my take on this: 500 is available, and there are records where payments are made, strangely enough even beyond 500, so I assume those are would-be expenses/payments if there were more money available. I stop where payments exceed the 500.
SELECT
challan.*,
SUM(addup.amount) as total_amount,
sum(addup.paid) as total_paid,
sum(addup.amount) - sum(addup.paid) as total_pending,
sum(addup.amount) <= sum(addup.paid) as status
FROM challan_1 challan
JOIN challan_1 addup ON addup.id <= challan.id
GROUP BY challan.id
HAVING sum(addup.paid) <= 500
ORDER BY challan.id;
If you want to show the remaining records as well, i.e. get rid of the HAVING clause, you'll need another formula for the pending amount, since the highest total that can actually be paid is 500:
SELECT
challan.*,
SUM(addup.amount) as total_amount,
sum(addup.paid) as total_paid,
sum(addup.amount) - least(500, sum(addup.paid)) as total_pending,
sum(addup.amount) <= least(500, sum(addup.paid)) as status
FROM challan_1 challan
JOIN challan_1 addup ON addup.id <= challan.id
GROUP BY challan.id
ORDER BY challan.id;
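To make the arithmetic concrete, here is the capped-pending query run on the question's scenario (500 available, three payments of 200); a sketch using Python's sqlite3 with hypothetical sample rows, where MySQL's LEAST(a, b) is spelled MIN(a, b):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE challan_1 (id INT, amount INT, paid INT);
INSERT INTO challan_1 VALUES (1, 200, 200), (2, 200, 200), (3, 200, 200);
""")

# Self-join: for each row, aggregate over all rows with id <= this id,
# capping the payable total at the 500 the user actually has.
rows = con.execute("""
SELECT c.id,
       SUM(a.amount) AS total_amount,
       SUM(a.amount) - MIN(500, SUM(a.paid)) AS total_pending
FROM challan_1 c
JOIN challan_1 a ON a.id <= c.id
GROUP BY c.id
ORDER BY c.id
""").fetchall()
print(rows)
# [(1, 200, 0), (2, 400, 0), (3, 600, 100)]
```

The last row shows the 100 pending that the question asks for: 600 owed in total, at most 500 payable.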
Add the subquery as another join.
SELECT x.id , x.amount , x.amount as paid_amount , SUM(y.bal) as total, x.reciept_no, p.pending_amt
FROM (SELECT *, paid bal
FROM challan_1 ) x
JOIN (SELECT *, amount bal
FROM challan_1 ) y
ON y.id <= x.id
CROSS JOIN (SELECT CASE WHEN SUM(amount) <= 500
THEN '0'
ELSE 500 - SUM(paid)
END AS pending_amt
FROM challan_1) AS p
GROUP BY x.id
HAVING total <= '500'
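The mechanics of that CROSS JOIN can be seen in isolation: a subquery that returns exactly one row attaches the same scalar to every row of the main query. A small sketch with hypothetical sample data (the paid values chosen so the pending amount comes out positive), using Python's sqlite3:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE challan_1 (id INT, amount INT, paid INT);
INSERT INTO challan_1 VALUES (1, 200, 200), (2, 200, 200), (3, 200, 0);
""")

# The one-row aggregate subquery p is CROSS JOINed onto every row,
# so each result row carries the same overall pending_amt.
rows = con.execute("""
SELECT c.id, c.amount, p.pending_amt
FROM challan_1 c
CROSS JOIN (SELECT CASE WHEN SUM(amount) <= 500 THEN 0
                        ELSE 500 - SUM(paid)
                   END AS pending_amt
            FROM challan_1) p
ORDER BY c.id
""").fetchall()
print(rows)
# [(1, 200, 100), (2, 200, 100), (3, 200, 100)]
```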
I have 100 records from 3 users. I want to show the most recent record from each user. I have the following query:
SELECT *
FROM Mytable
WHERE Dateabc = CURRENT DATE
AND timeabc =
(
SELECT MAX(timeabc)
FROM Mytable
)
It returns the most recent record overall, but I need it to return the most recent record for every user.
Should the solution support both DB2 and MySQL?
SELECT * FROM Mytable as x
WHERE Dateabc = CURRENT_DATE
AND timeabc = (SELECT MAX( timeabc ) FROM Mytable as y where x.user = y.user)
If it's only DB2, more efficient solutions exist:
SELECT * from (
SELECT x.*, row_number() over (partition by user order by timeabc desc) as rn
FROM Mytable as x
)
WHERE rn = 1
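The row_number() approach is not DB2-only: MySQL 8+ and SQLite 3.25+ support the same window function. A quick sketch with hypothetical sample data (a userid column is assumed), using Python's sqlite3:

```python
import sqlite3  # SQLite 3.25+ supports window functions

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Mytable (userid TEXT, timeabc TEXT);
INSERT INTO Mytable VALUES
  ('a', '09:00'), ('a', '11:00'),
  ('b', '10:30'), ('c', '08:15');
""")

# Number each user's rows newest-first; rn = 1 picks the latest per user.
rows = con.execute("""
SELECT userid, timeabc FROM (
  SELECT x.*,
         ROW_NUMBER() OVER (PARTITION BY userid
                            ORDER BY timeabc DESC) AS rn
  FROM Mytable x
) WHERE rn = 1
ORDER BY userid
""").fetchall()
print(rows)
# [('a', '11:00'), ('b', '10:30'), ('c', '08:15')]
```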
I assume somewhere in your table you have a userID...
select userID, max(timeabc) from mytable group by userID
SELECT *
FROM Mytable as a
WHERE Dateabc = CURRENT_DATE
AND timeabc =
(
SELECT MAX( timeabc )
FROM Mytable as b
WHERE a.uId = b.uId
)
I am new to SQL and this forum has been my lifeline till now. Thank you for creating and sharing this great platform.
I am currently working on a large dataset and would appreciate some guidance.
The data table (existing_table) has 4 million rows and it looks like this:
id date sales_a sales_b sales_c sales_d sales_e
Please note that there are multiple rows with the same date.
What I want to do is to add 5 more columns in this table (cumulative_sales_a, cumulative_sales_b, etc.) which will have the cumulative sales figures for a, b, c, etc. till a particular date (this will be grouped by date). I used the following code to do this:
create table new_cumulative
select t.id, t.date, t.sales_a, t.sales_b, t.sales_c, t.sales_d, t.sales_e,
(select sum(x.sales_a) from existing_table x where x.id = t.id and x.date <= t.date) as cumulative_sales_a,
(select sum(x.sales_b) from existing_table x where x.id = t.id and x.date <= t.date) as cumulative_sales_b,
(select sum(x.sales_c) from existing_table x where x.id = t.id and x.date <= t.date) as cumulative_sales_c,
(select sum(x.sales_d) from existing_table x where x.id = t.id and x.date <= t.date) as cumulative_sales_d,
(select sum(x.sales_e) from existing_table x where x.id = t.id and x.date <= t.date) as cumulative_sales_e
from existing_table t
group by t.id, t.date;
I had created an index on the column 'id' before running this query.
Though I got the desired output, this query took almost 11 hours to finish.
I was wondering if I am doing something wrong here and if there is a better (and faster) way of running such queries.
Thank you for your help.
Some queries are expensive by nature and take a long time to execute. In this particular case you could avoid having 5 subqueries:
SELECT a.*, b.cumulative_sales_a, b.cumulative_sales_b, ...
FROM
(
select t.id, t.`date`, t.sales_a, t.sales_b, t.sales_c, t.sales_d, t.sales_e
from existing_table t
GROUP BY t.id,t.`date`
)a
LEFT JOIN
(
select x.id, x.date, sum(x.sales_a) as cumulative_sales_a,
sum(x.sales_b) as cumulative_sales_b, ...
FROM existing_table x
GROUP BY x.id, x.`date`
)b ON (b.id = a.id AND b.`date` <=a.`date`)
It's also an expensive query, but it should have a better execution plan than your original. Also, I'm not sure whether
select t.id, t.`date`, t.sales_a, t.sales_b, t.sales_c, t.sales_d, t.sales_e
from existing_table t
GROUP BY t.id,t.`date`
gives you what you want: for instance, if you have 5 records with the same id and date, it will grab the values of the other fields (sales_a, sales_b, etc.) from an arbitrary one of those 5 records...
You may combine all the mini-selects with SUM into one query. Instead of
(select sum(x.sales_a) from existing_table x where x.id = t.id and x.date <= t.date) as cumulative_sales_a,
(select sum(x.sales_b) from existing_table x where x.id = t.id and x.date <= t.date) as cumulative_sales_b,
(select sum(x.sales_c) from existing_table x where x.id = t.id and x.date <= t.date) as cumulative_sales_c,
(select sum(x.sales_d) from existing_table x where x.id = t.id and x.date <= t.date) as cumulative_sales_d,
(select sum(x.sales_e) from existing_table x where x.id = t.id and x.date <= t.date) as cumulative_sales_e
use
select sum(x.sales_a), sum(x.sales_b), sum(x.sales_c), sum(x.sales_d), sum(x.sales_e)
from existing_table x
where x.id = t.id and x.date <= t.date
Note that a subquery in the SELECT list can return only a single column, so this combined form has to be joined as a derived table rather than kept in the SELECT list.
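Expressed as a join, one self-join pass computes every cumulative column at once instead of one correlated subquery per column. A sketch with Python's sqlite3 (hypothetical sample data, two sales columns shown for brevity; the SQL is the same shape in MySQL):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE existing_table (id INT, date TEXT, sales_a INT, sales_b INT);
INSERT INTO existing_table VALUES
  (1, '2018-01-01', 10, 1),
  (1, '2018-01-02', 20, 2),
  (2, '2018-01-01', 5, 7);
""")

# For each (id, date), sum all of that id's rows up to that date;
# every cumulative column comes out of the single join.
rows = con.execute("""
SELECT t.id, t.date,
       SUM(x.sales_a) AS cumulative_sales_a,
       SUM(x.sales_b) AS cumulative_sales_b
FROM existing_table t
JOIN existing_table x ON x.id = t.id AND x.date <= t.date
GROUP BY t.id, t.date
ORDER BY t.id, t.date
""").fetchall()
print(rows)
# [(1, '2018-01-01', 10, 1), (1, '2018-01-02', 30, 3), (2, '2018-01-01', 5, 7)]
```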
This looks like an excellent spot for MySQL user-variable querying. In this case, I would pre-query all the aggregations by your expected ID and date to remove the duplicates and get a single grand-total row per ID per day, ordered by ID and date to prepare for joining to the user-variable pass.
Then process the rows in order, accumulating for each ID; when a new ID appears, reset the counters back to zero, but keep adding the respective sales. After each record is processed, set @lastID to the ID just processed so it can be compared when processing the next row, to identify whether we are continuing with the same person or must reset back to zero. (Relying on SELECT-list evaluation order of user variables is deprecated in MySQL 8, which prefers window functions.)
To optimize, ensure the inner "PreAgg"regate query can use an index on (ID, Date). It should then be super fast for you.
SELECT
PreAgg.ID,
PreAgg.`Date`,
PreAgg.SalesA,
PreAgg.SalesB,
PreAgg.SalesC,
PreAgg.SalesD,
PreAgg.SalesE,
@CumulativeA := IF( @lastID = PreAgg.ID, @CumulativeA, 0 ) + PreAgg.SalesA as CumulativeA,
@CumulativeB := IF( @lastID = PreAgg.ID, @CumulativeB, 0 ) + PreAgg.SalesB as CumulativeB,
@CumulativeC := IF( @lastID = PreAgg.ID, @CumulativeC, 0 ) + PreAgg.SalesC as CumulativeC,
@CumulativeD := IF( @lastID = PreAgg.ID, @CumulativeD, 0 ) + PreAgg.SalesD as CumulativeD,
@CumulativeE := IF( @lastID = PreAgg.ID, @CumulativeE, 0 ) + PreAgg.SalesE as CumulativeE,
@lastID := PreAgg.ID as dummyPlaceholder
from
( select
t.id,
t.`date`,
SUM( t.sales_a ) SalesA,
SUM( t.sales_b ) SalesB,
SUM( t.sales_c ) SalesC,
SUM( t.sales_d ) SalesD,
SUM( t.sales_e ) SalesE
from
existing_Table t
group by
t.id,
t.`date`
order by
t.id,
t.`date` ) PreAgg,
( select
@lastID := 0,
@CumulativeA := 0,
@CumulativeB := 0,
@CumulativeC := 0,
@CumulativeD := 0,
@CumulativeE := 0 ) sqlvars
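On MySQL 8.0+ (and SQLite 3.25+), a window function replaces both the user variables and the correlated subqueries: pre-aggregate per (id, date), then take a running SUM per id ordered by date. A hedged sketch with Python's sqlite3 and hypothetical sample data (two sales columns shown for brevity):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE existing_table (id INT, date TEXT, sales_a INT, sales_b INT);
INSERT INTO existing_table VALUES
  (1, '2018-01-01', 10, 1),
  (1, '2018-01-01', 10, 1),
  (1, '2018-01-02', 20, 2),
  (2, '2018-01-01', 5, 7);
""")

# Inner query collapses duplicate (id, date) rows to daily totals;
# the running SUM then partitions by id and orders by date.
rows = con.execute("""
SELECT id, date, sales_a, sales_b,
       SUM(sales_a) OVER (PARTITION BY id ORDER BY date) AS cumulative_sales_a,
       SUM(sales_b) OVER (PARTITION BY id ORDER BY date) AS cumulative_sales_b
FROM (SELECT id, date,
             SUM(sales_a) AS sales_a, SUM(sales_b) AS sales_b
      FROM existing_table
      GROUP BY id, date) p
ORDER BY id, date
""").fetchall()
print(rows)
# [(1, '2018-01-01', 20, 2, 20, 2), (1, '2018-01-02', 20, 2, 40, 4),
#  (2, '2018-01-01', 5, 7, 5, 7)]
```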