Showing all components in inventory with total stock - MySQL

I have a table which displays a list of all works orders, with part numbers and the quantities booked in those works orders as stock awaiting polishing.
What I want is to list each part number once and display the total stock per part.
The current output is:
Works Order | Part Number | Stock Awaiting Polishing (+ other columns)
1           | B01         | 5
2           | B012        | 12
3           | B012        | 43
4           | B014        | 32
What I want to display is:
Part Number | Stock Awaiting Polishing
B01         | 5
B012        | 55
B014        | 32
This may be easy, but I'm still learning. Can I get some help here?
SELECT
data.WORKS_ORDER1 AS Works_Order,
data.PART_NO1 AS Part_Number,
data.Part_Prim_Desc AS Part_Description,
data.Part_Secd_Desc AS Customer,
data.Qty_Painted,
data.Qty_Processed,
data.Available_Stock AS Stock_Awaiting_Polishing
FROM (
SELECT
wip.WO.WO_No AS [WORKS_ORDER1],
wip.WO.Part_No AS [PART_NO1],
wip.Ops.WO_No AS [WORKS_ORDER2],
production.Part.Part_No AS [PART_NO2],
production.Part.Part_Prim_Desc,
production.Part.Part_Secd_Desc,
wip.WO.Qty_Inc_Scrap AS Qty_Painted,
wip.Ops.Qty_Complete + wip.Ops.Qty_Rejected
+ wip.Ops.Qty_Scrapped AS Qty_Processed,
wip.WO.Qty_Inc_Scrap - (wip.Ops.Qty_Complete
+ wip.Ops.Qty_Rejected + wip.Ops.Qty_Scrapped) AS [Available_Stock]
FROM wip.Ops
INNER JOIN wip.WO ON wip.Ops.WO_No = wip.WO.WO_No
INNER JOIN production.Part ON wip.WO.Part_No = production.Part.Part_No
WHERE wip.WO.WO_Complete = 0 AND wip.WO.No_of_Ops_Completed = 1
AND wip.Ops.Op_No = 20 AND wip.Ops.WC_Code = 'VPO' AND wip.Ops.Completion_Ind_YN = 'N'
GROUP BY wip.WO.WO_No, wip.WO.Part_No,
wip.Ops.WO_No,
production.Part.Part_No,
production.Part.Part_Prim_Desc,
production.Part.Part_Secd_Desc,
wip.WO.Qty_Inc_Scrap,
wip.Ops.Qty_Complete,
wip.Ops.Qty_Rejected,
wip.Ops.Qty_Scrapped
) data
WHERE data.Available_Stock > 0
I'm starting my journey with SQL and the above code most likely could be simplified a lot, but it's all I've got and it works the way I want.
All help appreciated!
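One way to get the per-part totals (a sketch, not tested against your schema) is to keep the inner query you already have as the derived table and change the outer SELECT so that it groups on the part number and sums the available stock:

SELECT
    data.PART_NO1 AS Part_Number,
    SUM(data.Available_Stock) AS Stock_Awaiting_Polishing
FROM (
    /* the inner SELECT from your query goes here, unchanged */
) data
WHERE data.Available_Stock > 0
GROUP BY data.PART_NO1
ORDER BY data.PART_NO1;

If you also want the part description in the output, add it to both the SELECT list and the GROUP BY.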


Creating a column that is the product of two others

I need some help. I have two columns in a MySQL query result: one with text and another with decimal values, like this:
select desc, value from table a
|5,50 % | 2984.59 |
|Subs | 10951.70 |
|Isent | 3973.17 |
|13,30 % | 560.26 |
For the rows that contain a %, I want to multiply the value by that percentage and create a third result column, rounded up to two decimal places:
2984,59 * 0,055 = 164,15245
560,26 * 0,133 = 74,514
I need an SQL query that shows something like this:
+-------+-----------+-----------+
|5,50 % | 2984,59 | 164,16 |
|Subs | 10951,70 | 0 or NULL |
|Isent | 3973,17 | 0 or NULL |
|13,30% | 560,26 | 74,52 |
+-------+-----------+-----------+
How can I do it?
Thanks so much for your help.
It would be better to store floating-point numbers in the first place; converting costs time.
You have commas in your percentages, but MySQL needs dots there.
If the value isn't always a number, you can use the MySQL trick of adding 0 to it, which converts the leading numeric part of the string (strings with no leading number become 0).
SELECT `desc`, `value`, (REPLACE(`desc`,',','.') + 0) * `value` / 100 FROM val
desc    | value | (REPLACE(`desc`,',','.') + 0) * `value` / 100
5,50 %  | 2985  | 164.175
Subs    | 10952 | 0
Isent   | 3973  | 0
13,30 % | 560   | 74.48
fiddle
Since you want the result rounded up to two decimal places, apply CEIL to the value scaled by 100:
SELECT `desc`, `value`, CEIL((REPLACE(`desc`,',','.') + 0) * `value`) / 100 FROM val
desc    | value | CEIL((REPLACE(`desc`,',','.') + 0) * `value`) / 100
5,50 %  | 2985  | 164.18
Subs    | 10952 | 0
Isent   | 3973  | 0
13,30 % | 560   | 74.48
fiddle
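If you would rather get NULL than 0 for the rows that carry no percentage (the "0 or NULL" in the expected output), a small variant is to test whether `desc` starts with a digit. A sketch, assuming the same val table as above:

SELECT `desc`,
       `value`,
       CASE
           WHEN `desc` REGEXP '^[0-9]'   -- only rows that actually carry a percentage
               THEN CEIL((REPLACE(`desc`,',','.') + 0) * `value`) / 100
           ELSE NULL
       END AS calc
FROM val;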

Calculate percentage between two columns in SQL Query as another column per day

I am trying to calculate the daily % split of the No_of_daily_installs.
Question:
Could someone explain how I can add a new column that represents the daily split % of No_of_daily_installs per Lat_type?
What I have now:
| Install_Date | Lat_type            | No_of_daily_installs | RunningTotal_Installs |
|--------------|---------------------|----------------------|-----------------------|
| 2021-06-30   | Ad Tracking Enabled | 613                  | 21345                 |
| 2021-06-30   | Limit Ad Tracking   | 3723                 | 74273                 |
| 2021-06-29   | Limit Ad Tracking   | 3553                 | 70550                 |
| 2021-06-29   | Ad Tracking Enabled | 480                  | 20732                 |
| 2021-06-28   | Limit Ad Tracking   | 2869                 | 66997                 |
| 2021-06-28   | Ad Tracking Enabled | 375                  | 20252                 |
What I would like to achieve:
| Install_Date | Lat_type            | No_of_daily_installs | %_of_daily_LAT | RunningTotal_Installs |
|--------------|---------------------|----------------------|----------------|-----------------------|
| 2021-06-30   | Ad Tracking Enabled | 613                  | 0.15           | 21345                 |
| 2021-06-30   | Limit Ad Tracking   | 3723                 | 0.85           | 74273                 |
| 2021-06-29   | Limit Ad Tracking   | 3553                 | 0.80           | 70550                 |
| 2021-06-29   | Ad Tracking Enabled | 480                  | 0.20           | 20732                 |
| 2021-06-28   | Limit Ad Tracking   | 2869                 | 0.85           | 66997                 |
| 2021-06-28   | Ad Tracking Enabled | 375                  | 0.15           | 20252                 |
My code so far:
WITH "Adtracking" (
"Install_Date",
"Lat_type",
"No_of_daily_installs"
) AS (
SELECT
to_date("created_at") date_install,
CASE WHEN "tracking_limited" = '1' THEN
'Limit Ad Tracking'
ELSE
'Ad Tracking Enabled'
END AS "Ad Tracking",
count("tracking_limited") AS "number_of_occurences"
FROM
TEMP_DB.DATA_LAKE.ADJUST_CSV_DATA
WHERE
TRUE
AND "platform" = 'mobile_app'
AND "activity_kind" = 'install'
AND "os_name" = 'ios'
GROUP BY
1,
2
ORDER BY
1,
2
)
SELECT
"Install_Date",
"Lat_type",
"No_of_daily_installs",
SUM("No_of_daily_installs") OVER (PARTITION BY "Lat_type" ORDER BY "Install_Date") AS "RunningTotal_Installs"
FROM
"Adtracking"
WHERE
"Install_Date" ILIKE '2021-06%'
GROUP BY
1,
2,
3
ORDER BY
1,
2
The following divides the daily installs for each Lat type by the total (sum) of daily installs for that install date, and rounds the result to 2 decimal places.
ROUND("No_of_daily_installs"/(SUM("No_of_daily_installs") OVER (PARTITION BY "Install_Date")),2) AS "%_of_daily_LAT"
You may try the complete example:
WITH "Adtracking" (
"Install_Date",
"Lat_type",
"No_of_daily_installs"
) AS (
SELECT
to_date("created_at") date_install,
CASE WHEN "tracking_limited" = '1' THEN
'Limit Ad Tracking'
ELSE
'Ad Tracking Enabled'
END AS "Ad Tracking",
count("tracking_limited") AS "number_of_occurences"
FROM
TEMP_DB.DATA_LAKE.ADJUST_CSV_DATA
WHERE
TRUE
AND "platform" = 'mobile_app'
AND "activity_kind" = 'install'
AND "os_name" = 'ios'
GROUP BY
1,
2
ORDER BY
1,
2
)
SELECT
"Install_Date",
"Lat_type",
"No_of_daily_installs",
-- modification begins
ROUND("No_of_daily_installs"/(SUM("No_of_daily_installs") OVER (PARTITION BY "Install_Date")),2) AS "%_of_daily_LAT",
-- modification ends
SUM("No_of_daily_installs") OVER (PARTITION BY "Lat_type" ORDER BY "Install_Date") AS "RunningTotal_Installs"
FROM
"Adtracking"
WHERE
"Install_Date" ILIKE '2021-06%'
GROUP BY
1,
2,
3
ORDER BY
1,
2
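To see the window expression in isolation, here is a minimal, self-contained sketch with two hypothetical literal rows standing in for the Adtracking CTE (the column names and quoting follow the query above; the numbers are only illustrative):

WITH "Adtracking" ("Install_Date", "Lat_type", "No_of_daily_installs") AS (
    SELECT '2021-06-30', 'Ad Tracking Enabled', 613 UNION ALL
    SELECT '2021-06-30', 'Limit Ad Tracking', 3723
)
SELECT
    "Install_Date",
    "Lat_type",
    "No_of_daily_installs",
    -- share of that day's installs, rounded to 2 decimal places
    ROUND("No_of_daily_installs" / SUM("No_of_daily_installs") OVER (PARTITION BY "Install_Date"), 2) AS "%_of_daily_LAT"
FROM "Adtracking";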

MySQL calculating query

I have this table, only two columns, each record stores an interest rate for a given month:
id rate
===========
199502 3.63
199503 2.60
199504 4.26
199505 4.25
... ...
201704 0.79
201705 0.93
201706 0.81
201707 0.80
201708 0.14
Based on these rates, I need to create another table of accumulated rates with a similar structure, whose data is calculated as a function of a YYYYMM (year/month) parameter, in this way (the formula is legally mandated):
The month given as the parameter always has a rate of 0 (zero).
The month immediately before it always has a rate of 1 (one).
Each earlier month's rate is 1 (one) plus the sum of the rates of the months between that month and the month given as the parameter.
I'll clarify these rules with an example, given the parameter 201708:
SOURCE               CALCULATED
id      rate         id      rate
======= =====        ======= ========
199502  3.63         199502  360.97   (1 + sum(rate(199503) to rate(201707)))
199503  2.60         199503  358.37   (1 + sum(rate(199504) to rate(201707)))
199504  4.26         199504  354.11   (1 + sum(rate(199505) to rate(201707)))
199505  4.25         199505  349.86   (1 + sum(rate(199506) to rate(201707)))
...     ...          ...     ...
201704  0.79         201704  3.54     (1 + rate(201705) + rate(201706) + rate(201707))
201705  0.93         201705  2.61     (1 + rate(201706) + rate(201707))
201706  0.81         201706  1.80     (1 + rate(201707))
201707  0.80         201707  1.00     (by definition)
201708  0.14         201708  0.00     (by definition)
I've already implemented a VB.NET function that reads the source table and generates the calculated table, but this runs at runtime on each client machine:
Public Function AccumRates(targetDate As Date) As DataTable
Dim dtTarget = Rates.Clone
Dim targetId = targetDate.ToString("yyyyMM")
Dim targetIdAnt = targetDate.AddMonths(-1).ToString("yyyyMM")
For Each dr In Rates.Select("id<=" & targetId & " and id>199412")
If dr("id") = targetId Then
dtTarget.Rows.Add(dr("id"), 0)
ElseIf dr("id") = targetIdAnt Then
dtTarget.Rows.Add(dr("id"), 1)
Else
Dim intermediates =
Rates.Select("id>" & dr("id") & " and id<" & targetId).Select(
Function(ldr) New With {
.id = ldr.Field(Of Integer)("id"),
.rate = ldr.Field(Of Decimal)("rate")}
).ToArray
dtTarget.Rows.Add(
dr("id"),
1 + intermediates.Sum(
Function(i) i.rate))
End If
Next
Return dtTarget
End Function
My question is: how can I express this as a query in my database, so that it can be used dynamically by other queries that apply these accumulated rates to update debts to any given date?
Thank you very much!
EDIT
I managed to make a query that returns the data I want; now I just don't know how to encapsulate it so that it can be called from another query, passing any id as an argument (here I did it using a SET ... statement):
SET @targetId=201708;
SELECT
id AS id_acum,
COALESCE(1 + (SELECT
SUM(taxa)
FROM
tableSelic AS ts
WHERE
id > id_acum AND id < @targetId
LIMIT 1),
IF(id >= @targetId, 0, 1)) AS acum
FROM
tableSelic
WHERE id>199412;
That's because I'm pretty new to MySQL; I'm used to MS Access, where parameterized queries are very straightforward to create.
For example:
DROP TABLE IF EXISTS my_table;
CREATE TABLE my_table
(id INT NOT NULL PRIMARY KEY
,rate DECIMAL(5,2) NOT NULL
);
INSERT INTO my_table VALUES
(201704,0.79),
(201705,0.93),
(201706,0.81),
(201707,0.80),
(201708,0.14);
SELECT *
, CASE WHEN @flag IS NULL THEN @i:=1 ELSE @i:=@i+rate END i
, @flag:=1 flag
FROM my_table
, (SELECT @flag:=null,@i:=0) vars
ORDER
BY id DESC;
+--------+------+-------------+-------+------+------+
| id     | rate | @flag:=null | @i:=0 | i    | flag |
+--------+------+-------------+-------+------+------+
| 201708 | 0.14 | NULL        | 0     | 1    | 1    |
| 201707 | 0.80 | NULL        | 0     | 1.80 | 1    |
| 201706 | 0.81 | NULL        | 0     | 2.61 | 1    |
| 201705 | 0.93 | NULL        | 0     | 3.54 | 1    |
| 201704 | 0.79 | NULL        | 0     | 4.33 | 1    |
+--------+------+-------------+-------+------+------+
5 rows in set (0.00 sec)
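On MySQL 8.0 or later, the accumulation defined in the question (parameter month = 0, previous month = 1, earlier months = 1 plus the sum of the rates in between) can also be written with a window function instead of user variables. A sketch, assuming the same my_table and 201708 as the parameter:

SET @target_id = 201708;

SELECT id,
       rate,
       CASE
           WHEN id >= @target_id THEN 0   -- the parameter month is 0 by definition
           ELSE 1 + COALESCE(
               SUM(CASE WHEN id < @target_id THEN rate END)
                   OVER (ORDER BY id DESC
                         ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING), 0)
       END AS acum
FROM my_table
WHERE id <= @target_id
ORDER BY id;

On the sample rows this yields 0.00 for 201708, 1.00 for 201707, then 1.80, 2.61 and 3.54 for the earlier months, matching the CALCULATED column in the question.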
Ok, I made it with a function:
CREATE FUNCTION `AccumulatedRates`(start_id integer, target_id integer) RETURNS decimal(6,2)
BEGIN
DECLARE select_var decimal(6,2);
SET select_var = (
SELECT COALESCE(1 + (
SELECT SUM(rate)
FROM tableRates
WHERE id > start_id AND id < target_id LIMIT 1
), IF(id >= target_id, 0, 1)) AS acum
FROM tableRates
WHERE id=start_id);
RETURN select_var;
END
And then a simple query:
SELECT *, AccumulatedRates(id, @present_id) AS acum FROM tableRates;
where @present_id is passed as a parameter.
Thanks to all, anyway!

Two methods of performing cohort analysis in MySQL using joins

I am building a cohort analysis processor. Input parameters: a time range and step, a condition (initial event) to extract cohorts, and an additional condition (retention event) to check after each N hours/days/months. Output: a cohort analysis grid, like this:
0h | 16h | 32h | 48h | 64h | 80h | 96h |
cohort #00 15 | 6 | 4 | 1 | 1 | 2 | 2 |
cohort #01 1 | 35 | 8 | 0 | 2 | 0 | 1 |
cohort #02 0 | 3 | 31 | 11 | 5 | 3 | 0 |
cohort #03 0 | 0 | 4 | 27 | 7 | 6 | 2 |
cohort #04 0 | 1 | 1 | 4 | 29 | 4 | 3 |
Basically:
fetch cohorts: the unique users who did "something 1" in each period of length time_step, starting from time_begin.
find how many of them (in each cohort) did "something 2" after N seconds, N*2 seconds, N*3 seconds, and so on until now.
In short, I have two solutions. One works too slowly and runs a heavy select with joins for each time step: day 1, day 2, day 3, etc. I want to optimize it by joining the result for every time step to the cohorts - that's the second solution. It looks like it works, but I'm not sure it's the best way, or that it will give the same result if cohorts intersect. Please check it out.
Here's the whole story.
I have a table of > 100,000 events, something like this:
#user-id, timestamp, event_name
events_view (uid varchar(64), tm int(11), e varchar(64))
example input row:
"user_sampleid1", 1423836540, "level_end:001:win"
To make a cohort analysis, first I extract cohorts: for example, users who sent the special event '1st_launch' in 10-hour periods, starting from 2015-02-13 and ending with 2015-02-16. All code in this post is simplified and shortened to show the idea.
DROP TABLE IF EXISTS tmp_c;
create temporary table tmp_c (uid varchar(64), tm int(11), c int(11) );
set beg = UNIX_TIMESTAMP('2015-02-13 00:00:00');
set en = UNIX_TIMESTAMP('2015-02-16 00:00:00');
select min(tm) into t_start from events_view ;
select max(tm) into t_end from events_view ;
if beg < t_start then
set beg = t_start;
end if;
if en > t_end then
set en = t_end;
end if;
set period = 3600 * 10;
set cnt_c = ceil((en - beg) / period) ;
/*works quick enough*/
WHILE i < cnt_c DO
insert into tmp_c (
select uid, min(tm), i from events_view where
locate("1st_launch", e) > 0 and tm > (beg + period * i)
AND tm <= (beg + period * (i+1)) group by uid );
SET i = i+1;
END WHILE;
Cohorts may contain the same user ids, though usually a user exists in only one cohort. And within each cohort, users are unique.
Now I have temp table like this:
user_id | 1st timestamp | cohort_no
uid1 1423836540 0
uid2 1423839540 0
uid3 1423841160 1
uid4 1423841460 2
...
uidN 1423843080 M
Then I need to divide the time range into periods again and calculate, for each period, how many users from each cohort have sent the event "level_end:001:win".
For each small period I select all unique users who have sent the "level_end:001:win" event and left join them to the tmp_c cohorts table. So I have something like this:
user_id | 1st timestamp | cohort_no | user_id | other fields...
uid1 1423836540 0 uid1
uid2 1423839540 0 null
uid3 1423841160 1 null
uid4 1423841460 2 uid4
...
uidN 1423843080 M null
This way I see how many users from my cohorts are among those who have sent "level_end:001:win"; the ones not found are excluded by the where clause: where t2.uid is not null.
Finally I group and get the count of users in each cohort who have sent "level_end:001:win" in that particular period.
Here's the code:
DROP TABLE IF EXISTS tmp_res;
create temporary table tmp_res (uid varchar(64) CHARACTER SET cp1251 NOT NULL, c int(11), cnt int(11) );
set i = 0;
set cnt_c = ceil((t_end - beg) / period) ;
WHILE i < cnt_c DO
insert into tmp_res
select concat(beg + period * i, "_", beg + period * (i+1)), c, count(distinct(uid)) from
(select t1.uid, t1.c from tmp_c t1 left join
(select uid, min(tm) from events_view where
locate("level_end:001:win", e) > 0 and
tm > (beg + period * i) AND tm <= (beg + period * (i+1)) group by uid ) t2
on t1.uid = t2.uid where t2.uid is not null) t3
group by c;
SET i = i+1;
END WHILE;
/*getting result of the first method: tooo slooooow!*/
select * from tmp_res;
The result I've got (it's OK that some cohorts do not appear in some periods):
"1423832400_1423890000","1","35"
"1423832400_1423890000","2","3"
"1423832400_1423890000","3","1"
"1423832400_1423890000","4","1"
"1423890000_1423947600","1","21"
"1423890000_1423947600","2","50"
"1423890000_1423947600","3","2"
"1423947600_1424005200","1","9"
"1423947600_1424005200","2","24"
"1423947600_1424005200","3","70"
"1423947600_1424005200","4","6"
"1424005200_1424062800","1","7"
"1424005200_1424062800","2","15"
"1424005200_1424062800","3","21"
"1424005200_1424062800","4","32"
"1424062800_1424120400","1","7"
"1424062800_1424120400","2","13"
"1424062800_1424120400","3","24"
"1424062800_1424120400","4","18"
"1424120400_1424178000","1","10"
"1424120400_1424178000","2","12"
"1424120400_1424178000","3","18"
"1424120400_1424178000","4","14"
"1424178000_1424235600","1","6"
"1424178000_1424235600","2","7"
"1424178000_1424235600","3","9"
"1424178000_1424235600","4","12"
"1424235600_1424293200","1","6"
"1424235600_1424293200","2","8"
"1424235600_1424293200","3","9"
"1424235600_1424293200","4","5"
"1424293200_1424350800","1","5"
"1424293200_1424350800","2","3"
"1424293200_1424350800","3","11"
"1424293200_1424350800","4","10"
"1424350800_1424408400","1","8"
"1424350800_1424408400","2","5"
"1424350800_1424408400","3","7"
"1424350800_1424408400","4","7"
"1424408400_1424466000","2","6"
"1424408400_1424466000","3","7"
"1424408400_1424466000","4","3"
"1424466000_1424523600","1","3"
"1424466000_1424523600","2","4"
"1424466000_1424523600","3","8"
"1424466000_1424523600","4","2"
"1424523600_1424581200","2","3"
"1424523600_1424581200","3","3"
It works but it takes too much time to process because there are many queries here instead of one, so I need to rewrite it.
I think it can be rewritten with joins, but I'm still not sure how.
I decided to make a temporary table and write period boundaries in it:
DROP TABLE IF EXISTS tmp_times;
create temporary table tmp_times (tm_start int(11), tm_end int(11));
set cnt_c = ceil((t_end - beg) / period) ;
set i = 0;
WHILE i < cnt_c DO
insert into tmp_times values( beg + period * i, beg + period * (i+1));
SET i = i+1;
END WHILE;
Then I build the period-to-events mapping (user_id + timestamp identify a particular event), left join it to the cohorts table, and group the result:
SELECT Concat(tm_start, "_", tm_end) per,
t1.c coh,
Count(DISTINCT( t2.uid ))
FROM tmp_c t1
LEFT JOIN (SELECT *
FROM tmp_times t3
LEFT JOIN (SELECT uid,
tm
FROM events_view
WHERE Locate("level_end:101:win", e) > 0)
t4
ON ( t4.tm > t3.tm_start
AND t4.tm <= t3.tm_end )
WHERE t4.uid IS NOT NULL
ORDER BY t3.tm_start) t2
ON t1.uid = t2.uid
WHERE t2.uid IS NOT NULL
GROUP BY per,
coh
ORDER BY per,
coh;
In my tests this returns the same result as method #1. I can't check the result manually, but I understand method #1 better, and as far as I can see it gives what I want. Method #2 is faster, but I'm not sure it's the best way, or that it will give the same result if cohorts intersect.
Maybe there are well-known, common methods to perform cohort analysis in SQL? Is method #1 more reliable than method #2? I don't work with joins that often, which is why I don't fully understand the join magic yet.
Method #2 looks like pure magic, and I tend not to trust what I don't understand :)
Thanks for answers!
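For what it's worth, here is a sketch of a third, single-pass approach (not one of the two methods above): compute the period number directly with FLOOR instead of looping, then aggregate once. It assumes the same events_view(uid, tm, e); note that it puts every user in exactly one cohort (the period of their first '1st_launch'), and boundary handling at exact period edges may differ by one period from the WHILE-loop version:

SET @beg    = UNIX_TIMESTAMP('2015-02-13 00:00:00');
SET @period = 3600 * 10;

SELECT c.cohort_no,
       FLOOR((r.tm - @beg) / @period) AS period_no,
       COUNT(DISTINCT r.uid)          AS users
FROM (
       -- one row per user: the period of their first '1st_launch' event
       SELECT uid, FLOOR((MIN(tm) - @beg) / @period) AS cohort_no
       FROM events_view
       WHERE LOCATE('1st_launch', e) > 0 AND tm > @beg
       GROUP BY uid
     ) c
JOIN events_view r
  ON  r.uid = c.uid
  AND LOCATE('level_end:001:win', r.e) > 0
  AND r.tm > @beg
GROUP BY c.cohort_no, period_no
ORDER BY c.cohort_no, period_no;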

Get the greatest ID from multiple tables and remove duplicate values

I have a problem obtaining values from my DB. This is my first table:
ID_HARGA | ID_USER | ID_BAG_PEMASARAN | ID_ITEM | HARGA   | ENTRY_DATE
1        | 9       | 1                | 3       | 1000000 | 2015-01-11 09:55:27
2        | 9       | 1                | 5       | 2000000 | 2015-01-13 07:19:10
5        | 9       | 1                | 3       | 3000000 | 2015-01-13 13:47:32
6        | 9       | 1                | 43      | 7000000 | 2015-01-13 13:49:49
13       | 9       | 1                | 50      | 3000000 | 2015-01-13 17:56:54
37       | 9       | 1                | 50      | 100     | 2015-01-19 09:08:20
this is the second one
ID_ITEM | NAMA_ITEM            | SPEC_ITEM    | MASA_GARANSI | STATUS_ITEM | DIR_IMAGE
3       | water heater emas    | bagus sekali | selamanya    | 1           |
5       | water heater tembaga | bagus super  | selamanya    | 1           |
43      | water hater          | heater seupo | 3            | 1           | water_22.jpg
50      | tankkk               | 50 Liter     | 1            | 1           | water_2.jpg
I know there are some null values; just ignore them. I have already done this:
public function get_item()
{
// count the number of item variants available
$querycounter = $this->db->query('select * from ITEM WHERE STATUS_ITEM = 1');
$counter = $querycounter->num_rows();
$this->db->select('ITEM.ID_ITEM,NAMA_ITEM,SPEC_ITEM,HARGA,DIR_IMAGE');
$this->db->from('ITEM');
$this->db->join('HARGA', 'ITEM.ID_ITEM = HARGA.ID_ITEM','left');
// $this->db->order_by('ENTRY_DATE','DESC');
//$this->db->limit($counter);
$this->db->where('STATUS_ITEM',1);
//$query = $this->db->query('select from ITEM i join HARGA h on i.ID_ITEM = h.ID_ITEM order by h.ENTRY_DATE ASC');
$query = $this->db->get();
return $query->result_array();
//$this->db->select('nama_item','spec_item');
//$query = $this->db->get('item');
}
What I want is this (just ignore DIR_IMAGE):
ID_ITEM | NAMA_ITEM            | HARGA   | DIR_IMAGE
3       | water heater emas    | 1000000 |
5       | water heater tembaga | 3000000 |
43      | water hater          | 7000000 |
50      | tankk                | 100     |
I want the row with the biggest ID, not the old one, so that every time I insert a new HARGA for an ID_ITEM, the result always shows the newest HARGA.
Thanks in advance.
There is a little confusion here: do you want to get the maximum HARGA, or the newest inserted data (identifiable by ID_HARGA, if it auto-increments by 1) in the HARGA table? Your code should be like:
$this->db->select('ITEM.ID_ITEM,NAMA_ITEM,SPEC_ITEM,max(HARGA),DIR_IMAGE');
$this->db->from('ITEM');
$this->db->join('HARGA', 'ITEM.ID_ITEM = HARGA.ID_ITEM','left');
$this->db->where('STATUS_ITEM',1);
$this->db->group_by('ITEM.ID_ITEM');
This will join your tables and select the maximum value of HARGA. To use the MAX function you need a group_by clause (see the CodeIgniter documentation), otherwise it will show only a single maximum value. group_by shows the maximum value for each group (each item).
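For reference, the raw SQL those builder calls roughly correspond to looks like the sketch below (an approximation, not generated output). Note that MAX(HARGA) returns the largest price per item, which is not necessarily the most recently inserted one; for ID_ITEM 50 it would return 3000000 rather than the newer 100:

-- Approximate SQL behind the query-builder calls above (a sketch).
SELECT ITEM.ID_ITEM, NAMA_ITEM, SPEC_ITEM, MAX(HARGA) AS HARGA, DIR_IMAGE
FROM ITEM
LEFT JOIN HARGA ON ITEM.ID_ITEM = HARGA.ID_ITEM
WHERE STATUS_ITEM = 1
GROUP BY ITEM.ID_ITEM;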
If you want to get the newest inserted data, you can use the same technique with:
$this->db->select('MAX(ID_HARGA),ITEM.ID_ITEM,NAMA_ITEM,SPEC_ITEM,HARGA,DIR_IMAGE');
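If the goal is the newest HARGA per item rather than the largest, the usual pattern is to join each item to the HARGA row with the greatest ID_HARGA for that item. A plain-SQL sketch (assuming ID_HARGA grows with each insert), which can then be translated back into the query builder:

-- One newest HARGA row per item: pick the row whose ID_HARGA is the maximum for that item.
SELECT i.ID_ITEM, i.NAMA_ITEM, i.SPEC_ITEM, h.HARGA, i.DIR_IMAGE
FROM ITEM i
LEFT JOIN HARGA h
       ON  h.ID_ITEM  = i.ID_ITEM
       AND h.ID_HARGA = (SELECT MAX(h2.ID_HARGA)
                         FROM HARGA h2
                         WHERE h2.ID_ITEM = i.ID_ITEM)
WHERE i.STATUS_ITEM = 1;

For ID_ITEM 50 this returns the newest price (100) instead of the maximum (3000000).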