Iterate through entries of column and calculate with them - mysql

I need to iterate through the entries of one specific column and calculate with the entries. My table looks like this:
FreeKB (column to iterate) | FileSystem | Date       | System
---------------------------|------------|------------|--------
 5000                      | TestFS     | 2017-03-28 | TestSys
 7000                      | TestFS     | 2017-03-27 | TestSys
 3000                      | TestFS     | 2017-03-26 | TestSys
10000                      | TestFS     | 2017-03-25 | TestSys
 9000                      | TestFS     | 2017-03-24 | TestSys
 8000                      | TestFS     | 2017-03-23 | TestSys
10000                      | TestFS     | 2017-03-22 | TestSys
11000                      | TestFS     | 2017-03-21 | TestSys
The question is: how do I iterate through all the entries of "FreeKB" and calculate with them? To be more specific: I want to calculate the median of all entries where the amount of FreeKB is shrinking. I'm familiar with scripting and a little C++, but I'm a newbie to SQL.
Sorry if the answer seems obvious...
Greetings
Edit:
For the result, I want to iterate somehow through the entries of the last 7 days for each single FileSystem and System in the table, look where the amount of FreeKB shrinks, and calculate the median of the shrink amounts. Example: from 2017-03-27 to 2017-03-28 the amount of FreeKB shrinks by 2000 KB, from the 25th to the 26th by 7000, and from the 22nd to the 23rd by 2000. I want to take the median of those numbers and estimate when the FileSystem might become full, so I can send an e-mail alert.

You can use CROSS APPLY in SQL Server.
Here are some links to get you started:
http://weblogs.sqlteam.com/jeffs/archive/2007/10/18/sql-server-cross-apply.aspx
https://www.mssqltips.com/sqlservertip/1958/sql-server-cross-apply-and-outer-apply/
https://technet.microsoft.com/en-us/library/ms175156(v=sql.105).aspx
Here is some sample code:
CREATE TABLE #test (FreeKB int, FileSystem varchar(50), [Date] datetime)

INSERT INTO #test
SELECT 1001, 'TestFS', '2016/12/14' UNION
SELECT 1111, 'TestFS', '2017/01/01' UNION
SELECT 1223, 'TestFS', '2017/01/15' UNION
SELECT 1233, 'TestFS', '2017/01/02' UNION
SELECT 1321, 'TestFS', '2017/01/31' UNION
SELECT 1400, 'TestFS', '2016/12/12' UNION
SELECT 1456, 'TestFS', '2017/03/13'

SELECT a.*, b.NewColumn
FROM #test a
CROSS APPLY (
    SELECT a.FreeKB / (SELECT COUNT(FreeKB) FROM #test) AS NewColumn
) b
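Note that CROSS APPLY is SQL Server syntax, while the question is tagged MySQL. As a hedged alternative, here is a minimal sketch for MySQL 8+ that uses the LAG() window function to compute the day-over-day shrink per FileSystem and System over the last 7 days. The table name usage_log is an assumption, not from the question, and since MySQL has no built-in median function, the median of the resulting shrink values would still be computed in the application or a follow-up query:

-- A minimal sketch, assuming MySQL 8+ and a table named usage_log
-- with the columns shown in the question.
SELECT FileSystem, `System`, `Date`, shrink_kb
FROM (
    SELECT FileSystem, `System`, `Date`,
           LAG(FreeKB) OVER (PARTITION BY FileSystem, `System`
                             ORDER BY `Date`) - FreeKB AS shrink_kb
    FROM usage_log
    WHERE `Date` >= CURDATE() - INTERVAL 7 DAY
) d
WHERE shrink_kb > 0;  -- keep only the day-to-day drops in free space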

Related

laravel group by date in join query to find sum of values

I am looking for a Laravel developer to solve a simple issue. I have 3 tables that I am joining to get data. The model data looks like this:
date | order number | amount
I need to group by date and find the sum of amount. The raw data looks like this:
date | order number | amount
12/06/2022 | ask20 | 150
12/06/2022 | ask20 | 50
13/06/2022 | ask21 | 120
15/06/2022 | ask20 | 110
15/06/2022 | ask23 | 10
16/06/2022 | ask20 | 30
Now, I need to group by date to get the value like this:
date | order number | amount
12/06/2022 | ask20 | 200 (added value)
13/06/2022 | ask21 | 120
15/06/2022 | ask20 | 110 (not added as the order number is different)
15/06/2022 | ask23 | 10
16/06/2022 | ask20 | 30
Remember, I am getting this data by joining 3 tables. Can anyone help solve this?
This looks like a simple SUM with GROUP BY:
SELECT date, order_number, SUM(amount)
FROM <YOUR BIGGER QUERY..>
GROUP BY date, order_number
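For illustration, here is a hedged sketch of what wrapping the bigger join query as a derived table could look like. The table and column names orders, order_items, and customers are assumptions for the example, not from the question:

-- Hypothetical sketch: wrap the 3-table join in a derived table, then aggregate.
SELECT d.`date`, d.order_number, SUM(d.amount) AS total_amount
FROM (
    SELECT o.`date`, o.order_number, i.amount
    FROM orders o
    JOIN order_items i ON i.order_id = o.id
    JOIN customers c ON c.id = o.customer_id  -- third joined table, assumed
) d
GROUP BY d.`date`, d.order_number;

In Laravel's query builder, the same shape maps onto selectRaw() and groupBy() calls over the joined query.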

Detecting variations in a data set

I have a data set with this structure:
ContractNumber | MonthlyPayment | Duration | StartDate | EndDate
One contract number can occur many times as this data set is a consolidation of different reports with the same structure.
Now I want to filter / find the contract numbers in which MonthlyPayment and/or Duration and/or StartDate and/or EndDate differ.
Example (note that ContractNumber is not a primary key):
ContractNumber | MonthlyPayment | Duration | StartDate | EndDate
001 | 500 | 12 | 01.01.2015 | 31.12.2015
001 | 500 | 12 | 01.01.2015 | 31.12.2015
001 | 500 | 12 | 01.01.2015 | 31.12.2015
002 | 1500 | 24 | 01.01.2014 | 31.12.2017
002 | 1500 | 24 | 01.01.2014 | 31.12.2017
002 | 1500 | 24 | 01.01.2014 | 31.12.2018
With this sample data set, I would need to retrieve 002 with a specific query: 001 stays the same, but 002 changes over time.
Besides writing a VBA script running over an Excel sheet, I don't have any solid idea how to solve this with SQL.
My first idea would be an SQL approach with grouping, where identical values are grouped together but differing ones are not. I am currently experimenting with this. My attempt is currently:
1.) Have the usual table
2.) Create a second table / query with this structure:
ContractNumber | AVG(MonthlyPayment) | AVG(Duration) | AVG(StartDate) | AVG(EndDate)
which I created with grouping.
E.g.:
Table 1.)
ContractNumber | MonthlyPayment
1 | 10
1 | 10
1 | 20
2 | 300
2 | 300
2 | 300
Table 2.)
ContractNumber | AVG(MonthlyPayment)
1 | 13.3
2 | 300
3.) Now I want to find the distinct contract numbers where a value - in this example only the MonthlyPayment - does not equal the average (they should be equal; otherwise we have a variation, which is what I need to find).
Do you have any idea how I could solve this? I would otherwise start writing a VBA or Python script. I have the data set in CSV, so for now I could also do it with MySQL, Power BI or Excel.
I need to perform this analysis once, so I don't need a fully automated approach; the queries can be split into different steps.
Much appreciated! Thank you very much.
To find all contract numbers with differences, use:
select ContractNumber
from
(
    select distinct ContractNumber, MonthlyPayment, Duration, StartDate, EndDate
    from MyTable
) x
group by ContractNumber
having count(*) > 1
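As a hedged alternative on MySQL (which the poster mentions is available), COUNT(DISTINCT ...) with multiple arguments flags the varying contracts directly, against the same assumed table name MyTable:

-- Contracts whose attribute combination is not constant across rows.
-- Note: rows where any listed column is NULL are ignored by COUNT(DISTINCT ...).
SELECT ContractNumber
FROM MyTable
GROUP BY ContractNumber
HAVING COUNT(DISTINCT MonthlyPayment, Duration, StartDate, EndDate) > 1;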

Can't figure out a proper MySQL query

I have a table with the following structure:
id | workerID | materialID | date | materialGathered
Different workers contribute different amounts of different material per day. A single worker can only contribute once a day, but not necessarily every day.
What I need to do is figure out which of them was the most productive and which was the least productive, measured as the AVG() material gathered per day.
I honestly have no idea how to do that, so I'll appreciate any help.
EDIT1:
Some sample data
1 | 1 | 2013-01-20 | 25
2 | 1 | 2013-01-21 | 15
3 | 1 | 2013-01-22 | 17
4 | 1 | 2013-01-25 | 28
5 | 2 | 2013-01-20 | 23
6 | 2 | 2013-01-21 | 21
7 | 3 | 2013-01-22 | 17
8 | 3 | 2013-01-24 | 15
9 | 3 | 2013-01-25 | 19
Doesn't really matter how the output looks, to be honest. Maybe a simple table like this:
workerID | avgMaterialGatheredPerDay
And I didn't really attempt anything because I literally have no idea, haha.
EDIT2:
Any time period that is in the table (from earliest to latest date in the table) is considered.
Material doesn't matter at the moment. Only the arbitrary units in the materialGathered column matter.
Since your comments say we should look at each worker's average daily output, rather than who gathered the most in a given time span, the answer is rather easy: group by workerid to get one result record per worker, and use AVG to get the average amount:
select workerid, avg(materialgathered) as avg_gathered
from work
group by workerid;
Now to the best and worst workers. There can be more than two of these, so you cannot just take the first or last record; you need to know the maximum and the minimum avg_gathered.
select max(avg_gathered) as max_avg_gathered, min(avg_gathered) as min_avg_gathered
from
(
    select avg(materialgathered) as avg_gathered
    from work
    group by workerid
) as per_worker;
Now join the two queries to get all workers that have the minimum or maximum average:
select worker.*
from
(
    select workerid, avg(materialgathered) as avg_gathered
    from work
    group by workerid
) as worker
inner join
(
    select max(avg_gathered) as max_avg_gathered, min(avg_gathered) as min_avg_gathered
    from
    (
        select avg(materialgathered) as avg_gathered
        from work
        group by workerid
    ) as per_worker
) as worked on worker.avg_gathered in (worked.max_avg_gathered, worked.min_avg_gathered)
order by worker.avg_gathered;
There are other ways to do this. For example, with HAVING avg(materialgathered) IN (select min(avg_gathered)...) OR avg(materialgathered) IN (select max(avg_gathered)...) instead of a join. The join is very effective, though, because you need just one select for both min and max.
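On MySQL 8+, a hedged alternative sketch ranks the per-worker averages with window functions in a single pass (same table and column names as above):

-- Rank each worker's average from both ends, then keep the extremes.
SELECT workerid, avg_gathered
FROM (
    SELECT workerid,
           AVG(materialgathered) AS avg_gathered,
           RANK() OVER (ORDER BY AVG(materialgathered) DESC) AS rnk_best,
           RANK() OVER (ORDER BY AVG(materialgathered) ASC) AS rnk_worst
    FROM work
    GROUP BY workerid
) ranked
WHERE rnk_best = 1 OR rnk_worst = 1;  -- ties included, matching the join approach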

How to create a week, month, year summary of a database

I want to create an application that summarizes the values of each column. I have a table like this, where each row contains one item of goods:
Date       | Company_Name | Order_cost | Weight
2013-05-15 | Dunaferr     | 310        | 1200
2013-05-18 | Pentele      | 220        | 1600
2013-05-25 | Dunaferr     | 310        | 1340
What I exactly need is a table or view that contains the totals of the Weight column for each week, where the week is extracted from the Date column.
Something like this:
company_name | week1 | week2 | week3 | week4 ...
dunaferr | 35000 | 36000 | 28000 | 3411
pentele | 34000 | 255000 | 3341 | 3433
Is there any way to do this?
I would do this in two steps:
The first step is an SQL query that gets a summary with a SUM for Weight, grouped by year and week:
SELECT Company_Name, YEARWEEK(Date), SUM(Weight) FROM table GROUP BY Company_Name, YEARWEEK(Date)
http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_yearweek.
The second step would be to process this into the required format in the application layer.
If you absolutely have to do this in the database, then you are looking at implementing a pivot table, which has previously been covered here: MySQL pivot table
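If the pivot must happen in the database, here is a hedged sketch of the usual conditional-aggregation approach. The week values are hard-coded for illustration, `orders` is an assumed table name, and in practice the column list is often generated dynamically:

-- Pivot weekly weight totals into columns; each SUM(CASE ...) builds one week column.
SELECT Company_Name,
       SUM(CASE WHEN YEARWEEK(`Date`) = 201319 THEN Weight ELSE 0 END) AS week19,
       SUM(CASE WHEN YEARWEEK(`Date`) = 201320 THEN Weight ELSE 0 END) AS week20,
       SUM(CASE WHEN YEARWEEK(`Date`) = 201321 THEN Weight ELSE 0 END) AS week21
FROM orders
GROUP BY Company_Name;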

Get rank within table for any number

I have a bidding system in place. The user enters how much he wants to bid, which then sends a request via ajax to a PHP script, which then gets what rank that bid would place under the existing bids, and then displays it back to the bidder. This allows him to increase his bid to get the rank he wants.
For example:
+-----------+------------+
| bidder_id | bid_amount |
+-----------+------------+
| 1 | 20 |
| 2 | 20 |
| 3 | 30 |
| 5 | 40 |
| 6 | 10 |
+-----------+------------+
The user bids $15, and the query gets the rank as 5th.
What would this query look like? Is it possible to insert a fake row with the new user's bid and then order everything?
Something simple like this should do it:
SELECT COUNT(*) + 1 AS `rank`
FROM bids
WHERE bid_amount > 15
An SQLfiddle to test with.
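As a quick usage sketch against the sample data above (the table and column names come from the question; in the real PHP script the bid amount would arrive as a bound parameter rather than a literal):

-- Recreate the sample bids, then rank a hypothetical $15 bid.
CREATE TABLE bids (bidder_id INT, bid_amount INT);
INSERT INTO bids VALUES (1, 20), (2, 20), (3, 30), (5, 40), (6, 10);

SELECT COUNT(*) + 1 AS `rank`  -- 4 higher bids exist, so $15 ranks 5th
FROM bids
WHERE bid_amount > 15;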