How to avoid duplicate data scanning while running an Athena query

I have two queries that calculate some attributes from the table agg_data. The second one exists only to find the median value grouped by msgdate. My expected output should have these five fields:
msgdate, avg-Total, avg-duration, stddev, and median. Currently I am combining the two with a UNION, which works fine. I execute this query in AWS Athena. Because the second query reads agg_data again, the data scanned is doubled: if the input data size is 4 MB, the Athena history page shows 8 MB scanned.
I want to avoid the second scan to save cost. Can you please help me achieve this by reading the agg_data table only once?
Query 1: To calculate avg-Total, avg-duration, stddev
SELECT b.msgdate1 AS msgdate,
       ROUND(b.avrg, 3) AS "avg-Total",
       ROUND(AVG(b.duration), 3) AS "avg-duration",
       ROUND(b.stdv, 3) AS stddev
FROM (
    SELECT AVG(a2.duration) OVER (PARTITION BY a2.msgdate) AS avrg,
           a2.duration AS duration,
           a2.msgdate AS msgdate1,
           CASE
               WHEN STDDEV(a2.duration) OVER (PARTITION BY a2.msgdate) IS NULL THEN 0
               ELSE STDDEV(a2.duration) OVER (PARTITION BY a2.msgdate)
           END AS stdv
    FROM agg_data a2
) AS b
GROUP BY b.msgdate1, b.avrg, b.stdv
Query 2: To calculate median
WITH RankedTable AS
(
SELECT msgdate, duration,
ROW_NUMBER() OVER (PARTITION BY msgdate ORDER BY duration) AS Rnk,
COUNT(*) OVER (PARTITION BY msgdate) AS Cnt
FROM agg_data
)
SELECT msgdate, duration AS median
FROM RankedTable
WHERE Rnk = Cnt / 2 + 1 OR Cnt = 1

I'm sure there's some trick that could do what you ask, but it won't be easy with all those window functions; combining them is always complicated.
If you can live with an approximation, you could use the approx_percentile function: approx_percentile(column, 0.5) approximates the median. It can be computed in your first query, removing the need for the second one.
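A minimal sketch of what that single-scan query could look like (note that avg-Total and avg-duration in the original both reduce to the per-msgdate average once everything is grouped by msgdate, and COALESCE stands in for the CASE that maps a NULL stddev to 0):
-- one pass over agg_data; approx_percentile(duration, 0.5) approximates the median
SELECT msgdate,
       ROUND(AVG(duration), 3) AS "avg-duration",
       ROUND(COALESCE(STDDEV(duration), 0), 3) AS stddev,
       APPROX_PERCENTILE(duration, 0.5) AS median
FROM agg_data
GROUP BY msgdate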

Related

Finding a percentile value in MySQL 5.7?

I have a table which contains thousands of rows and I would like to calculate the 90th percentile for one of the fields, called 'round'.
For example, select the value of round which is at the 90th percentile.
I don't see a straightforward way to do this in MySQL.
Can somebody provide some suggestions as to how I may start this sort of calculation?
Thank you!
First, let's assume that you have a table with a value column. You want to get the row with the 95th percentile value. In other words, you are looking for a value that is bigger than 95 percent of all values.
Here is a simple answer:
SELECT * FROM
(SELECT t.*, @row_num := @row_num + 1 AS row_num FROM YOUR_TABLE t,
(SELECT @row_num := 0) counter ORDER BY YOUR_VALUE_COLUMN)
temp WHERE temp.row_num = ROUND(.95 * @row_num);
Compare solutions:
The number of seconds it took on my server to get the 99th percentile of 1.3 million rows:
LIMIT x,y with index and no where: 0.01 seconds
LIMIT x,y with no where: 0.7 seconds
LIMIT x,y with where: 2.3 seconds
Full scan with no where: 1.6 seconds
Full scan with where: 5.7 seconds
Fastest solution for large tables using LIMIT x,y:
Get count of values: SELECT COUNT(*) AS cnt FROM t
Get the nth value, where n = (cnt - 1) * (1 - 0.95): SELECT k FROM t ORDER BY k DESC LIMIT n,1
This solution requires two queries, because MySQL does not support variables in the LIMIT clause except in stored procedures (where this can be optimized). Usually the additional query overhead is very low.
The solution can be further optimized if you add an index on the k column and avoid complex WHERE clauses (around 0.01 seconds for a table with 1 million rows, because no sorting is needed).
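As a sketch of those two steps as actual statements (t and k as above; since a bare LIMIT cannot take a variable, a prepared statement binds it instead):
-- an index on k lets the ORDER BY read the index instead of sorting
CREATE INDEX idx_k ON t (k);

-- step 1: count the rows
SELECT COUNT(*) INTO @cnt FROM t;

-- step 2: n = (cnt - 1) * (1 - 0.95), bound into LIMIT via a prepared statement
SET @n := CAST((@cnt - 1) * (1 - 0.95) AS UNSIGNED);
PREPARE stmt FROM 'SELECT k FROM t ORDER BY k DESC LIMIT ?, 1';
EXECUTE stmt USING @n;
DEALLOCATE PREPARE stmt;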
Implementation example in PHP (it can calculate the percentile not only of columns but also of expressions):
// Note: query() is assumed to be the author's wrapper around mysql_query();
// the legacy mysql_* API reflects the era of this answer.
function get_percentile($table, $where, $expr, $percentile) {
    if ($where) $subq = "WHERE $where";
    else $subq = "";

    // total row count, using the same filter
    $r = query("SELECT COUNT(*) AS cnt FROM $table $subq");
    $w = mysql_fetch_assoc($r);

    // offset of the percentile row, counted from the top (DESC order)
    $num = abs(round(($w['cnt'] - 1) * (100 - $percentile) / 100.0));

    $q = "SELECT ($expr) AS prcres FROM $table $subq ORDER BY ($expr) DESC LIMIT $num,1";
    $r = query($q);
    if (!mysql_num_rows($r)) return null;
    $w = mysql_fetch_assoc($r);
    return $w['prcres'];
}
// Usage example
$time = get_percentile(
    "state",                                // table
    "service='Time' AND cnt>0 AND total>0", // some filter
    "total/cnt",                            // expression to evaluate
    80);                                    // percentile
The SQL standard provides the PERCENTILE_DISC and PERCENTILE_CONT inverse distribution functions for precisely this job. Implementations are available in at least Oracle, PostgreSQL, SQL Server, and Teradata; unfortunately not in MySQL. But you can emulate PERCENTILE_DISC in MySQL 8 as follows:
SELECT DISTINCT first_value(my_column) OVER (
  ORDER BY CASE WHEN p <= 0.9 THEN p END DESC /* NULLS LAST */
) x
FROM (
  SELECT
    my_column,
    percent_rank() OVER (ORDER BY my_column) p
  FROM my_table
) t;
This calculates the PERCENT_RANK for each row given your my_column ordering, and then finds the last row whose percent rank is less than or equal to the 0.9 percentile.
This only works on MySQL 8+, which has window function support.
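For reference, the standard syntax being emulated is an ordered-set aggregate (this form works in PostgreSQL and Oracle; SQL Server only exposes PERCENTILE_DISC as a window function):
SELECT PERCENTILE_DISC(0.9) WITHIN GROUP (ORDER BY my_column) FROM my_table;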
I was trying to solve this for quite some time, and then I found the following answer. Honestly brilliant, and quite fast even for big tables: the table where I used it contained roughly 5 million records, and the query needed only a couple of seconds.
SELECT
CAST(SUBSTRING_INDEX(SUBSTRING_INDEX( GROUP_CONCAT(field_name ORDER BY
field_name SEPARATOR ','), ',', 95/100 * COUNT(*) + 1), ',', -1) AS DECIMAL)
AS `95th_Per`
FROM table_name;
As you can imagine, just replace table_name and field_name with your table's and column's names.
For further information, check Roland Bouman's original post.
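One caveat worth keeping in mind: GROUP_CONCAT silently truncates its result at group_concat_max_len (only 1024 bytes by default), which would quietly skew the percentile on large tables. Raise the limit for the session before running the query:
SET SESSION group_concat_max_len = 1000000; -- default is 1024 bytes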
In MySQL 8 there is the ntile window function you can use:
SELECT SomeTable.ID, SomeTable.Round
FROM SomeTable
JOIN (
SELECT SomeTable.ID, NTILE(100) OVER w AS Percentile
FROM SomeTable
WINDOW w AS (ORDER BY Round)
) AS SomeTablePercentile ON SomeTable.ID = SomeTablePercentile.ID
WHERE Percentile = 90
LIMIT 1
https://dev.mysql.com/doc/refman/8.0/en/window-function-descriptions.html#function_ntile
http://www.artfulsoftware.com/infotree/queries.php#68
SELECT
a.film_id ,
ROUND( 100.0 * ( SELECT COUNT(*) FROM film AS b WHERE b.length <= a.length ) / total.cnt, 1 )
AS percentile
FROM film a
CROSS JOIN (
SELECT COUNT(*) AS cnt
FROM film
) AS total
ORDER BY percentile DESC;
This can be slow for very large tables.
As per Tony_Pets' answer, but as I noted on a similar question: I had to change the calculation slightly. For the 90th percentile, for example, use "90/100 * COUNT(*) + 0.5" instead of "90/100 * COUNT(*) + 1". With "+ 1" it sometimes skipped two values past the percentile point in the ordered list instead of picking the next higher value, possibly because of how integer rounding works in MySQL.
i.e.:
.... SUBSTRING_INDEX(SUBSTRING_INDEX( GROUP_CONCAT(fieldValue ORDER BY fieldValue SEPARATOR ','), ',', 90/100 * COUNT(*) + 0.5), ',', -1) AS `90thPercentile` ....
The most common definition of a percentile is a number where a certain percentage of scores fall below that number. You might know that you scored 67 out of 90 on a test. But that figure has no real meaning unless you know what percentile you fall into. If you know that your score is in the 95th percentile, that means you scored better than 95% of people who took the test.
This solution also works with the older MySQL 5.7.
SELECT *, @row_num as numRows, 100 - (row_num * 100/(@row_num + 1)) as percentile
FROM (
select *, @row_num := @row_num + 1 AS row_num
from (
SELECT t.subject, pt.score, p.name
FROM test t, person_test pt, person p, (
SELECT @row_num := 0
) counter
where t.id=pt.test_id
and p.id=pt.person_id
ORDER BY score desc
) temp
) temp2
-- optional: filter on a minimal percentile (uncomment below)
-- having percentile >= 80
An alternative solution that works in MySQL 8: generate a histogram of your data:
ANALYZE TABLE my_table UPDATE HISTOGRAM ON my_column WITH 100 BUCKETS;
And then just select the 95th record from information_schema.column_statistics:
SELECT v,c FROM information_schema.column_statistics, JSON_TABLE(histogram->'$.buckets',
'$[*]' COLUMNS(v VARCHAR(60) PATH '$[0]', c double PATH '$[1]')) hist
WHERE column_name='my_column' LIMIT 95,1
And voila! You will still need to decide whether to take the lower or upper limit of the percentile, or perhaps take an average, but that is a small task now. Most importantly, this is very quick once the histogram object is built.
Credit for this solution: lefred's blog.

How do I get the aggregate of last n values in SQL using lead/lag

I am looking at aggregating over the last three rows (i.e., finding the max value of the column over the last 3 rows). Is there a way to do this using LAG and MAX together? I was able to achieve it by creating a function and using it, but that is not efficient. What is the better way?
select symbol, td_timestamp, open, vol_range, fn_getmaxvalue(high, hihi, hihi2) as highest
from
(select symbol, td_timestamp, open, high, low,
volume-lag(volume,2) over (partition by symbol order by td_timestamp ) vol_chg,
lag(high,1) over (partition by symbol order by td_timestamp ) hihi,
lag(high,2) over (partition by symbol order by td_timestamp ) hihi2
from tb_nfbnf where trade_date='2020-02-28' and processed_flg is null
order by symbol, td_timestamp)a
If I understand correctly, you want a running max. That would use a window frame clause in the window function:
select t.*,
max(high) over (partition by symbol
order by td_timestamp
rows between 2 preceding and current row
) as max_hi_3rows
from tb_nfbnf t
where trade_date = '2020-02-28' and processed_flg is null
order by symbol, td_timestamp;
(Note: the frame clause is written rows between 2 preceding and current row, not 2 rows preceding.)
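If you specifically want the LAG formulation from the question, here is a sketch that replaces the custom function with GREATEST (assuming MySQL 8 or PostgreSQL for the WINDOW clause; the COALESCE guards matter in MySQL, where GREATEST returns NULL if any argument is NULL):
select symbol, td_timestamp, open,
       greatest(high,
                coalesce(lag(high, 1) over w, high),
                coalesce(lag(high, 2) over w, high)) as highest
from tb_nfbnf
where trade_date = '2020-02-28' and processed_flg is null
window w as (partition by symbol order by td_timestamp)
order by symbol, td_timestamp;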

Query Database Accurately Based on Timestamp

I am currently having an accuracy issue when querying price vs. time in a Google BigQuery dataset. What I would like is the price of an asset every five minutes, yet some assets have an empty row at the exact minute.
For example, with VEN vs ICX, which are two cryptocurrencies, there might be a time at which price data is not available for a specific second. In my query I ask the database for a row every 300 seconds and take the price data, yet some assets don't have a timestamp at exactly 5 minutes and 0 seconds. In that case I would like to get the last known price: a good price to use would be the one at 4 minutes and 58 seconds.
My query right now is:
SELECT MIN(price) AS PRICE, timestamp
FROM [coin_data]
WHERE coin="BTCUSD" AND TIMESTAMP_TO_SEC(timestamp) % 300 = 0
GROUP BY timestamp
ORDER BY timestamp ASC
This query results in this sort of gap in specific places:
Row((10339.25, datetime.datetime(2018, 2, 26, 21, 55, tzinfo=<UTC>)))
Row((10354.62, datetime.datetime(2018, 2, 26, 22, 0, tzinfo=<UTC>)))
Row((10320.0, datetime.datetime(2018, 2, 26, 22, 10[should be 5 for 5 min], tzinfo=<UTC>)))
The minutes place in the last row should read 5, not 10.
In order to select the row that has a 5-minute mark/timestamp if it exists, or the closest existing entry otherwise, you can use (analytic) window functions (OVER()) instead of aggregate functions (GROUP BY), as follows:
group all rows into separate 5-minute groups
sort them by proximity to the desired time
select the first row from each partition.
Here the OVER clause creates the window frames and sorts the rows within them; RANK() then numbers the rows in each window frame in that order.
Standard SQL
WITH data AS (
  SELECT *,
    CAST(FLOOR(UNIX_SECONDS(timestamp)/300) AS INT64) AS timegroup
  FROM `coin_data`
)
SELECT MIN(price) AS min_price, timestamp
FROM (
  SELECT *, RANK() OVER(PARTITION BY timegroup ORDER BY timestamp ASC) AS rank
  FROM data
)
WHERE rank = 1
GROUP BY timestamp
ORDER BY timestamp ASC
Legacy SQL
SELECT MIN(price) AS min_price, timestamp
FROM (
SELECT *,
RANK() OVER(PARTITION BY timegroup ORDER BY timestamp ASC) AS rank,
FROM (
SELECT *,
INTEGER(FLOOR(TIMESTAMP_TO_SEC(timestamp)/300)) AS timegroup
FROM [coin_data]) AS data )
WHERE rank = 1
GROUP BY timestamp
ORDER BY timestamp ASC
It seems that you have many prices for the same timestamp, in which case you may want to add another field to the OVER clause:
OVER(PARTITION BY timegroup, exchange ORDER BY timestamp ASC)
Notes:
Consider migrating to Standard SQL, which is the preferred SQL dialect for querying data stored in BigQuery. You can do that on a single-query basis, so you don't have to migrate everything at the same time.
My idea was to provide a general query that illustrates the principle, so I don't filter for empty rows; it's not clear whether they are null or empty strings, and it isn't really necessary for the answer.

SQL: get only sampled data from large dataset

So I get a large amount of data from the server using this SQL:
SELECT value, DATE_FORMAT(`time`,'%Y-%m-%dT%H:%i:%sZ') AS `time`
FROM history WHERE reference = :id AND
(time BETWEEN :start AND :end) ORDER BY time LIMIT 100
The limit is set to a fixed 100 entries, but in the given time range there could be 5,000 entries.
Here's my goal: I want to sample these entries by the time between each entry.
So, for example, if the interval between entries is 60 seconds (let's say it is a parameter), then I will receive 100 entries (out of the 5,000), but there will always be a one-minute difference between consecutive ones.
E.g.
value1,14:40:40
value2,14:41:40
...
value100,16:20:40
Is this doable via SQL, or do I have to parse through this large dataset with PHP?
If it is not doable with SQL alone, is it possible to get 100 entries equally spread across the 5,000 (so not by time; I'd get the fixed entries id1, id50, id100, id150, ..., id5000)? Again, just with SQL.
Thanks!
Just as Kristof says in his answer: order the rows and take each nth row by applying a row number. This is how it is done in MySQL:
select
rows.value,
date_format(rows.`time`,'%Y-%m-%dT%H:%i:%sZ') AS `time`
from
(
select
@row_number := @row_number + 1 as row_number,
history.*
from history
cross join (select @row_number := 0) as t
where reference = :id and `time` between :start and :end
order by `time`
) as rows
cross join
(
select count(*) as cnt
from history
where reference = :id and `time` between :start and :end
) as rowcount
where mod(rows.row_number - 1, ceil(rowcount.cnt / 100)) = 0;
And this is how the same would look in another dbms, Oracle for instance, using analytic functions:
select
rows.value,
to_char(rows."time",'yyyy-mm-dd hh24:mi:ss') AS "time"
from
(
select
row_number() over (order by "time") as rown,
count(*) over () as cnt,
history.*
from history
where reference = :id and "time" between :start and :end
) rows
where mod(rows.rown - 1, ceil(rows.cnt / 100)) = 0;
These queries return 100 records or slightly fewer, depending on exactly how many rows the table contains. You can also use TRUNCATE(rowcount.cnt / 100, 0) instead of CEIL(rowcount.cnt / 100) in MySQL, thus getting a hundred rows or slightly more, and additionally apply LIMIT 100 to get exactly 100 rows (provided there are at least 100 rows in the table).
What you could do is compute a row number and take the modulo of it.
Not sure how it would be done in MySQL, but T-SQL goes like this (the ROW_NUMBER expression has to live in a subquery, because a window function cannot be referenced in WHERE directly):
SELECT *
FROM (
  SELECT ROW_NUMBER() OVER (ORDER BY idField) % 50 AS selector, *
  FROM history
) AS numbered
WHERE selector = 1
This counts the rows and wraps the counter at every 50th record, giving you a spread-out result.

Calculating the Median with Mysql

I'm having trouble with calculating the median of a list of values, not the average.
I found this article
Simple way to calculate median with MySQL
It references the following query, which I don't understand properly:
SELECT x.val from data x, data y
GROUP BY x.val
HAVING SUM(SIGN(1-SIGN(y.val-x.val))) = (COUNT(*)+1)/2
If I have a time column and I want to calculate the median value, what do the x and y columns refer to?
I propose a faster way.
Get the row count:
SELECT CEIL(COUNT(*)/2) FROM data;
Then take the middle value in a sorted subquery, substituting the number returned by the first query (MySQL does not allow a variable in LIMIT outside stored procedures):
SELECT MAX(val) FROM (SELECT val FROM data ORDER BY val LIMIT <middlevalue>) x;
I tested this with a 5x10e6 dataset of random numbers and it will find the median in under 10 seconds.
This will find an arbitrary percentile if you replace the COUNT(*)/2 with COUNT(*)*n, where n is the percentile (.5 for the median, .75 for the 75th percentile, etc.).
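For instance, for the 90th percentile the two steps become (using this answer's 5-million-row test set, where CEIL(5000000 * .90) = 4500000):
SELECT CEIL(COUNT(*) * .90) FROM data;
SELECT MAX(val) FROM (SELECT val FROM data ORDER BY val LIMIT 4500000) x;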
val is your time column, and x and y are two references to the data table (you can write data AS x, data AS y). For each candidate value x.val, the HAVING clause counts how many values y.val are less than or equal to it (SIGN(1-SIGN(y.val-x.val)) is 1 exactly when y.val <= x.val, and 0 otherwise) and keeps the row where that count lands on the middle position.
EDIT:
To avoid computing your sums twice, you can store the intermediate results.
CREATE TEMPORARY TABLE average_user_total_time
(SELECT SUM(time) AS time_taken
FROM scores
WHERE created_at >= '2010-10-10'
and created_at <= '2010-11-11'
GROUP BY user_id);
Then you can compute the median over these values, which now live in a named table.
EDIT: A temporary table won't work here. You could try using a regular table with the MEMORY table type, or simply repeat the subquery that computes the values twice in your query. Apart from this, I don't see another solution. That doesn't mean there isn't a better way; maybe somebody else will come up with an idea.
First try to understand what the median is: it is the middle value in the sorted list of values.
Once you understand that, the approach is two steps:
sort the values in either order
pick the middle value (if not an odd number of values, pick the average of the two middle values)
Example:
Median of 0 1 3 7 9 10: 5 (because (7+3)/2=5)
Median of 0 1 3 7 9 10 11: 7 (because 7 is the middle value)
So, to sort dates you need a numerical value: take their timestamps (seconds elapsed since the epoch) and apply the definition of the median.
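A sketch of that approach in MySQL (the table events and its DATETIME column created_at are hypothetical names; the CEIL/FLOOR pair picks the middle row, or the two middle rows whose timestamps get averaged when the count is even):
SET @r := 0;
SELECT FROM_UNIXTIME(AVG(ts)) AS median_time
FROM (
  SELECT (@r := @r + 1) AS rn,
         UNIX_TIMESTAMP(created_at) AS ts  -- seconds since the epoch
  FROM events
  ORDER BY created_at
) ranked
WHERE rn = (SELECT CEIL(COUNT(*) / 2) FROM events)
   OR rn = (SELECT FLOOR(COUNT(*) / 2 + 1) FROM events);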
Finding the median in MySQL using GROUP_CONCAT
Query:
SELECT
IF(count % 2 = 1,
   SUBSTRING_INDEX(SUBSTRING_INDEX(data_str, ",", pos), ",", -1),
   (SUBSTRING_INDEX(SUBSTRING_INDEX(data_str, ",", pos), ",", -1)
    + SUBSTRING_INDEX(SUBSTRING_INDEX(data_str, ",", pos + 1), ",", -1)) / 2)
as median
FROM (SELECT GROUP_CONCAT(val ORDER BY val) data_str,
      CEILING(COUNT(*)/2) pos,
      COUNT(*) as count FROM data) temp;
Explanation:
Sorting is done using ORDER BY inside the GROUP_CONCAT function.
The position (pos) and the total number of elements (count) are determined; using CEILING for the position lets us use SUBSTRING_INDEX in the steps below.
Based on count, it is decided whether there is an even or odd number of values.
Odd count: directly pick the element at pos using SUBSTRING_INDEX.
Even count: pick the elements at pos and pos+1, then add them and divide by 2 to get the median.
If you have a table R with a column named A, and you want the median of A, you can do as follows:
SELECT A FROM R R1
WHERE ( SELECT COUNT(A) FROM R R2 WHERE R2.A < R1.A ) = ( SELECT COUNT(A) FROM R R3 WHERE R3.A > R1.A )
Note: This only works if there are no duplicate values in A and the row count is odd (otherwise no row has equally many values above and below it). Null values are not allowed either.
The simplest way my friend and I have found; enjoy!
SELECT COUNT(*) INTO @c FROM station;
SELECT ROUND((@c + 1)/2) INTO @final;
SELECT ROUND(lat_n, 4) FROM station a WHERE @final - 1 = (SELECT COUNT(lat_n) FROM station b WHERE b.lat_n > a.lat_n);
Here is a solution that is easy to understand. Just replace Your_Column and Your_Table as per your requirement.
SET @r = 0;
SELECT AVG(Your_Column)
FROM (SELECT (@r := @r + 1) AS r, Your_Column FROM Your_Table ORDER BY Your_Column) Temp
WHERE
r = (SELECT CEIL(COUNT(*) / 2) FROM Your_Table) OR
r = (SELECT FLOOR((COUNT(*) / 2) + 1) FROM Your_Table)
Originally adapted from this thread.