SQL to club records in sequence - MySQL

I have data in a MySQL table that looks like this:
Key, value
A 1
A 2
A 3
A 6
A 7
A 8
A 9
B 1
B 2
and I want to group it based on continuous sequences. The data is sorted in the table. The expected output:
Key, min, max
A 1 3
A 6 9
B 1 2
I tried googling it but couldn't find any solution. Can someone please help me with this?

This is way easier with a modern DBMS that supports window functions, but you can find the upper bounds by checking that there is no successor. In the same way you can find the lower bounds via the absence of a predecessor. Combining each lower bound with its lowest upper bound gives the intervals.
select low.keyx, low.valx, min(high.valx)
from (
    select t1.keyx, t1.valx
    from t t1
    where not exists (
        select 1
        from t t2
        where t1.keyx = t2.keyx
          and t1.valx = t2.valx + 1
    )
) as low
join (
    select t3.keyx, t3.valx
    from t t3
    where not exists (
        select 1
        from t t4
        where t3.keyx = t4.keyx
          and t3.valx = t4.valx - 1
    )
) as high
    on low.keyx = high.keyx
    and low.valx <= high.valx
group by low.keyx, low.valx;
I changed your identifiers since value is a reserved word.
Using a window function is way more compact and efficient. If at all possible, consider upgrading to MySQL 8+; it is superior to 5.7 in many respects.
We can create a group key by taking the difference between valx and an enumeration of the values: whenever there is a gap, the difference increases. Then we simply pick the min and max for each group:
select keyx, min(valx), max(valx)
from (
select keyx, valx
, valx - row_number() over (partition by keyx order by valx) as grp
from t
) as tt
group by keyx, grp;
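To see what grp does, here is the computation for key A from the sample data above:

valx  row_number  grp
1     1           0
2     2           0
3     3           0
6     4           2
7     5           2
8     6           2
9     7           2

The two grp values (0 and 2) give exactly the groups (A, 1, 3) and (A, 6, 9).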

Related

Aggregating row values in MySQL or Snowflake

I would like to calculate the std dev, min and max of the mer_data array into 3 other fields called std_dev, min_mer and max_mer, grouped by mac and timestamp.
This needs to be done without flattening the data as each mer_data row consists of 4000 float values and multiplying that with 700k rows gives a very high dimensional table.
The mer_data field is currently saved as varchar(30000); maybe JSON format might help, I'm not sure.
This can be done in Snowflake or MySQL.
Also, the query needs to be optimized so that it does not take much computation time.
While you don't want to split the data up, you will need to if you want to do it in pure SQL. Snowflake has no problems with such aggregations.
WITH fake_data(mac, mer_data) AS (
SELECT * FROM VALUES
('abc','43,44.25,44.5,42.75,44,44.25,42.75,43'),
('def','32.75,33.25,34.25,34.5,32.75,34,34.25,32.75,43')
)
SELECT f.mac,
avg(d.value::float) as avg_dev,
stddev(d.value::float) as std_dev,
MIN(d.value::float) as MIN_MER,
Max(d.value::float) as Max_MER
FROM fake_data f, table(split_to_table(f.mer_data,',')) d
GROUP BY 1
ORDER BY 1;
I would, however, discourage the use of strings in the grouping process, so I would break it apart like so:
WITH fake_data(mac, mer_data, timestamp) AS (
SELECT * FROM VALUES
('abc','43,44.25,44.5,42.75,44,44.25,42.75,43', '01-01-22'),
('def','32.75,33.25,34.25,34.5,32.75,34,34.25,32.75,43', '02-01-22')
), boost_data AS (
SELECT seq8() as seq, *
FROM fake_data
), math_step AS (
SELECT f.seq,
avg(d.value::float) as avg_dev,
stddev(d.value::float) as std_dev,
MIN(d.value::float) as MIN_MER,
Max(d.value::float) as Max_MER
FROM boost_data f, table(split_to_table(f.mer_data,',')) d
GROUP BY 1
)
SELECT b.mac,
m.avg_dev,
m.std_dev,
m.MIN_MER,
m.Max_MER,
b.timestamp
FROM boost_data b
JOIN math_step m
ON b.seq = m.seq
ORDER BY 1;
MAC | AVG_DEV      | STD_DEV      | MIN_MER | MAX_MER | TIMESTAMP
abc | 43.5625      | 0.7529703087 | 42.75   | 44.5    | 01-01-22
def | 34.611111111 | 3.226141056  | 32.75   | 43      | 02-01-22
Performance testing: using this SQL to make 70K rows of 4000 values each:
create table fake_data_tab AS
WITH cte_a AS (
SELECT SEQ8() as s
FROM TABLE(GENERATOR(ROWCOUNT =>70000))
), cte_b AS (
SELECT a.s, uniform(20::float, 50::float, random()) as v
FROM TABLE(GENERATOR(ROWCOUNT =>4000))
CROSS JOIN cte_a a
)
SELECT s::text as mac
,LISTAGG(v,',') AS mer_data
,dateadd(day,s,'2020-01-01')::date as timestamp
FROM cte_b
GROUP BY 1,3;
This takes 79 seconds on an XTRA_SMALL warehouse. Now with that we can test the two solutions:
The second set of code (group by numbers, with a join):
WITH boost_data AS (
SELECT seq8() as seq, *
FROM fake_data_tab
), math_step AS (
SELECT f.seq,
avg(d.value::float) as avg_dev,
stddev(d.value::float) as std_dev,
MIN(d.value::float) as MIN_MER,
Max(d.value::float) as Max_MER
FROM boost_data f, table(split_to_table(f.mer_data,',')) d
GROUP BY 1
)
SELECT b.mac,
m.avg_dev,
m.std_dev,
m.MIN_MER,
m.Max_MER,
b.timestamp
FROM boost_data b
JOIN math_step m
ON b.seq = m.seq
ORDER BY 1;
takes 1m47s
The original query (group by strings/dates):
SELECT f.mac,
avg(d.value::float) as avg_dev,
stddev(d.value::float) as std_dev,
MIN(d.value::float) as MIN_MER,
Max(d.value::float) as Max_MER,
f.timestamp
FROM fake_data_tab f, table(split_to_table(f.mer_data,',')) d
GROUP BY 1,6
ORDER BY 1;
takes 1m46s
Hmm, so leaving the "mac" as a number made the code very fast (~3s), while dealing with strings either way increased the data processed from 150MB (numbers) to 1.5GB (strings).
If the numbers were in rows, not packed together like that, we could discuss how to do it in SQL.
In rows, GROUP_CONCAT(...) can construct a commalist like you show, and MIN(), STDDEV(), etc. can do the other stuff.
If you continue to have the commalist, then do the rest of the work in your application programming language. (It is very ugly to have SQL pick apart an array.)
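A minimal sketch of that row-based approach, assuming a hypothetical normalized table mer_values(mac, ts, val) holding one value per row:

SELECT mac,
       ts,
       GROUP_CONCAT(val ORDER BY val) AS mer_data,  -- rebuild the commalist
       STDDEV(val) AS std_dev,
       MIN(val)    AS min_mer,
       MAX(val)    AS max_mer
FROM mer_values
GROUP BY mac, ts;

Note that GROUP_CONCAT output is capped by group_concat_max_len (default 1024 bytes), which would need raising to hold 4000 values per group.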

query optimization for mysql

I have the following query which takes about 28 seconds on my machine. I would like to optimize it and know if there is any way to make it faster by creating some indexes.
select rr1.person_id as person_id, rr1.t1_value, rr2.t0_value
from (
    select r1.person_id, avg(r1.avg_normalized_value1) as t1_value
    from (
        select ma1.person_id, mn1.store_name, avg(mn1.normalized_value) as avg_normalized_value1
        from matrix_report1 ma1, matrix_normalized_notes mn1
        where ma1.final_value = 1
          and (mn1.normalized_value != 0.2 and mn1.normalized_value != 0.0)
          and ma1.user_id = mn1.user_id
          and ma1.request_id = mn1.request_id
          and ma1.request_id = 4
        group by ma1.person_id, mn1.store_name
    ) r1
    group by r1.person_id
) rr1,
(
    select r2.person_id, avg(r2.avg_normalized_value) as t0_value
    from (
        select ma.person_id, mn.store_name, avg(mn.normalized_value) as avg_normalized_value
        from matrix_report1 ma, matrix_normalized_notes mn
        where ma.final_value = 0
          and (mn.normalized_value != 0.2 and mn.normalized_value != 0.0)
          and ma.user_id = mn.user_id
          and ma.request_id = mn.request_id
          and ma.request_id = 4
        group by ma.person_id, mn.store_name
    ) r2
    group by r2.person_id
) rr2
where rr1.person_id = rr2.person_id
Basically, it aggregates data depending on the request_id and final_value (0 or 1). Is there a way to simplify it for optimization? And it would be nice to know which columns should be indexed. I created an index on user_id and request_id, but it doesn't help much.
There are about 4907424 rows in matrix_report1 and 335740 rows in the matrix_normalized_notes table. These tables will grow as we have more requests.
First, the others are right about formatting your samples better. Explaining in plain language what you are trying to do also helps, and sample data with expected results is even better.
However, that said, I think it can be significantly simplified. Your queries are almost completely identical with the exception of the one field "final_value" = 1 or 0 respectively. Since each query will result in 1 record per "person_id", you can just do the average based on a CASE/WHEN and remove the rest.
To help optimize the query, your matrix_report1 table should have an index on ( request_id, final_value, user_id ). Your matrix_normalized_notes table should have an index on ( request_id, user_id, store_name, normalized_value ).
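For example, that could look like this (the index names are just placeholders):

ALTER TABLE matrix_report1
  ADD INDEX ix_report1_req_final_user (request_id, final_value, user_id);
ALTER TABLE matrix_normalized_notes
  ADD INDEX ix_notes_req_user_store_val (request_id, user_id, store_name, normalized_value);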
Since your outer query is averaging per-store averages, you do need to keep it nested. The following should help.
SELECT
r1.person_id,
avg(r1.ANV1) as t1_value,
avg(r1.ANV0) as t0_value
from
( select
ma1.person_id,
mn1.store_name,
avg( case when ma1.final_value = 1
then mn1.normalized_value end ) as ANV1,
avg( case when ma1.final_value = 0
then mn1.normalized_value end ) as ANV0
from
matrix_report1 ma1
JOIN matrix_normalized_notes mn1
ON ma1.request_id = mn1.request_id
AND ma1.user_id = mn1.user_id
AND NOT mn1.normalized_value in ( 0.0, 0.2 )
where
ma1.request_id = 4
AND ma1.final_Value in ( 0, 1 )
group by
ma1.person_id,
mn1.store_name) r1
group by
r1.person_id
Notice the inner query is pulling all transactions for the final value as either a zero OR one. But then, the AVG is based on a case/when of the respective value for the normalized value. When the condition is NOT the 1 or 0 respectively, the result is NULL and is thus not considered when the average is computed.
So at this point, it is grouped on a per-person basis already with each store and Avg1 and Avg0 already set. Now, roll these values up directly per person regardless of the store. Again, NULL values should not be considered as part of the average computation. So, if Store "A" doesn't have a value in the Avg1, it should not skew the results. Similarly if Store "B" doesn't have a value in the Avg0 result.
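A quick way to convince yourself that AVG() ignores NULLs:

SELECT AVG(v)  -- returns 15.0000, not 10: the NULL row is skipped
FROM (SELECT 10 AS v UNION ALL SELECT NULL UNION ALL SELECT 20) AS t;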

how to search for a given sequence of rows within a table in SQL Server 2008

The problem:
We have a number of entries within a table but we are only interested in the ones that appear in a given sequence. For example, we are looking for three specific "GFTitle" entries ('Pearson Grafton', 'Woolworths (P and O)', 'QRX - Brisbane'), however they have to appear in a particular order to be considered a valid route (see the sample data below).
RowNum GFTitle
------------------------------
1 Pearson Grafton
2 Woolworths (P and O)
3 QRX - Brisbane
4 Pearson Grafton
5 Woolworths (P and O)
6 Pearson Grafton
7 QRX - Brisbane
8 Pearson Grafton
9 Pearson Grafton
So rows (1,2,3) satisfy this rule but rows (4,5,6) don't even though the first two entries (4,5) do.
I am sure there is a way to do this via CTEs but some help would be great.
Cheers
This is quite simple even with good old tools :-) Try this quick-and-dirty solution, assuming your table is named GFTitles and RowNum values are sequential:
SELECT a.[RowNum]
      ,a.[GFTitle]
      ,b.[GFTitle]
      ,c.[GFTitle]
FROM [dbo].[GFTitles] as a
join [dbo].[GFTitles] as b on b.[RowNum] = a.[RowNum] + 1
join [dbo].[GFTitles] as c on c.[RowNum] = a.[RowNum] + 2
WHERE a.[GFTitle] = 'Pearson Grafton' and
      b.[GFTitle] = 'Woolworths (P and O)' and
      c.[GFTitle] = 'QRX - Brisbane'
Assuming RowNum has neither duplicates nor gaps, you could try the following method.
1. Assign row numbers to the sought sequence's items and join the row set to your table on GFTitle.
2. For every match, calculate the difference between your table's row number and that of the sequence. If there's a matching sequence in your table, the corresponding rows' RowNum differences will be identical.
3. Count the rows per difference and return only those where the count matches the number of sequence items.
Here's a query that implements the above logic:
WITH SoughtSequence AS (
SELECT *
FROM (
VALUES
(1, 'Pearson Grafton'),
(2, 'Woolworths (P and O)'),
(3, 'QRX - Brisbane')
) x (RowNum, GFTitle)
)
, joined AS (
SELECT
t.*,
SequenceLength = COUNT(*) OVER (PARTITION BY t.RowNum - ss.RowNum)
FROM atable t
INNER JOIN SoughtSequence ss
ON t.GFTitle = ss.GFTitle
)
SELECT
RowNum,
GFTitle
FROM joined
WHERE SequenceLength = (SELECT COUNT(*) FROM SoughtSequence)
;

MySQL ranking in presence of indexes using variables

Using the classic trick of @N := @N + 1 to get the rank of items on some ordered column. Now, before ordering, I need to filter out some values from the base table by inner joining it with some other table. So the query looks like this:
SET @N = 0;
SELECT
@N := @N + 1 AS rank,
fa.id,
fa.val
FROM
table1 AS fa
INNER JOIN table2 AS em
ON em.id = fa.id
AND em.type = "A"
ORDER BY fa.val ;
The issue: if I don't have an index on em.type, everything works fine, but if I put an index on em.type, all hell breaks loose and the rank values, instead of following the ordering on the val column, come in the order the rows are stored in the em table.
Here are sample outputs.
Without index:
rank id val
1 05F8C7 55050.000000
2 05HJDG 51404.733458
3 05TK1Z 46972.008208
4 05F2TR 46900.000000
5 05F349 44433.412847
6 06C2BT 43750.000000
7 0012X3 42000.000000
8 05MMPK 39430.399658
9 05MLW5 39054.046383
10 062D20 35550.000000
With index:
rank id val
480 05F8C7 55050.000000
629 05HJDG 51404.733458
1603 05TK1Z 46972.008208
466 05F2TR 46900.000000
467 05F349 44433.412847
3534 06C2BT 43750.000000
15 0012X3 42000.000000
1109 05MMPK 39430.399658
1087 05MLW5 39054.046383
2544 062D20 35550.000000
I believe the use of indexes should be completely transparent and outputs should not be affected by it. Is this a bug in MySQL?
This "trick" was a bomb waiting to explode. A clever optimizer will evaluate a query as it sees fits, optimizing for speed - that's why it's called optimizer. I don't think this use of MySQL variables was documented to work as you expect it to work, but it was working.
Was working, up until recent improvements on the MariaDB optimizer. It will probably break as well in the mainstream MySQL as there are several improvements on the optimizer in the (yet to be released, still beta) 5.6 version.
What you can do (until MySQL implemented window functions) is to use a self-join and a grouping. Results will be consistent, no matter what future improvements are done in the optimizer. Downside is that that it may not be very efficient:
SELECT
COUNT(*) AS rank,
fa.id,
fa.val
FROM
table1 AS fa
INNER JOIN table2 AS em
ON em.id = fa.id
AND em.type = 'A'
INNER JOIN
table1 AS fa2
INNER JOIN table2 AS em2
ON em2.id = fa2.id
AND em2.type = 'A'
ON fa2.id <= fa.id
-- assuming that `id` is the Primary Key of the table
GROUP BY fa.id
ORDER BY fa.val ;
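For reference, once you are on a version with window functions (MySQL 8+), the same ranking becomes trivial; a sketch, using the same tables as above (note that `rank` needs quoting there, since RANK became a reserved word):

SELECT
    ROW_NUMBER() OVER (ORDER BY fa.val) AS `rank`,
    fa.id,
    fa.val
FROM
    table1 AS fa
INNER JOIN table2 AS em
    ON em.id = fa.id
    AND em.type = 'A'
ORDER BY fa.val;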

How can I return the numerical boxplot data of all results using 1 MySQL query?

[tbl_votes]
- id <!-- unique id of the vote -->
- item_id <!-- vote belongs to item <id> -->
- vote <!-- number 1-10 -->
Of course we can do this by getting:
the smallest observation (so)
the lower quartile (lq)
the median (me)
the upper quartile (uq)
and the largest observation (lo)
...one by one using multiple queries, but I am wondering if it can be done with a single query.
In Oracle I could use COUNT OVER and RATIO_TO_REPORT, but these are not supported in MySQL.
For those who don't know what a boxplot is: http://en.wikipedia.org/wiki/Box_plot
Any help would be appreciated.
I've found a solution in PostgreSQL using PL/Python.
However, I leave the question open in case someone else comes up with a solution in MySQL.
CREATE TYPE boxplot_values AS (
min numeric,
q1 numeric,
median numeric,
q3 numeric,
max numeric
);
CREATE OR REPLACE FUNCTION _final_boxplot(strarr numeric[])
RETURNS boxplot_values AS
$$
# the array arrives as a string like '{1,2,3}'; turn it into a Python list
x = strarr.replace("{","[").replace("}","]")
a = eval(str(x))
a.sort()
i = len(a)
# min, lower quartile, median, upper quartile, max
return ( a[0], a[i/4], a[i/2], a[i*3/4], a[-1] )
$$
LANGUAGE 'plpythonu' IMMUTABLE;
CREATE AGGREGATE boxplot(numeric) (
SFUNC=array_append,
STYPE=numeric[],
FINALFUNC=_final_boxplot,
INITCOND='{}'
);
Example:
SELECT customer_id as cid, (boxplot(price)).*
FROM orders
GROUP BY customer_id;
cid | min | q1 | median | q3 | max
-------+---------+---------+---------+---------+---------
1001 | 7.40209 | 7.80031 | 7.9551 | 7.99059 | 7.99903
1002 | 3.44229 | 4.38172 | 4.72498 | 5.25214 | 5.98736
Source: http://www.christian-rossow.de/articles/PostgreSQL_boxplot_median_quartiles_aggregate_function.php
Here is an example of calculating the quartiles for e256 value ranges within e32 groups; an index on (e32, e256) in this case is a must:
SELECT
@group:=IF(e32=@group, e32, GREATEST(@index:=-1, e32)) as e32_,
MIN(e256) as so,
MAX(IF(lq_i=(@index:=@index+1), e256, NULL)) as lq,
MAX(IF(me_i=@index, e256, NULL)) as me,
MAX(IF(uq_i=@index, e256, NULL)) as uq,
MAX(e256) as lo
FROM (SELECT @index:=NULL, @group:=NULL) as init, test t
JOIN (
SELECT e32,
COUNT(*) as cnt,
(COUNT(*) div 4) as lq_i, -- lq value index within the group
(COUNT(*) div 2) as me_i, -- me value index within the group
(COUNT(*) * 3 div 4) as uq_i -- uq value index within the group
FROM test
GROUP BY e32
) as cnts
USING (e32)
GROUP BY e32;
If there is no need for grouping, the query will be slightly simpler.
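A sketch of that ungrouped variant, relying in the same way on an index (here over e256 alone) so rows are scanned in value order:

SELECT
MIN(e256) as so,
MAX(IF(lq_i=(@index:=@index+1), e256, NULL)) as lq,
MAX(IF(me_i=@index, e256, NULL)) as me,
MAX(IF(uq_i=@index, e256, NULL)) as uq,
MAX(e256) as lo
FROM (SELECT @index:=-1) as init, test t
JOIN (
SELECT (COUNT(*) div 4) as lq_i,   -- quartile positions over the whole table
(COUNT(*) div 2) as me_i,
(COUNT(*) * 3 div 4) as uq_i
FROM test
) as cnts;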
P.S. test is my playground table of random values where e32 is the result of Python's int(random.expovariate(1.0) * 32), etc.
Well, I can do it in two queries: run the first query to get the positions of the quartiles, then use LIMIT in the second query to fetch the values at those positions.
mysql> select (select floor(count(*)/4)) as first_q,
              (select floor(count(*)/2) from customer_data) as mid_pos,
              (select floor(count(*)/4*3) from customer_data) as third_q
       from customer_data order by measure limit 1;

mysql> select min(measure),
              (select measure from customer_data order by measure limit 0,1) as firstq,
              (select measure from customer_data order by measure limit 5,1) as median,
              (select measure from customer_data order by measure limit 8,1) as last_q,
              max(measure)
       from customer_data;