Help with MySQL query aggregating counts (alternative to three sub-queries)

I am trying to output the total content views from my stats table and group them by year... My stats table is InnoDB and has 8M rows and growing...
The table is essentially ID, DATE, MAKE, IP, REFERRER (indexes on id,date,make)
Each entry has an auto-incremented ID, the entry date YYYY-MM-DD HH:MM:SS, and a product make like 'sony', 'panasonic' etc...
I am trying to write a query that does not kill my server: one that sums up the total content views per year and shows them in order from most viewed to least viewed for this year (2011), so that I can use that data to populate a JS chart comparing this year with past years. I can do this with multiple queries and by walking through arrays in PHP, but I think there should be a way to get this in one query, but hell if I can figure it out.
Any ideas? Also, am I better off making three independent queries and dealing with the results in PHP, or can I get this into one query that is more efficient for MySQL?
The output I would like to see (although I cannot seem to make it do this), is simply
MAKE       2009Total  2010Total  2011Total
---------  ---------  ---------  ---------
Panasonic        800       2345       3456
Sony             998       5346       2956
JVC             1300       1234       1944
Assume my table has data in it from 2009 to now, I need my array to contain one line per make...
Any help would be appreciated... I am amazed at how fast results like this come back from analytics tools, while mine take about 75 seconds on a 4x quad-core Xeon RAID MySQL server... this stats table is only written to once a day, to dump in the previous day's stats, so I am not sure why my 3 separate queries are so slow... hence my question... maybe a single query won't be any faster?
Anyway, any help would be appreciated and opinions about speeding up stats queries from a generic view stats table would be welcomed!

I have made an observation: your query is requesting data by year. You should do two things:
store the year
create a better index (product, entryyear)
Here is how you can do so:
CREATE TABLE stats_entry_new LIKE stats_entry;
ALTER TABLE stats_entry_new ADD COLUMN entryyear SMALLINT NOT NULL AFTER date;
ALTER TABLE stats_entry_new ADD INDEX product_year_ndx (product, entryyear);
ALTER TABLE stats_entry_new DISABLE KEYS;
INSERT INTO stats_entry_new
SELECT ID, date, YEAR(date), product, IP, REFERRER FROM stats_entry;
ALTER TABLE stats_entry_new ENABLE KEYS;
ALTER TABLE stats_entry RENAME stats_entry_old;
ALTER TABLE stats_entry_new RENAME stats_entry;
Now the query looks like this:
SELECT A.product, B.cnt "2009Total", C.cnt "2010Total", D.cnt "2011Total"
FROM
(SELECT DISTINCT product FROM stats_entry) A
INNER JOIN
(SELECT product, COUNT(1) cnt FROM stats_entry WHERE entryyear=2009 GROUP BY product) B
USING (product)
INNER JOIN
(SELECT product, COUNT(1) cnt FROM stats_entry WHERE entryyear=2010 GROUP BY product) C
USING (product)
INNER JOIN
(SELECT product, COUNT(1) cnt FROM stats_entry WHERE entryyear=2011 GROUP BY product) D
USING (product);
Now to be fair, if you do not want to add a year column to the table, then you still have to create an index:
ALTER TABLE stats_entry ADD INDEX product_date_ndx (product,date);
Your query then looks like this:
SELECT A.product, B.cnt "2009Total", C.cnt "2010Total", D.cnt "2011Total"
FROM
(SELECT DISTINCT product FROM stats_entry) A
INNER JOIN
(SELECT product, COUNT(1) cnt FROM stats_entry
WHERE date >= '2009-01-01 00:00:00'
AND date <= '2009-12-31 23:59:59'
GROUP BY product) B
USING (product)
INNER JOIN
(SELECT product, COUNT(1) cnt FROM stats_entry
WHERE date >= '2010-01-01 00:00:00'
AND date <= '2010-12-31 23:59:59'
GROUP BY product) C
USING (product)
INNER JOIN
(SELECT product, COUNT(1) cnt FROM stats_entry
WHERE date >= '2011-01-01 00:00:00'
AND date <= '2011-12-31 23:59:59'
GROUP BY product) D
USING (product);
Give it a Try !!!

SELECT make, YEAR(date) AS year, COUNT(*) AS total
FROM `stats`
GROUP BY make, YEAR(date);
Output:
MAKE       year  total
---------  ----  -----
Panasonic  2009    800
Panasonic  2010   2345
Panasonic  2011   3456
....
You can then segregate the rows on the PHP side.
Or:
select make, group_concat(cast(yr_views as char)) as year_views
from (select make, concat(year(date), ':', count(*)) as yr_views
      from `stats`
      group by make, year(date)) as make_views
group by make
Output:
make       year_views
---------  ---------------
panasonic  2009:800,2010:2345,2011:3456
...
Then explode the string at the PHP level to get the result.
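If you prefer the pivoted layout from the question directly in SQL rather than PHP, a conditional-aggregation sketch along these lines should work in MySQL (assuming the stats table from the question, with one row per content view):
-- One row per make, one column per year; SUM of a boolean counts matching rows in MySQL.
SELECT make,
       SUM(YEAR(date) = 2009) AS `2009Total`,
       SUM(YEAR(date) = 2010) AS `2010Total`,
       SUM(YEAR(date) = 2011) AS `2011Total`
FROM stats
GROUP BY make
ORDER BY `2011Total` DESC;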

Select first row in each GROUP BY group?

As the title suggests, I'd like to select the first row of each set of rows grouped with a GROUP BY.
Specifically, if I've got a purchases table that looks like this:
SELECT * FROM purchases;
My Output:
 id | customer | total
----+----------+-------
  1 | Joe      |     5
  2 | Sally    |     3
  3 | Joe      |     2
  4 | Sally    |     1
I'd like to query for the id of the largest purchase (total) made by each customer. Something like this:
SELECT FIRST(id), customer, FIRST(total)
FROM purchases
GROUP BY customer
ORDER BY total DESC;
Expected Output:
FIRST(id) | customer | FIRST(total)
----------+----------+--------------
        1 | Joe      |            5
        2 | Sally    |            3
DISTINCT ON is typically simplest and fastest for this in PostgreSQL.
(For performance optimization for certain workloads see below.)
SELECT DISTINCT ON (customer)
id, customer, total
FROM purchases
ORDER BY customer, total DESC, id;
Or shorter (if not as clear) with ordinal numbers of output columns:
SELECT DISTINCT ON (2)
id, customer, total
FROM purchases
ORDER BY 2, 3 DESC, 1;
If total can be null, add NULLS LAST:
...
ORDER BY customer, total DESC NULLS LAST, id;
Works either way, but you'll want to match existing indexes
db<>fiddle here
Major points
DISTINCT ON is a PostgreSQL extension of the standard, where only DISTINCT on the whole SELECT list is defined.
List any number of expressions in the DISTINCT ON clause, the combined row value defines duplicates. The manual:
Obviously, two rows are considered distinct if they differ in at least
one column value. Null values are considered equal in this
comparison.
DISTINCT ON can be combined with ORDER BY. Leading expressions in ORDER BY must be in the set of expressions in DISTINCT ON, but you can rearrange order among those freely. Example.
You can add additional expressions to ORDER BY to pick a particular row from each group of peers. Or, as the manual puts it:
The DISTINCT ON expression(s) must match the leftmost ORDER BY
expression(s). The ORDER BY clause will normally contain additional
expression(s) that determine the desired precedence of rows within
each DISTINCT ON group.
I added id as last item to break ties:
"Pick the row with the smallest id from each group sharing the highest total."
To order results in a way that disagrees with the sort order determining the first per group, you can nest above query in an outer query with another ORDER BY. Example.
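A minimal sketch of that nesting, using the purchases example from above (the final ORDER BY is independent of the per-group pick):
SELECT *
FROM  (
   SELECT DISTINCT ON (customer)
          id, customer, total
   FROM   purchases
   ORDER  BY customer, total DESC, id
   ) sub
ORDER BY total DESC;  -- output order, not the order that decides the winner per group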
If total can be null, you most probably want the row with the greatest non-null value. Add NULLS LAST like demonstrated. See:
Sort by column ASC, but NULL values first?
The SELECT list is not constrained by expressions in DISTINCT ON or ORDER BY in any way:
You don't have to include any of the expressions in DISTINCT ON or ORDER BY.
You can include any other expression in the SELECT list. This is instrumental for replacing complex subqueries and aggregate / window functions.
I tested with Postgres versions 8.3 – 15. But the feature has been there at least since version 7.1, so basically always.
Index
The perfect index for the above query would be a multi-column index spanning all three columns in matching sequence and with matching sort order:
CREATE INDEX purchases_3c_idx ON purchases (customer, total DESC, id);
May be too specialized. But use it if read performance for the particular query is crucial. If you have DESC NULLS LAST in the query, use the same in the index so that sort order matches and the index is perfectly applicable.
Effectiveness / Performance optimization
Weigh cost and benefit before creating tailored indexes for each query. The potential of above index largely depends on data distribution.
The index is used because it delivers pre-sorted data. In Postgres 9.2 or later the query can also benefit from an index only scan if the index is smaller than the underlying table. The index has to be scanned in its entirety, though. Example.
For few rows per customer (high cardinality in column customer), this is very efficient. Even more so if you need sorted output anyway. The benefit shrinks with a growing number of rows per customer.
Ideally, you have enough work_mem to process the involved sort step in RAM and not spill to disk. But generally setting work_mem too high can have adverse effects. Consider SET LOCAL for exceptionally big queries. Find how much you need with EXPLAIN ANALYZE. Mention of "Disk:" in the sort step indicates the need for more:
Configuration parameter work_mem in PostgreSQL on Linux
Optimize simple query using ORDER BY date and text
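For illustration, a hedged sketch of raising work_mem with SET LOCAL for one transaction only (the value is made up; determine the real need with EXPLAIN ANALYZE):
BEGIN;
SET LOCAL work_mem = '256MB';  -- affects only the current transaction
SELECT DISTINCT ON (customer)
       id, customer, total
FROM   purchases
ORDER  BY customer, total DESC, id;
COMMIT;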
For many rows per customer (low cardinality in column customer), an "index skip scan" or "loose index scan" would be (much) more efficient. But that's not implemented up to Postgres 15. Serious work to implement it one way or another has been ongoing for years now, but so far unsuccessful. See here and here.
For now, there are faster query techniques to substitute for this. In particular if you have a separate table holding unique customers, which is the typical use case. But also if you don't:
SELECT DISTINCT is slower than expected on my table in PostgreSQL
Optimize GROUP BY query to retrieve latest row per user
Optimize groupwise maximum query
Query last N related rows per row
Benchmarks
See separate answer.
On databases that support CTE and windowing functions:
WITH summary AS (
SELECT p.id,
p.customer,
p.total,
ROW_NUMBER() OVER(PARTITION BY p.customer
ORDER BY p.total DESC) AS rank
FROM PURCHASES p)
SELECT *
FROM summary
WHERE rank = 1
Supported by any database:
But you need to add logic to break ties:
SELECT MIN(x.id), -- change to MAX if you want the highest
x.customer,
x.total
FROM PURCHASES x
JOIN (SELECT p.customer,
MAX(total) AS max_total
FROM PURCHASES p
GROUP BY p.customer) y ON y.customer = x.customer
AND y.max_total = x.total
GROUP BY x.customer, x.total
Benchmarks
I tested the most interesting candidates:
Initially with Postgres 9.4 and 9.5.
Added accented tests for Postgres 13 later.
Basic test setup
Main table: purchases:
CREATE TABLE purchases (
id serial -- PK constraint added below
, customer_id int -- REFERENCES customer
, total int -- could be amount of money in Cent
, some_column text -- to make the row bigger, more realistic
);
Dummy data (with some dead tuples), PK, index:
INSERT INTO purchases (customer_id, total, some_column) -- 200k rows
SELECT (random() * 10000)::int AS customer_id -- 10k distinct customers
, (random() * random() * 100000)::int AS total
, 'note: ' || repeat('x', (random()^2 * random() * random() * 500)::int)
FROM generate_series(1,200000) g;
ALTER TABLE purchases ADD CONSTRAINT purchases_id_pkey PRIMARY KEY (id);
DELETE FROM purchases WHERE random() > 0.9; -- some dead rows
INSERT INTO purchases (customer_id, total, some_column)
SELECT (random() * 10000)::int AS customer_id -- 10k customers
, (random() * random() * 100000)::int AS total
, 'note: ' || repeat('x', (random()^2 * random() * random() * 500)::int)
FROM generate_series(1,20000) g; -- add 20k to make it ~ 200k
CREATE INDEX purchases_3c_idx ON purchases (customer_id, total DESC, id);
VACUUM ANALYZE purchases;
customer table - used for optimized query:
CREATE TABLE customer AS
SELECT customer_id, 'customer_' || customer_id AS customer
FROM purchases
GROUP BY 1
ORDER BY 1;
ALTER TABLE customer ADD CONSTRAINT customer_customer_id_pkey PRIMARY KEY (customer_id);
VACUUM ANALYZE customer;
In my second test for 9.5 I used the same setup, but with 100000 distinct customer_id to get few rows per customer_id.
Object sizes for table purchases
Basic setup: 200k rows in purchases, 10k distinct customer_id, avg. 20 rows per customer.
For Postgres 9.5 I added a 2nd test with 86446 distinct customers - avg. 2.3 rows per customer.
Generated with a query taken from here:
Measure the size of a PostgreSQL table row
Gathered for Postgres 9.5:
what | bytes/ct | bytes_pretty | bytes_per_row
-----------------------------------+----------+--------------+---------------
core_relation_size | 20496384 | 20 MB | 102
visibility_map | 0 | 0 bytes | 0
free_space_map | 24576 | 24 kB | 0
table_size_incl_toast | 20529152 | 20 MB | 102
indexes_size | 10977280 | 10 MB | 54
total_size_incl_toast_and_indexes | 31506432 | 30 MB | 157
live_rows_in_text_representation | 13729802 | 13 MB | 68
------------------------------ | | |
row_count | 200045 | |
live_tuples | 200045 | |
dead_tuples | 19955 | |
Queries
1. row_number() in CTE, (see other answer)
WITH cte AS (
SELECT id, customer_id, total
, row_number() OVER (PARTITION BY customer_id ORDER BY total DESC) AS rn
FROM purchases
)
SELECT id, customer_id, total
FROM cte
WHERE rn = 1;
2. row_number() in subquery (my optimization)
SELECT id, customer_id, total
FROM (
SELECT id, customer_id, total
, row_number() OVER (PARTITION BY customer_id ORDER BY total DESC) AS rn
FROM purchases
) sub
WHERE rn = 1;
3. DISTINCT ON (see other answer)
SELECT DISTINCT ON (customer_id)
id, customer_id, total
FROM purchases
ORDER BY customer_id, total DESC, id;
4. rCTE with LATERAL subquery (see here)
WITH RECURSIVE cte AS (
( -- parentheses required
SELECT id, customer_id, total
FROM purchases
ORDER BY customer_id, total DESC
LIMIT 1
)
UNION ALL
SELECT u.*
FROM cte c
, LATERAL (
SELECT id, customer_id, total
FROM purchases
WHERE customer_id > c.customer_id -- lateral reference
ORDER BY customer_id, total DESC
LIMIT 1
) u
)
SELECT id, customer_id, total
FROM cte
ORDER BY customer_id;
5. customer table with LATERAL (see here)
SELECT l.*
FROM customer c
, LATERAL (
SELECT id, customer_id, total
FROM purchases
WHERE customer_id = c.customer_id -- lateral reference
ORDER BY total DESC
LIMIT 1
) l;
6. array_agg() with ORDER BY (see other answer)
SELECT (array_agg(id ORDER BY total DESC))[1] AS id
, customer_id
, max(total) AS total
FROM purchases
GROUP BY customer_id;
Results
Execution time for the above queries with EXPLAIN (ANALYZE, TIMING OFF, COSTS OFF), best of 5 runs, to compare with warm cache.
All queries used an Index Only Scan on purchases_3c_idx (among other steps). Some only to benefit from the smaller size of the index, others more effectively.
A. Postgres 9.4 with 200k rows and ~ 20 per customer_id
1. 273.274 ms
2. 194.572 ms
3. 111.067 ms
4. 92.922 ms -- !
5. 37.679 ms -- winner
6. 189.495 ms
B. Same as A. with Postgres 9.5
1. 288.006 ms
2. 223.032 ms
3. 107.074 ms
4. 78.032 ms -- !
5. 33.944 ms -- winner
6. 211.540 ms
C. Same as B., but with ~ 2.3 rows per customer_id
1. 381.573 ms
2. 311.976 ms
3. 124.074 ms -- winner
4. 710.631 ms
5. 311.976 ms
6. 421.679 ms
Retest with Postgres 13 on 2021-08-11
Simplified test setup: no deleted rows, because VACUUM ANALYZE cleans the table completely for the simple case.
Important changes for Postgres:
General performance improvements.
CTEs can be inlined since Postgres 12, so query 1. and 2. now perform mostly identical (same query plan).
D. Like B. ~ 20 rows per customer_id
1. 103 ms
2. 103 ms
3. 23 ms -- winner
4. 71 ms
5. 22 ms -- winner
6. 81 ms
db<>fiddle here
E. Like C. ~ 2.3 rows per customer_id
1. 127 ms
2. 126 ms
3. 36 ms -- winner
4. 620 ms
5. 145 ms
6. 203 ms
db<>fiddle here
Accented tests with Postgres 13
1M rows, 10,000 vs. 100 vs. 1.6 rows per customer.
F. with ~ 10,000 rows per customer
1. 526 ms
2. 527 ms
3. 127 ms
4. 2 ms -- winner !
5. 1 ms -- winner !
6. 356 ms
db<>fiddle here
G. with ~ 100 rows per customer
1. 535 ms
2. 529 ms
3. 132 ms
4. 108 ms -- !
5. 71 ms -- winner
6. 376 ms
db<>fiddle here
H. with ~ 1.6 rows per customer
1. 691 ms
2. 684 ms
3. 234 ms -- winner
4. 4669 ms
5. 1089 ms
6. 1264 ms
db<>fiddle here
Conclusions
DISTINCT ON uses the index effectively and typically performs best for few rows per group. And it performs decently even with many rows per group.
For many rows per group, emulating an index skip scan with an rCTE performs best - second only to the query technique with a separate lookup table (if that's available).
The row_number() technique demonstrated in the currently accepted answer never wins any performance test. Not then, not now. It never comes even close to DISTINCT ON, not even when the data distribution is unfavorable for the latter. The only good thing about row_number(): it does not scale terribly, just mediocre.
More benchmarks
Benchmark by "ogr" with 10M rows and 60k unique "customers" on Postgres 11.5. Results are in line with what we have seen so far:
Proper way to access latest row for each individual identifier?
Original (outdated) benchmark from 2011
I ran three tests with PostgreSQL 9.1 on a real life table of 65579 rows and single-column btree indexes on each of the three columns involved and took the best execution time of 5 runs.
Comparing @OMGPonies' first query (A) to the above DISTINCT ON solution (B):
Select the whole table, results in 5958 rows in this case.
A: 567.218 ms
B: 386.673 ms
Use condition WHERE customer BETWEEN x AND y resulting in 1000 rows.
A: 249.136 ms
B: 55.111 ms
Select a single customer with WHERE customer = x.
A: 0.143 ms
B: 0.072 ms
Same test repeated with the index described in the other answer:
CREATE INDEX purchases_3c_idx ON purchases (customer, total DESC, id);
1A: 277.953 ms
1B: 193.547 ms
2A: 249.796 ms -- special index not used
2B: 28.679 ms
3A: 0.120 ms
3B: 0.048 ms
This is common greatest-n-per-group problem, which already has well tested and highly optimized solutions. Personally I prefer the left join solution by Bill Karwin (the original post with lots of other solutions).
Note that a bunch of solutions to this common problem can surprisingly be found in the MySQL manual -- even though your problem is in Postgres, not MySQL, the solutions given should work with most SQL variants. See Examples of Common Queries :: The Rows Holding the Group-wise Maximum of a Certain Column.
In Postgres you can use array_agg like this:
SELECT customer,
(array_agg(id ORDER BY total DESC))[1],
max(total)
FROM purchases
GROUP BY customer
This will give you the id of each customer's largest purchase.
Some things to note:
array_agg is an aggregate function, so it works with GROUP BY.
array_agg lets you specify an ordering scoped to just itself, so it doesn't constrain the structure of the whole query. There is also syntax for how you sort NULLs, if you need to do something different from the default.
Once we build the array, we take the first element. (Postgres arrays are 1-indexed, not 0-indexed).
You could use array_agg in a similar way for your third output column, but max(total) is simpler.
Unlike DISTINCT ON, using array_agg lets you keep your GROUP BY, in case you want that for other reasons.
The Query:
SELECT purchases.*
FROM purchases
LEFT JOIN purchases as p
ON
p.customer = purchases.customer
AND
purchases.total < p.total
WHERE p.total IS NULL
HOW DOES THAT WORK! (I've been there)
We want to make sure that we only have the highest total for each purchase.
Some Theoretical Stuff (skip this part if you only want to understand the query)
Let total be a function T(customer, id) that returns the total for a given customer and purchase id.
To prove that a given total T(customer, id) is the highest, we want to prove either
∀x T(customer, id) > T(customer, x) (this total is higher than every other total for that customer)
OR
¬∃x T(customer, id) < T(customer, x) (there exists no higher total for that customer)
The first approach would require us to fetch all the records for that customer, which I do not really like.
The second one needs a smart way to say there can be no record higher than this one.
Back to SQL
If we left join the table on the customer and on the total being less than the joined table's total:
LEFT JOIN purchases as p
ON
p.customer = purchases.customer
AND
purchases.total < p.total
we make sure that every record that has another record with a higher total for the same customer gets joined:
+--------------+---------------------+-----------------+------+------------+---------+
| purchases.id | purchases.customer | purchases.total | p.id | p.customer | p.total |
+--------------+---------------------+-----------------+------+------------+---------+
| 1 | Tom | 200 | 2 | Tom | 300 |
| 2 | Tom | 300 | | | |
| 3 | Bob | 400 | 4 | Bob | 500 |
| 4 | Bob | 500 | | | |
| 5 | Alice | 600 | 6 | Alice | 700 |
| 6 | Alice | 700 | | | |
+--------------+---------------------+-----------------+------+------------+---------+
That lets us filter for the highest total per customer with no grouping needed:
WHERE p.total IS NULL
+--------------+----------------+-----------------+------+--------+---------+
| purchases.id | purchases.name | purchases.total | p.id | p.name | p.total |
+--------------+----------------+-----------------+------+--------+---------+
| 2 | Tom | 300 | | | |
| 4 | Bob | 500 | | | |
| 6 | Alice | 700 | | | |
+--------------+----------------+-----------------+------+--------+---------+
And that's the answer we need.
The following solution is not very efficient, as pointed out by Erwin, because of the presence of subqueries:
select * from purchases p1 where total in
(select max(total) from purchases where p1.customer=customer) order by total desc;
I use this way (postgresql only): https://wiki.postgresql.org/wiki/First/last_%28aggregate%29
-- Create a function that always returns the first non-NULL item
CREATE OR REPLACE FUNCTION public.first_agg ( anyelement, anyelement )
RETURNS anyelement LANGUAGE sql IMMUTABLE STRICT AS $$
SELECT $1;
$$;
-- And then wrap an aggregate around it
CREATE AGGREGATE public.first (
sfunc = public.first_agg,
basetype = anyelement,
stype = anyelement
);
-- Create a function that always returns the last non-NULL item
CREATE OR REPLACE FUNCTION public.last_agg ( anyelement, anyelement )
RETURNS anyelement LANGUAGE sql IMMUTABLE STRICT AS $$
SELECT $2;
$$;
-- And then wrap an aggregate around it
CREATE AGGREGATE public.last (
sfunc = public.last_agg,
basetype = anyelement,
stype = anyelement
);
Then your example should work almost as is:
SELECT FIRST(id), customer, FIRST(total)
FROM purchases
GROUP BY customer
ORDER BY FIRST(total) DESC;
CAVEAT: it ignores NULL rows
Edit 1 - Use the postgres extension instead
Now I use this way: http://pgxn.org/dist/first_last_agg/
To install on ubuntu 14.04:
apt-get install postgresql-server-dev-9.3 git build-essential -y
git clone git://github.com/wulczer/first_last_agg.git
cd first_last_agg
make && sudo make install
psql -c 'create extension first_last_agg'
It's a postgres extension that gives you first and last functions; apparently faster than the above way.
Edit 2 - Ordering and filtering
If you use aggregate functions (like these), you can order the results, without the need to have the data already ordered:
http://www.postgresql.org/docs/current/static/sql-expressions.html#SYNTAX-AGGREGATES
So the equivalent example, with ordering would be something like:
SELECT first(id order by id), customer, first(total order by id)
FROM purchases
GROUP BY customer
ORDER BY first(total);
Of course you can order and filter as you deem fit within the aggregate; it's very powerful syntax.
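For instance, a sketch combining ORDER BY inside the aggregate with the standard FILTER clause (Postgres 9.4+; the filter condition here is only an illustration):
SELECT first(id ORDER BY total DESC) FILTER (WHERE total IS NOT NULL) AS id
     , customer
     , first(total ORDER BY total DESC) FILTER (WHERE total IS NOT NULL) AS total
FROM   purchases
GROUP  BY customer;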
Use ARRAY_AGG function for PostgreSQL, U-SQL, IBM DB2, and Google BigQuery SQL:
SELECT customer, (ARRAY_AGG(id ORDER BY total DESC))[1], MAX(total)
FROM purchases
GROUP BY customer
In SQL Server you can do this:
SELECT *
FROM (
SELECT ROW_NUMBER()
OVER(PARTITION BY customer
ORDER BY total DESC) AS StRank, *
FROM Purchases) n
WHERE StRank = 1
Explanation: rows are partitioned by customer and ordered by total descending; each row within a partition gets a serial number StRank, and we keep only the rows whose StRank is 1 (the highest total per customer).
Very fast solution
SELECT a.*
FROM
purchases a
JOIN (
SELECT customer, min( id ) as id
FROM purchases
GROUP BY customer
) b USING ( id );
and really very fast if the table is indexed by id:
create index purchases_id on purchases (id);
Snowflake/Teradata supports QUALIFY clause which works like HAVING for windowed functions:
SELECT id, customer, total
FROM PURCHASES p
QUALIFY ROW_NUMBER() OVER(PARTITION BY p.customer ORDER BY p.total DESC) = 1
In PostgreSQL, another possibility is to use the first_value window function in combination with SELECT DISTINCT:
select distinct customer_id,
first_value(row(id, total)) over(partition by customer_id order by total desc, id)
from purchases;
I created a composite row (id, total), so both values are returned by the same function call. You can of course always apply first_value() twice.
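For comparison, a minimal sketch applying first_value() twice instead of packing a composite:
SELECT DISTINCT customer_id
     , first_value(id)    OVER w AS id
     , first_value(total) OVER w AS total
FROM   purchases
WINDOW w AS (PARTITION BY customer_id ORDER BY total DESC, id);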
This is the way that works for me:
SELECT article, dealer, price
FROM shop s1
WHERE price=(SELECT MAX(s2.price)
FROM shop s2
WHERE s1.article = s2.article
GROUP BY s2.article)
ORDER BY article;
It selects the highest price for each article.
This is how we can achieve it using window functions:
create table purchases (id int4, customer varchar(10), total integer);
insert into purchases values (1, 'Joe', 5);
insert into purchases values (2, 'Sally', 3);
insert into purchases values (3, 'Joe', 2);
insert into purchases values (4, 'Sally', 1);
select ID, CUSTOMER, TOTAL from (
select ID, CUSTOMER, TOTAL,
row_number () over (partition by CUSTOMER order by TOTAL desc) RN
from purchases) A where RN = 1;
The accepted OMG Ponies' "Supported by any database" solution has good speed in my test.
Here I provide the same approach as a more complete and clean any-database solution. Ties are considered (assume we want only one row for each customer, even if there are multiple records with the max total per customer), and other purchase fields (e.g. purchase_payment_id) will be selected for the real matching rows in the purchase table.
Supported by any database:
select * from purchase
join (
select min(id) as id from purchase
join (
select customer, max(total) as total from purchase
group by customer
) t1 using (customer, total)
group by customer
) t2 using (id)
order by customer
This query is reasonably fast especially when there is a composite index like (customer, total) on the purchase table.
Remark:
t1, t2 are subquery aliases which could be removed depending on the database.
Caveat: the using (...) clause is currently not supported in MS-SQL and Oracle db as of this edit on Jan 2017. You have to expand it yourself to e.g. on t2.id = purchase.id etc. The USING syntax works in SQLite, MySQL and PostgreSQL.
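For example, a sketch of the same query with USING expanded to explicit ON conditions (same purchase table as above):
select purchase.* from purchase
join (
    select min(p.id) as id from purchase p
    join (
        select customer, max(total) as total from purchase
        group by customer
    ) t1 on t1.customer = p.customer and t1.total = p.total
    group by p.customer
) t2 on t2.id = purchase.id
order by purchase.customer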
If you want to select any row (by some specific condition of yours) from the set of aggregated rows, or if you want to use another aggregate function (sum/avg) in addition to max/min, so that you cannot get by with DISTINCT ON alone, you can use the following subquery:
SELECT
  (
    SELECT id FROM t2
    WHERE id = ANY ( ARRAY_AGG( tf.id ) ) AND amount = MAX( tf.amount )
  ) id,
  name,
  MAX(amount) ma,
  SUM( ratio )
FROM t2 tf
GROUP BY name
You can replace amount = MAX( tf.amount ) with any condition you want with one restriction: This subquery must not return more than one row
But if you want to do such things, you are probably looking for window functions.
For SQL Server, the most efficient way is:
with
ids as ( -- condition for splitting the table into groups
select i from (values (9),(12),(17),(18),(19),(20),(22),(21),(23),(10)) as v(i)
)
,src as (
select * from yourTable where <condition> --use this as filter for other conditions
)
,joined as (
select tops.* from ids
cross apply -- it's like a "for each row"
(
select top(1) *
from src
where CommodityId = ids.i
) as tops
)
select * from joined
and don't forget to create a clustered index for the columns used
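For example, a hedged sketch of such an index (the index name is illustrative; yourTable and CommodityId come from the query above):
create clustered index IX_yourTable_CommodityId on yourTable (CommodityId);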
This can be achieved easily by MAX FUNCTION on total and GROUP BY id and customer.
SELECT id, customer, MAX(total) FROM purchases GROUP BY id, customer
ORDER BY total DESC;
My approach via window function dbfiddle:
Assign row_number at each group: row_number() over (partition by agreement_id, order_id ) as nrow
Take only first row at group: filter (where nrow = 1)
with intermediate as (select
*,
row_number() over ( partition by agreement_id, order_id ) as nrow,
(sum( suma ) over ( partition by agreement_id, order_id ))::numeric( 10, 2 ) as order_suma
from <your table>)
select
*,
sum( order_suma ) filter (where nrow = 1) over (partition by agreement_id)
from intermediate

SQL Performance on selecting first/last row for each user on bigger data table

I have read through quite a few greatest-n-per-group posts but still don't seem to find a good solution in terms of performance. I'm running 10.1.43-MariaDB.
I'm trying to get the change in data values in a given time frame, so I need to get the earliest and the latest row from this period. The largest number of rows in a time frame that needs to be calculated right now is around 700k, and it's only going to grow. For now I have just resorted to doing two queries, one for the latest and one for the earliest date, but even this currently has slow performance. The table looks like this:
user_id  data  date
4567     109   28/06/2019 11:04:45
4252     309   18/06/2019 11:04:45
4567     77    18/02/2019 11:04:45
7893     1123  22/06/2019 11:04:45
4252     303   11/06/2019 11:04:45
4252     317   19/06/2019 11:04:45
The date and user_id columns are indexed. Without ordering the rows aren't in any particular order in the database if that makes a difference.
The furthest I have gotten with this issue is a query like this, for a year-long period currently (700k datapoints):
SELECT user_id,
MIN(date) as date, data
FROM datapoint_table
WHERE date >= '2019-01-14'
GROUP BY user_id
This gives me the right date and user_id very fast, in around ~0.05 s. But as is the common issue with greatest-n-per-group, the rest of the row (data in this case) is not from the same row as the date. I have read about other similar questions and tried a subquery like this:
SELECT a.user_id, a.date, a.data
FROM datapoint_table a
INNER JOIN (
SELECT datapoint_table.user_id,
MIN(date) as date, data
FROM datapoint_table
WHERE date >= '2019-01-01'
GROUP BY user_id
) b ON a.user_id = b.user_id AND a.date = b.date
This query takes around 15 s to complete and gets the correct data value. 15 s, though, is just way too long, and I must be doing something wrong when the first query is so fast. I also tried (MAX)-(MIN) on the data with GROUP BY user_id, but it also performed slowly.
What would be more efficient way of getting the same data value as the date or even the difference in latest and earliest data for each user?
Assuming you are using a fairly recent version of either MariaDB or MySQL, then ROW_NUMBER would probably be the most efficient way to find the earliest record for each user:
WITH cte AS (
SELECT *, ROW_NUMBER() OVER (PARTITION BY user_id ORDER BY date) rn
FROM datapoint_table
WHERE date > '2019-01-14'
)
SELECT user_id, data, date
FROM cte
WHERE rn = 1;
To the above you could also consider adding the following index:
CREATE INDEX ON datapoint_table (user_id, date);
You could also try the following variant index with the columns reversed:
CREATE INDEX ON datapoint_table (date, user_id);
It is not clear which version of the index would perform the best, which would depend on your data and the execution plan. Ideally one of the above two indices would help the database execute ROW_NUMBER, along with the WHERE clause.
If your database version does not support ROW_NUMBER, then you may continue with your current approach:
SELECT d1.user_id, d1.data, d1.date
FROM datapoint_table d1
INNER JOIN
(
SELECT user_id, MIN(date) AS min_date
FROM datapoint_table
WHERE date > '2019-01-14'
GROUP BY user_id
) d2
ON d1.user_id = d2.user_id AND d1.date = d2.min_date
WHERE
d1.date > '2019-01-14';
Again, the indices suggested should at least speed up the execution of the GROUP BY subquery.

MySQL query for each subgroup

I have a table like this:
Year Month Code
1850 January 5210
1850 February 3524
1851 January 6752
1851 January 9877
1851 February 3698
I want to delete repeated months within a year (e.g. 1851 January). I don't mind losing one of the codes (6752 or 9877). I thought of using:
Select * from table1 group by Month
But I need to group for each year. Otherwise I will select only one January from the three in the table, and I need to select two of them (one in 1850 and one in 1851).
Of course my table is huge and I cannot do it manually. Thanks.
If you want to have only the entries with count>1 then you can do this:
Select year, month, code, count(1) as cnt from table1 group by year, month having cnt>1;
If the table is huge, make sure that both year and month are indexed; otherwise you'll spend a lot of time waiting for results.
http://sqlfiddle.com/#!2/eb325/3
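For example, a composite index covering both columns (the index name is illustrative):
ALTER TABLE table1 ADD INDEX year_month_ndx (year, month);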
UPDATE: for the case where there are more than 2 rows (and actually in general, if you don't care about the lost "code" entries), it might make sense to select one entry from each year-month into a new table (which will leave you with unique year-month combinations) and then discard the old table, like that:
CREATE TABLE table1_temp SELECT year, month, MIN(code) as code FROM table1 GROUP BY year, month;
DROP TABLE table1;
RENAME TABLE table1_temp TO table1;
http://sqlfiddle.com/#!2/113954/1
The query suggested by @Ashalynd will work if you have only 2 duplicate rows, but it will not work if you have 3 rows for year 1851 and month January... The query below will take care of it. You can remove all the rows it returns.
SELECT
DISTINCT b.CODE
FROM
(SELECT YEAR, MONTH, CODE, COUNT(1) AS cnt FROM table1 GROUP BY YEAR, MONTH HAVING cnt>1) a,
table1 b
WHERE a.year=b.year AND a.month=b.month AND a.code<>b.code;
Another approach is as below:
CREATE TABLE table1_new LIKE table1;
ALTER TABLE table1_new ADD UNIQUE KEY (YEAR, MONTH);
INSERT IGNORE INTO table1_new SELECT * FROM table1;
TRUNCATE TABLE table1;
INSERT INTO table1 SELECT * FROM table1_new;
DROP TABLE table1_new;
Note: if you want to keep your values unique based on these fields, then create a unique index on your main table.
Try this:
Select *, count(month) as cnt from table1 group by Year, Month;
You will get all the months under the different years, with each repeated month collapsed to a single row.

Need help optimizing 4 heavy queries on one webpage

I have four queries that run on one web page. I use them for statistics and they are taking too long to load.
Here are my current configurations
use the text wrapping button on pastebin to make it easier to read.
I have a lot of RAM dedicated to MySQL, but it still takes a long time. I have also indexed most of the columns.
I'm just trying to see what other options I have.
I put "show create table" and total count(*) in here. I'm going to rename everything and paste in SO. I agree that someone in the future may use it.
QUERY ONE
SELECT SQL_NO_CACHE
DATE_FORMAT(DateActioned,'%M-%Y') as val1,
COUNT(*) AS total_count
FROM
db.statisticsresults
WHERE
DID = 28
AND ActionTypeID = 1
AND DateActioned IS NOT NULL
GROUP BY
DATE_FORMAT(DateActioned, '%m-%y')
ORDER BY
YEAR( DateActioned ) DESC,
MONTH( DateActioned ) DESC
For this, I would have a covering index based on your key elements so the engine does not have to go back to the raw data... Based on this and your following queries, I would have that column in the primary index position, such as:
StatisticsResults -- index ( DID, ActionTypeID, DateActioned )
The ORDER BY on the respective YEAR() descending and MONTH() descending will do the same thing as your hard-coded references to find the field in the list.
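As DDL, that index suggestion could look like this (the index name is illustrative):
ALTER TABLE statisticsresults ADD INDEX did_type_date_ndx (DID, ActionTypeID, DateActioned);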
QUERY TWO
-- 381.812
SELECT SQL_NO_CACHE
DATE_FORMAT(DateActioned,'%M-%Y') as val1,
COUNT(*) AS total_count
FROM
db.statisticsdivision
WHERE
DID = 28
AND ActionTypeID = 9
AND DateActioned IS NOT NULL
GROUP BY
DATE_FORMAT(DateActioned, '%m-%y')
ORDER BY
YEAR( DateActioned ) DESC,
MONTH( DateActioned ) DESC
On this one, I changed DID = '28' to DID = 28. If the column is numeric, don't confuse the engine by making it convert one type to the other. The same indexes from query 1 would apply here too.
QUERY THREE
-- 33.899
SELECT SQL_NO_CACHE DISTINCT
AID,
COUNT(*) AS acount
FROM
db.statisticsresults
JOIN db.division_id USING(AID)
WHERE
DID = 28
GROUP BY
AID
ORDER BY
count(*) DESC
LIMIT
19
This one looks like a bit of a waste... you are joining to the division table based on an "AID" column in the stats table. Why are you doing the join unless you actually expect some invalid "AID" values that are not in the division table? Again, change your "DID" comparison to 28 instead of '28'. Ensure your division table has its index on "AID" for the join. The second index from query 1 appears to be your better option.
QUERY FOUR
-- 21.403
SELECT SQL_NO_CACHE DISTINCT
TID,
tax,
agent,
COUNT(*) AS t_count
FROM
db.statisticsresults sr
JOIN db.tax_id USING(TID)
JOIN db.agent_id ai ON(ai.AID = sr.AID)
WHERE
DID = 28
GROUP BY
TID,
sr.AID
ORDER BY
COUNT(*) DESC
LIMIT 19
Again, change the "DID" comparison from '28' to 28.
For your TAX_ID table, have a covering index on that too so it can handle the join to the agent table without going to the raw page data:
Tax_ID -- index ( tid, aid )
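As DDL, that could look like this (the index name is illustrative, matching the join columns above):
ALTER TABLE tax_id ADD INDEX tid_aid_ndx (TID, AID);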
Finally, if you are dealing with your original list finding things only from Jan 2012 to Dec 2013, you can simplify querying the ENTIRE table of stats by adding to your WHERE clause...
AND DateActioned >= '2012-01-01'
So you completely skip over anything prior to 2012 (old data I presume?)

How can I write a query that aggregate a single row with latest date among multiple set of rows?

I have a MySQL table where there are many rows for each person, and I want to write a query which aggregates rows with a special constraint (one per person).
For example, let's say the table consists of the following data.
name date reason
---------------------------------------
John 2013-04-01 14:00:00 Vacation
John 2013-03-31 18:00:00 Sick
Ted 2012-05-06 20:00:00 Sick
Ted 2012-02-20 01:00:00 Vacation
John 2011-12-21 00:00:00 Sick
Bob 2011-04-02 20:00:00 Sick
I want to see the distribution of 'reason' column. If I just write a query like below
select reason, count(*) as count from table group by reason
then I will be able to see the count of each reason for this table overall.
reason count
------------------
Sick 4
Vacation 2
However, I am only interested in single reason from each person. The reason that should be counted should be from a row with latest date from the person's records. For example, John's latest reason would be Vacation while Ted's latest reason would be Sick. And Bob's latest reason (and the only reason) is Sick.
The expected result for that query should be like below. (Sum of count will be 3 because there are only 3 people)
reason count
-----------------
Sick 2
Vacation 1
Is it possible to write a query such that single latest reason will be counted when I want to see distribution(count) of reasons?
Here are some facts about the table.
The table has tens of millions of rows
For most of times, each person has one reason.
Some people have multiple reasons, but 99.99% of people have fewer than 5 reasons.
There are about 30 different reasons while there are millions of distinct names.
The table is partitioned based on date range.
SELECT T.REASON, COUNT(*)
FROM
(
SELECT PERSON, MAX(DATE) AS MAX_DATE
FROM TABLE-NAME
GROUP BY PERSON
) A, TABLE-NAME T
WHERE T.PERSON = A.PERSON AND T.DATE = A.MAX_DATE
GROUP BY T.REASON
Try this
select reason, count(*) from
(select reason from table where date in
(select max(date) from table group by name)) t
group by reason
In MySQL, it's not very efficient to do this kind of query, since you don't have access to tools like the window/partitioning functions of SQL Server or Oracle.
You can still emulate it by doing a subquery and retrieve the rows based on the condition you need, here the maximum date :
SELECT t.reason, COUNT(1)
FROM
(
SELECT name, MAX(adate) AS maxDate
FROM #aTable
GROUP BY name
) maxDateRows
INNER JOIN #aTable t ON maxDateRows.name = t.name
AND maxDateRows.maxDate = t.adate
GROUP BY t.reason
You can see a sample here.
Test this query on your samples, but I'm afraid that it will be slow as hell.
For your information, you can do the same thing in a more elegant and much much faster way in SQL Server :
SELECT reason, COUNT(1)
FROM
(
SELECT name
, reason
, RANK() OVER(PARTITION BY name ORDER BY adate DESC) as Rank
FROM #aTable
) AS rankTable
WHERE Rank = 1
GROUP BY reason
The sample is here
If you are really stuck with MySQL, and the first query is too slow, then you can split the problem.
Do a first query creating a table:
CREATE TABLE maxDateRows AS
SELECT name, MAX(adate) AS maxDate
FROM #aTable
GROUP BY name
Then create an index on both name and maxDate.
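For example (the index name is illustrative):
CREATE INDEX maxdaterows_name_date ON maxDateRows (name, maxDate);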
Finally, get the results :
SELECT t.reason, COUNT(1)
FROM maxDateRows m
INNER JOIN #aTable t ON m.name = t.name
AND m.maxDate = t.adate
GROUP BY t.reason
The solution you are looking for seems to be solved by this query :
select
reason,
count(*)
from (select * from tablename group by name) abc
group by
reason
It is quite fast and simple. You can view the SQL Fiddle
Apologies if this answer duplicates an existing one. Maybe I'm suffering from some form of aphasia, but I cannot see it...
SELECT x.reason
, COUNT(*)
FROM absentism x
JOIN
( SELECT name,MAX(date) max_date FROM absentism GROUP BY name) y
ON y.name = x.name
AND y.max_date = x.date
GROUP
BY reason;