Join Left or WHERE solution - Most efficient? - mysql

I am learning about databases at college and have an assignment about finding the minimum average exam grade for a college course. I have made two solutions, and I hope the experts here can help me with:
What is the best/most efficient solution?
Solution 1:
SELECT courses.name, MIN(avg_grade)
FROM (SELECT courseCode, AVG(grade) as avg_grade
FROM exams
GROUP BY courseCode) avg_grades, courses
WHERE courses.code = avg_grades.courseCode
Solution 2:
SELECT name, min(avg_grade)
FROM (SELECT courses.name, AVG(grade) as avg_grade
FROM courses
LEFT JOIN exams on exams.courseCode = courses.code
GROUP BY courseCode) mytable
I have also been wondering whether JOIN or LEFT JOIN is the correct one to use here.

Your two queries are different, so you can't really compare their efficiency: your second query will return records for courses with no exam results.
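For example, a hypothetical course F with no exam rows (not in the original data, just for illustration) would still show up in the LEFT JOIN version's derived table, with a NULL average:
Name    | avg_grade
--------+--------------
F       | NULL
MIN() ignores NULLs, so the minimum itself is unchanged, but the intermediate results differ.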
Assuming that you switch the LEFT JOIN to an INNER to make the queries comparable, then I would expect the first query to be slightly more efficient since it only has one
derived table, and the second has two:
Solution 1:
ID | SELECT_TYPE | TABLE      | TYPE | POSSIBLE_KEYS | KEY | KEY_LEN | REF | ROWS | FILTERED | EXTRA
---|-------------|------------|------|---------------|-----|---------|-----|------|----------|---------------------------------
 1 | PRIMARY     | <derived2> | ALL  |               |     |         |     |    5 |      100 |
 1 | PRIMARY     | courses    | ALL  |               |     |         |     |    5 |      100 | Using where; Using join buffer
 2 | DERIVED     | exams      | ALL  |               |     |         |     |    5 |      100 | Using temporary; Using filesort
Solution 2:
ID | SELECT_TYPE | TABLE      | TYPE | POSSIBLE_KEYS | KEY | KEY_LEN | REF | ROWS | FILTERED | EXTRA
---|-------------|------------|------|---------------|-----|---------|-----|------|----------|---------------------------------
 1 | PRIMARY     | <derived2> | ALL  |               |     |         |     |    5 |      100 |
 2 | DERIVED     | courses    | ALL  |               |     |         |     |    5 |      100 | Using temporary; Using filesort
 2 | DERIVED     | exams      | ALL  |               |     |         |     |    5 |      100 | Using where; Using join buffer
I would however check this against your own execution plans as mine was just a quick example on SQL Fiddle.
I would like to take this chance to advise against using the ANSI-89 implicit join syntax; it was replaced over 20 years ago by the explicit join syntax in the ANSI-92 standard. Aaron Bertrand has written a great article on why to switch, so I won't duplicate it here.
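As a minimal sketch of the difference (treating avg_grades here as if it were a real table rather than the derived table above):
-- ANSI-89 implicit join: tables separated by commas, join condition in WHERE
SELECT courses.name, avg_grades.avg_grade
FROM courses, avg_grades
WHERE courses.code = avg_grades.courseCode;
-- ANSI-92 explicit join: same result, condition attached to the join itself
SELECT courses.name, avg_grades.avg_grade
FROM courses
INNER JOIN avg_grades ON courses.code = avg_grades.courseCode;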
Another, much more important point, though, is that your queries are not deterministic; that is to say, you could run the same query twice and get two different results even with no underlying change in the data.
Taking your second query as an example (although you will notice both queries are wrong on the SQL-Fiddle), you have a subquery MyTable like so:
SELECT courses.name, AVG(grade) as avg_grade
FROM courses
LEFT JOIN exams on exams.courseCode = courses.code
GROUP BY courseCode
This returned a table like so:
Name | avg_grade
--------+--------------
A | 10
B | 5
C | 6
D | 7
E | 2
You may expect the query as a whole to return:
Name | avg_grade
--------+--------------
E | 2
Since 2 is the lowest average grade and E is the name that corresponds to it. You would be wrong, though; as demonstrated here, this actually returns:
Name | avg_grade
--------+--------------
A | 2
What is essentially happening is that MySQL calculates the minimum avg_grade correctly, but since you have not added any columns to the GROUP BY, you have given MySQL carte blanche to choose whatever value for Name it likes.
To get the output you want, I think you need:
SELECT courses.name, MIN(avg_grade)
FROM ( SELECT courseCode, AVG(grade) as avg_grade
       FROM exams
       GROUP BY courseCode
     ) avg_grades
INNER JOIN courses
        ON courses.code = avg_grades.courseCode
GROUP BY courses.name;
Or, if you only want the course with the lowest average grade, then use:
SELECT courseCode, AVG(grade) as avg_grade
FROM exams
GROUP BY courseCode
ORDER BY avg_grade
LIMIT 1;
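If ties are possible (two courses sharing the lowest average), LIMIT 1 drops one of them arbitrarily. A hedged variant that returns every tied course:
SELECT courseCode, AVG(grade) AS avg_grade
FROM exams
GROUP BY courseCode
HAVING AVG(grade) = (SELECT MIN(avg_grade)
                     FROM (SELECT AVG(grade) AS avg_grade
                           FROM exams
                           GROUP BY courseCode) t);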
Examples on SQL Fiddle
Please excuse the laziness of what I am about to do, but I have explained this problem many times before and now have a standard response that I post to explain the issue of MySQL grouping. It goes into more detail than the above and hopefully explains it further.
MySQL Implicit Grouping
I would advise avoiding the implicit grouping offered by MySQL where possible; by this I mean including columns in the select list even though they are not contained in an aggregate function or the GROUP BY clause.
Imagine the following simple table (T):
ID | Column1 | Column2 |
----|---------+----------|
1 | A | X |
2 | A | Y |
In MySQL you can write
SELECT ID, Column1, Column2
FROM T
GROUP BY Column1;
This actually breaks the SQL standard, but it works in MySQL. The trouble is that it is non-deterministic; the result:
ID | Column1 | Column2 |
----|---------+----------|
1 | A | X |
Is no more or less correct than
ID | Column1 | Column2 |
----|---------+----------|
2 | A | Y |
So what you are saying is: give me one row for each distinct value of Column1. Both result sets satisfy that, so how do you know which one you will get? Well, you don't. It seems to be a fairly popular misconception that you can add an ORDER BY clause to influence the result, so that, for example, the following query:
SELECT ID, Column1, Column2
FROM T
GROUP BY Column1
ORDER BY ID DESC;
Would ensure that you get the following result:
ID | Column1 | Column2 |
----|---------+----------|
2 | A | Y |
because of the ORDER BY ID DESC, however this is not true (as demonstrated here).
The MySQL documents state:
The server is free to choose any value from each group, so unless they are the same, the values chosen are indeterminate. Furthermore, the selection of values from each group cannot be influenced by adding an ORDER BY clause.
So even though you have an ORDER BY, it does not apply until after one row per group has been selected, and that one row is non-deterministic.
The SQL standard does allow columns in the select list that are not contained in the GROUP BY or an aggregate function; however, these columns must be functionally dependent on a column in the GROUP BY. For example, ID in the sample table is the PRIMARY KEY, so we know it is unique in the table. The following query therefore conforms to the SQL standard: it runs in MySQL, but fails in many current DBMSs (at the time of writing, PostgreSQL is the closest DBMS I know of to correctly implementing the standard):
SELECT ID, Column1, Column2
FROM T
GROUP BY ID;
Since ID is unique for each row, there can only be one value of Column1 and one value of Column2 for each ID, so there is no ambiguity about what to return for each row.
EDIT
From the SQL-2003-Standard (5WD-02-Foundation-2003-09 - page 346) - http://www.wiscorp.com/sql_2003_standard.zip
If T is a grouped table, then let G be the set of grouping columns of T. In each <value expression> contained in <select list>, each column reference that references a column of T shall reference some column C that is functionally dependent on G or shall be contained in an aggregated argument of a <set function specification> whose aggregation query is QS.


How to get dependent data using sql query [duplicate]

As the title suggests, I'd like to select the first row of each set of rows grouped with a GROUP BY.
Specifically, if I've got a purchases table that looks like this:
SELECT * FROM purchases;
My Output:
id | customer | total
---|----------|------
 1 | Joe      |     5
 2 | Sally    |     3
 3 | Joe      |     2
 4 | Sally    |     1
I'd like to query for the id of the largest purchase (total) made by each customer. Something like this:
SELECT FIRST(id), customer, FIRST(total)
FROM purchases
GROUP BY customer
ORDER BY total DESC;
Expected Output:
FIRST(id) | customer | FIRST(total)
----------|----------|-------------
        1 | Joe      |            5
        2 | Sally    |            3
DISTINCT ON is typically simplest and fastest for this in PostgreSQL.
(For performance optimization for certain workloads see below.)
SELECT DISTINCT ON (customer)
id, customer, total
FROM purchases
ORDER BY customer, total DESC, id;
Or shorter (if not as clear) with ordinal numbers of output columns:
SELECT DISTINCT ON (2)
id, customer, total
FROM purchases
ORDER BY 2, 3 DESC, 1;
If total can be null, add NULLS LAST:
...
ORDER BY customer, total DESC NULLS LAST, id;
Works either way, but you'll want to match any existing indexes.
db<>fiddle here
Major points
DISTINCT ON is a PostgreSQL extension of the standard, where only DISTINCT on the whole SELECT list is defined.
List any number of expressions in the DISTINCT ON clause; the combined row value defines duplicates. The manual:
Obviously, two rows are considered distinct if they differ in at least
one column value. Null values are considered equal in this
comparison.
Bold emphasis mine.
DISTINCT ON can be combined with ORDER BY. Leading expressions in ORDER BY must be in the set of expressions in DISTINCT ON, but you can rearrange order among those freely. Example.
You can add additional expressions to ORDER BY to pick a particular row from each group of peers. Or, as the manual puts it:
The DISTINCT ON expression(s) must match the leftmost ORDER BY
expression(s). The ORDER BY clause will normally contain additional
expression(s) that determine the desired precedence of rows within
each DISTINCT ON group.
I added id as last item to break ties:
"Pick the row with the smallest id from each group sharing the highest total."
To order results in a way that disagrees with the sort order determining the first row per group, you can nest the above query in an outer query with another ORDER BY. Example.
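A minimal sketch of that nesting, ordering the final result by total while DISTINCT ON still picks one row per customer:
SELECT *
FROM (
   SELECT DISTINCT ON (customer)
          id, customer, total
   FROM   purchases
   ORDER  BY customer, total DESC, id
   ) sub
ORDER  BY total DESC;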
If total can be null, you most probably want the row with the greatest non-null value. Add NULLS LAST as demonstrated. See:
Sort by column ASC, but NULL values first?
The SELECT list is not constrained by expressions in DISTINCT ON or ORDER BY in any way:
You don't have to include any of the expressions in DISTINCT ON or ORDER BY.
You can include any other expression in the SELECT list. This is instrumental for replacing complex subqueries and aggregate / window functions.
I tested with Postgres versions 8.3 – 15. But the feature has been there at least since version 7.1, so basically always.
Index
The perfect index for the above query would be a multi-column index spanning all three columns in matching sequence and with matching sort order:
CREATE INDEX purchases_3c_idx ON purchases (customer, total DESC, id);
May be too specialized. But use it if read performance for the particular query is crucial. If you have DESC NULLS LAST in the query, use the same in the index so that sort order matches and the index is perfectly applicable.
Effectiveness / Performance optimization
Weigh cost and benefit before creating tailored indexes for each query. The potential of above index largely depends on data distribution.
The index is used because it delivers pre-sorted data. In Postgres 9.2 or later the query can also benefit from an index only scan if the index is smaller than the underlying table. The index has to be scanned in its entirety, though. Example.
For few rows per customer (high cardinality in column customer), this is very efficient. Even more so if you need sorted output anyway. The benefit shrinks with a growing number of rows per customer.
Ideally, you have enough work_mem to process the involved sort step in RAM without spilling to disk. But generally setting work_mem too high can have adverse effects. Consider SET LOCAL for exceptionally big queries. Find how much you need with EXPLAIN ANALYZE; mention of "Disk:" in the sort step indicates the need for more (a sketch follows the links below):
Configuration parameter work_mem in PostgreSQL on Linux
Optimize simple query using ORDER BY date and text
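A sketch of the SET LOCAL approach (the 64MB value is a placeholder, not a recommendation; size it from the sort node in EXPLAIN ANALYZE):
BEGIN;
SET LOCAL work_mem = '64MB';  -- applies to the current transaction only
EXPLAIN (ANALYZE, BUFFERS)
SELECT DISTINCT ON (customer)
       id, customer, total
FROM   purchases
ORDER  BY customer, total DESC, id;
COMMIT;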
For many rows per customer (low cardinality in column customer), an "index skip scan" or "loose index scan" would be (much) more efficient. But that's not implemented up to Postgres 15. Serious work to implement it one way or another has been ongoing for years now, but so far unsuccessful. See here and here.
For now, there are faster query techniques to substitute for this. In particular if you have a separate table holding unique customers, which is the typical use case. But also if you don't:
SELECT DISTINCT is slower than expected on my table in PostgreSQL
Optimize GROUP BY query to retrieve latest row per user
Optimize groupwise maximum query
Query last N related rows per row
Benchmarks
See separate answer.
On databases that support CTE and windowing functions:
WITH summary AS (
SELECT p.id,
p.customer,
p.total,
ROW_NUMBER() OVER(PARTITION BY p.customer
ORDER BY p.total DESC) AS rank
FROM PURCHASES p)
SELECT *
FROM summary
WHERE rank = 1
Supported by any database:
But you need to add logic to break ties:
SELECT MIN(x.id), -- change to MAX if you want the highest
x.customer,
x.total
FROM PURCHASES x
JOIN (SELECT p.customer,
MAX(total) AS max_total
FROM PURCHASES p
GROUP BY p.customer) y ON y.customer = x.customer
AND y.max_total = x.total
GROUP BY x.customer, x.total
Benchmarks
I tested the most interesting candidates:
Initially with Postgres 9.4 and 9.5.
Added accented tests for Postgres 13 later.
Basic test setup
Main table: purchases:
CREATE TABLE purchases (
id serial -- PK constraint added below
, customer_id int -- REFERENCES customer
, total int -- could be amount of money in Cent
, some_column text -- to make the row bigger, more realistic
);
Dummy data (with some dead tuples), PK, index:
INSERT INTO purchases (customer_id, total, some_column) -- 200k rows
SELECT (random() * 10000)::int AS customer_id -- 10k distinct customers
, (random() * random() * 100000)::int AS total
, 'note: ' || repeat('x', (random()^2 * random() * random() * 500)::int)
FROM generate_series(1,200000) g;
ALTER TABLE purchases ADD CONSTRAINT purchases_id_pkey PRIMARY KEY (id);
DELETE FROM purchases WHERE random() > 0.9; -- some dead rows
INSERT INTO purchases (customer_id, total, some_column)
SELECT (random() * 10000)::int AS customer_id -- 10k customers
, (random() * random() * 100000)::int AS total
, 'note: ' || repeat('x', (random()^2 * random() * random() * 500)::int)
FROM generate_series(1,20000) g; -- add 20k to make it ~ 200k
CREATE INDEX purchases_3c_idx ON purchases (customer_id, total DESC, id);
VACUUM ANALYZE purchases;
customer table - used for optimized query:
CREATE TABLE customer AS
SELECT customer_id, 'customer_' || customer_id AS customer
FROM purchases
GROUP BY 1
ORDER BY 1;
ALTER TABLE customer ADD CONSTRAINT customer_customer_id_pkey PRIMARY KEY (customer_id);
VACUUM ANALYZE customer;
In my second test for 9.5 I used the same setup, but with 100000 distinct customer_id to get few rows per customer_id.
Object sizes for table purchases
Basic setup: 200k rows in purchases, 10k distinct customer_id, avg. 20 rows per customer.
For Postgres 9.5 I added a 2nd test with 86446 distinct customers - avg. 2.3 rows per customer.
Generated with a query taken from here:
Measure the size of a PostgreSQL table row
Gathered for Postgres 9.5:
what | bytes/ct | bytes_pretty | bytes_per_row
-----------------------------------+----------+--------------+---------------
core_relation_size | 20496384 | 20 MB | 102
visibility_map | 0 | 0 bytes | 0
free_space_map | 24576 | 24 kB | 0
table_size_incl_toast | 20529152 | 20 MB | 102
indexes_size | 10977280 | 10 MB | 54
total_size_incl_toast_and_indexes | 31506432 | 30 MB | 157
live_rows_in_text_representation | 13729802 | 13 MB | 68
------------------------------ | | |
row_count | 200045 | |
live_tuples | 200045 | |
dead_tuples | 19955 | |
Queries
1. row_number() in CTE (see other answer)
WITH cte AS (
SELECT id, customer_id, total
, row_number() OVER (PARTITION BY customer_id ORDER BY total DESC) AS rn
FROM purchases
)
SELECT id, customer_id, total
FROM cte
WHERE rn = 1;
2. row_number() in subquery (my optimization)
SELECT id, customer_id, total
FROM (
SELECT id, customer_id, total
, row_number() OVER (PARTITION BY customer_id ORDER BY total DESC) AS rn
FROM purchases
) sub
WHERE rn = 1;
3. DISTINCT ON (see other answer)
SELECT DISTINCT ON (customer_id)
id, customer_id, total
FROM purchases
ORDER BY customer_id, total DESC, id;
4. rCTE with LATERAL subquery (see here)
WITH RECURSIVE cte AS (
( -- parentheses required
SELECT id, customer_id, total
FROM purchases
ORDER BY customer_id, total DESC
LIMIT 1
)
UNION ALL
SELECT u.*
FROM cte c
, LATERAL (
SELECT id, customer_id, total
FROM purchases
WHERE customer_id > c.customer_id -- lateral reference
ORDER BY customer_id, total DESC
LIMIT 1
) u
)
SELECT id, customer_id, total
FROM cte
ORDER BY customer_id;
5. customer table with LATERAL (see here)
SELECT l.*
FROM customer c
, LATERAL (
SELECT id, customer_id, total
FROM purchases
WHERE customer_id = c.customer_id -- lateral reference
ORDER BY total DESC
LIMIT 1
) l;
6. array_agg() with ORDER BY (see other answer)
SELECT (array_agg(id ORDER BY total DESC))[1] AS id
, customer_id
, max(total) AS total
FROM purchases
GROUP BY customer_id;
Results
Execution times for the above queries, using EXPLAIN (ANALYZE, TIMING OFF, COSTS OFF), best of 5 runs, to compare with a warm cache.
All queries used an Index Only Scan on purchases_3c_idx (among other steps); some only to benefit from the smaller size of the index, others more effectively.
A. Postgres 9.4 with 200k rows and ~ 20 per customer_id
1. 273.274 ms
2. 194.572 ms
3. 111.067 ms
4. 92.922 ms -- !
5. 37.679 ms -- winner
6. 189.495 ms
B. Same as A. with Postgres 9.5
1. 288.006 ms
2. 223.032 ms
3. 107.074 ms
4. 78.032 ms -- !
5. 33.944 ms -- winner
6. 211.540 ms
C. Same as B., but with ~ 2.3 rows per customer_id
1. 381.573 ms
2. 311.976 ms
3. 124.074 ms -- winner
4. 710.631 ms
5. 311.976 ms
6. 421.679 ms
Retest with Postgres 13 on 2021-08-11
Simplified test setup: no deleted rows, because VACUUM ANALYZE cleans the table completely for the simple case.
Important changes for Postgres:
General performance improvements.
CTEs can be inlined since Postgres 12, so queries 1. and 2. now perform mostly identically (same query plan).
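If the old behavior is ever needed, Postgres 12+ also lets you force materialization per CTE; a sketch using query 1. from above:
WITH cte AS MATERIALIZED (  -- or AS NOT MATERIALIZED to force inlining
   SELECT id, customer_id, total
        , row_number() OVER (PARTITION BY customer_id ORDER BY total DESC) AS rn
   FROM   purchases
   )
SELECT id, customer_id, total
FROM   cte
WHERE  rn = 1;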
D. Like B. ~ 20 rows per customer_id
1. 103 ms
2. 103 ms
3. 23 ms -- winner
4. 71 ms
5. 22 ms -- winner
6. 81 ms
db<>fiddle here
E. Like C. ~ 2.3 rows per customer_id
1. 127 ms
2. 126 ms
3. 36 ms -- winner
4. 620 ms
5. 145 ms
6. 203 ms
db<>fiddle here
Accented tests with Postgres 13
1M rows, 10,000 vs. 100 vs. 1.6 rows per customer.
F. with ~ 10,000 rows per customer
1. 526 ms
2. 527 ms
3. 127 ms
4. 2 ms -- winner !
5. 1 ms -- winner !
6. 356 ms
db<>fiddle here
G. with ~ 100 rows per customer
1. 535 ms
2. 529 ms
3. 132 ms
4. 108 ms -- !
5. 71 ms -- winner
6. 376 ms
db<>fiddle here
H. with ~ 1.6 rows per customer
1. 691 ms
2. 684 ms
3. 234 ms -- winner
4. 4669 ms
5. 1089 ms
6. 1264 ms
db<>fiddle here
Conclusions
DISTINCT ON uses the index effectively and typically performs best for few rows per group. And it performs decently even with many rows per group.
For many rows per group, emulating an index skip scan with an rCTE performs best - second only to the query technique with a separate lookup table (if that's available).
The row_number() technique demonstrated in the currently accepted answer never wins any performance test. Not then, not now. It never comes even close to DISTINCT ON, not even when the data distribution is unfavorable for the latter. The only good thing about row_number(): it does not scale terribly, just mediocre.
More benchmarks
Benchmark by "ogr" with 10M rows and 60k unique "customers" on Postgres 11.5. Results are in line with what we have seen so far:
Proper way to access latest row for each individual identifier?
Original (outdated) benchmark from 2011
I ran three tests with PostgreSQL 9.1 on a real life table of 65579 rows and single-column btree indexes on each of the three columns involved and took the best execution time of 5 runs.
Comparing OMG Ponies' first query (A) to the above DISTINCT ON solution (B):
Selecting the whole table results in 5958 rows in this case.
A: 567.218 ms
B: 386.673 ms
Use a condition WHERE customer BETWEEN x AND y, resulting in 1000 rows.
A: 249.136 ms
B: 55.111 ms
Select a single customer with WHERE customer = x.
A: 0.143 ms
B: 0.072 ms
Same test repeated with the index described in the other answer:
CREATE INDEX purchases_3c_idx ON purchases (customer, total DESC, id);
1A: 277.953 ms
1B: 193.547 ms
2A: 249.796 ms -- special index not used
2B: 28.679 ms
3A: 0.120 ms
3B: 0.048 ms
This is the common greatest-n-per-group problem, which already has well-tested and highly optimized solutions. Personally, I prefer the left join solution by Bill Karwin (the original post with lots of other solutions).
Note that a bunch of solutions to this common problem can surprisingly be found in the MySQL manual; even though your problem is in Postgres, not MySQL, the solutions given should work with most SQL variants. See Examples of Common Queries :: The Rows Holding the Group-wise Maximum of a Certain Column.
In Postgres you can use array_agg like this:
SELECT customer,
(array_agg(id ORDER BY total DESC))[1],
max(total)
FROM purchases
GROUP BY customer
This will give you the id of each customer's largest purchase.
Some things to note:
array_agg is an aggregate function, so it works with GROUP BY.
array_agg lets you specify an ordering scoped to just itself, so it doesn't constrain the structure of the whole query. There is also syntax for how you sort NULLs, if you need to do something different from the default (see the sketch after these notes).
Once we build the array, we take the first element. (Postgres arrays are 1-indexed, not 0-indexed).
You could use array_agg in a similar way for your third output column, but max(total) is simpler.
Unlike DISTINCT ON, using array_agg lets you keep your GROUP BY, in case you want that for other reasons.
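A sketch of that NULL handling inside the aggregate's own ORDER BY (assuming total can be NULL; NULLS LAST keeps non-NULL purchases first):
SELECT customer
     , (array_agg(id ORDER BY total DESC NULLS LAST))[1] AS id
     , max(total) AS total
FROM purchases
GROUP BY customer;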
The Query:
SELECT purchases.*
FROM purchases
LEFT JOIN purchases as p
ON
p.customer = purchases.customer
AND
purchases.total < p.total
WHERE p.total IS NULL
How does that work? (I've been there.)
We want to make sure that we only keep the purchase with the highest total for each customer.
Some Theoretical Stuff (skip this part if you only want to understand the query)
Let total be a function T(customer, id) that returns the total for a given customer and id.
To prove that a given total T(customer, id) is the highest, we have to prove one of two things:
∀x T(customer,id) > T(customer,x) (this total is higher than all other
total for that customer)
OR
¬∃x T(customer, id) < T(customer, x) (there exists no higher total for
that customer)
The first approach would require us to fetch all the records for that customer, which I do not really like.
The second one needs a smart way of saying "there can be no record higher than this one".
Back to SQL
If we left join the table to itself on the name, with the condition that the original row's total is less than the joined row's:
LEFT JOIN purchases as p
ON
p.customer = purchases.customer
AND
purchases.total < p.total
we make sure that every record that has another record with a higher total for the same customer gets joined:
+--------------+---------------------+-----------------+------+------------+---------+
| purchases.id | purchases.customer | purchases.total | p.id | p.customer | p.total |
+--------------+---------------------+-----------------+------+------------+---------+
| 1 | Tom | 200 | 2 | Tom | 300 |
| 2 | Tom | 300 | | | |
| 3 | Bob | 400 | 4 | Bob | 500 |
| 4 | Bob | 500 | | | |
| 5 | Alice | 600 | 6 | Alice | 700 |
| 6 | Alice | 700 | | | |
+--------------+---------------------+-----------------+------+------------+---------+
That will help us filter for the highest total for each purchase with no grouping needed:
WHERE p.total IS NULL
+--------------+--------------------+-----------------+------+------------+---------+
| purchases.id | purchases.customer | purchases.total | p.id | p.customer | p.total |
+--------------+--------------------+-----------------+------+------------+---------+
| 2            | Tom                | 300             |      |            |         |
| 4            | Bob                | 500             |      |            |         |
| 6            | Alice              | 700             |      |            |         |
+--------------+--------------------+-----------------+------+------------+---------+
And that's the answer we need.
This solution is not very efficient, as pointed out by Erwin, because of the presence of subqueries:
select * from purchases p1 where total in
(select max(total) from purchases where p1.customer=customer) order by total desc;
I use this approach (PostgreSQL only): https://wiki.postgresql.org/wiki/First/last_%28aggregate%29
-- Create a function that always returns the first non-NULL item
CREATE OR REPLACE FUNCTION public.first_agg ( anyelement, anyelement )
RETURNS anyelement LANGUAGE sql IMMUTABLE STRICT AS $$
SELECT $1;
$$;
-- And then wrap an aggregate around it
CREATE AGGREGATE public.first (
sfunc = public.first_agg,
basetype = anyelement,
stype = anyelement
);
-- Create a function that always returns the last non-NULL item
CREATE OR REPLACE FUNCTION public.last_agg ( anyelement, anyelement )
RETURNS anyelement LANGUAGE sql IMMUTABLE STRICT AS $$
SELECT $2;
$$;
-- And then wrap an aggregate around it
CREATE AGGREGATE public.last (
sfunc = public.last_agg,
basetype = anyelement,
stype = anyelement
);
Then your example should work almost as is:
SELECT FIRST(id), customer, FIRST(total)
FROM purchases
GROUP BY customer
ORDER BY FIRST(total) DESC;
CAVEAT: It ignores NULL values (the STRICT functions skip them).
Edit 1 - Use the postgres extension instead
Now I use this way: http://pgxn.org/dist/first_last_agg/
To install on Ubuntu 14.04:
apt-get install postgresql-server-dev-9.3 git build-essential -y
git clone git://github.com/wulczer/first_last_agg.git
cd first_last_agg
make && sudo make install
psql -c 'create extension first_last_agg'
It's a Postgres extension that provides first and last functions; apparently faster than the above way.
Edit 2 - Ordering and filtering
If you use aggregate functions (like these), you can order the results, without the need to have the data already ordered:
http://www.postgresql.org/docs/current/static/sql-expressions.html#SYNTAX-AGGREGATES
So the equivalent example, with ordering would be something like:
SELECT first(id order by id), customer, first(total order by id)
FROM purchases
GROUP BY customer
ORDER BY first(total);
Of course you can order and filter as you deem fit within the aggregate; it's very powerful syntax.
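For instance, ordering and filtering can both live inside the aggregate call; a sketch (FILTER requires Postgres 9.4+, and the WHERE condition here is just an illustrative assumption):
SELECT first(id ORDER BY total DESC) FILTER (WHERE total IS NOT NULL) AS id
     , customer
     , max(total) AS total
FROM purchases
GROUP BY customer;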
Use ARRAY_AGG function for PostgreSQL, U-SQL, IBM DB2, and Google BigQuery SQL:
SELECT customer, (ARRAY_AGG(id ORDER BY total DESC))[1], MAX(total)
FROM purchases
GROUP BY customer
In SQL Server you can do this:
SELECT *
FROM (
SELECT ROW_NUMBER()
OVER(PARTITION BY customer
ORDER BY total DESC) AS StRank, *
FROM Purchases) n
WHERE StRank = 1
Explanation: the rows are partitioned by customer and ordered by total descending; each row within a partition gets a sequential number StRank, and we then keep only the rows whose StRank is 1, i.e. the top purchase per customer.
Very fast solution
SELECT a.*
FROM
purchases a
JOIN (
SELECT customer, min( id ) as id
FROM purchases
GROUP BY customer
) b USING ( id );
and it is really fast if the table is indexed by id:
create index purchases_id on purchases (id);
Snowflake/Teradata support the QUALIFY clause, which works like HAVING for window functions:
SELECT id, customer, total
FROM PURCHASES p
QUALIFY ROW_NUMBER() OVER(PARTITION BY p.customer ORDER BY p.total DESC) = 1
In PostgreSQL, another possibility is to use the first_value window function in combination with SELECT DISTINCT:
select distinct customer_id,
first_value(row(id, total)) over(partition by customer_id order by total desc, id)
from purchases;
I created a composite row value (id, total), so both values are returned by the same window function call. You can of course always apply first_value() twice.
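A sketch of the two-call variant, with a named window so the definition isn't repeated:
select distinct customer_id
     , first_value(id)    over w as id
     , first_value(total) over w as total
from   purchases
window w as (partition by customer_id order by total desc, id);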
This works for me:
SELECT article, dealer, price
FROM shop s1
WHERE price=(SELECT MAX(s2.price)
FROM shop s2
WHERE s1.article = s2.article
GROUP BY s2.article)
ORDER BY article;
It selects the highest price for each article.
This is how we can achieve it using a window function:
create table purchases (id int4, customer varchar(10), total integer);
insert into purchases values (1, 'Joe', 5);
insert into purchases values (2, 'Sally', 3);
insert into purchases values (3, 'Joe', 2);
insert into purchases values (4, 'Sally', 1);
select ID, CUSTOMER, TOTAL from (
select ID, CUSTOMER, TOTAL,
row_number () over (partition by CUSTOMER order by TOTAL desc) RN
from purchases) A where RN = 1;
OMG Ponies' accepted "Supported by any database" solution showed good speed in my test.
Here I provide the same approach, but as a more complete and clean any-database solution. Ties are considered (assume you want only one row for each customer, even if there are multiple records sharing the max total for that customer), and other purchase fields (e.g. purchase_payment_id) will be selected for the actual matching rows in the purchase table.
Supported by any database:
select * from purchase
join (
select min(id) as id from purchase
join (
select customer, max(total) as total from purchase
group by customer
) t1 using (customer, total)
group by customer
) t2 using (id)
order by customer
This query is reasonably fast especially when there is a composite index like (customer, total) on the purchase table.
Remarks:
t1, t2 are subquery aliases which could be removed, depending on the database.
Caveat: the USING (...) clause is not supported in MS SQL Server and Oracle as of this edit in Jan 2017. You have to expand it yourself, e.g. to ON t2.id = purchase.id etc. The USING syntax works in SQLite, MySQL and PostgreSQL.
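A hedged expansion for those databases (the aliases p and p2 are mine; the logic is identical to the USING version above):
select p.*
from purchase p
join (
    select min(p2.id) as id
    from purchase p2
    join (
        select customer, max(total) as total
        from purchase
        group by customer
    ) t1 on t1.customer = p2.customer and t1.total = p2.total
    group by p2.customer
) t2 on t2.id = p.id
order by p.customer;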
This is useful if you want to select any row (by some specific condition of your own) from the set of aggregated rows, or if you want to use another aggregation function (sum/avg) in addition to max/min, and thus cannot use DISTINCT ON. In that case you can use the following subquery:
SELECT
(
SELECT id FROM t2
WHERE id = ANY ( ARRAY_AGG( tf.id ) ) AND amount = MAX( tf.amount )
) id,
name,
MAX(amount) ma,
SUM( ratio )
FROM t2 tf
GROUP BY name
You can replace amount = MAX( tf.amount ) with any condition you want, with one restriction: the subquery must not return more than one row.
But if you want to do such things, you are probably looking for window functions.
For SQL Server the most efficient way is:
with
ids as ( -- condition to split the table into groups
select i from (values (9),(12),(17),(18),(19),(20),(22),(21),(23),(10)) as v(i)
)
,src as (
select * from yourTable where <condition> -- use this as a filter for other conditions
)
,joined as (
select tops.* from ids
cross apply -- runs the inner query once for each row of ids
(
select top(1) *
from src
where CommodityId = ids.i
order by <ordering column> desc -- needed to make top(1) deterministic
) as tops
)
select * from joined
And don't forget to create a clustered index for the columns used (a sketch below).
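A sketch of such an index (the name is hypothetical; index the column the CROSS APPLY filters on, plus whatever column orders the top(1) pick):
create clustered index IX_yourTable_CommodityId on yourTable (CommodityId);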
This can be achieved easily with the MAX function on total and GROUP BY id and customer.
SELECT id, customer, MAX(total) FROM purchases GROUP BY id, customer
ORDER BY total DESC;
My approach via window functions (dbfiddle):
Assign a row number within each group: row_number() over (partition by agreement_id, order_id) as nrow
Take only the first row of each group: filter (where nrow = 1)
with intermediate as (select
*,
row_number() over ( partition by agreement_id, order_id ) as nrow,
(sum( suma ) over ( partition by agreement_id, order_id ))::numeric( 10, 2 ) as order_suma
from <your table>)
select
*,
sum( order_suma ) filter (where nrow = 1) over (partition by agreement_id)
from intermediate

select the first time three unique values appear with sql

Using the table below as an example, I need to select all fields from a table where the first 3 columns are exactly the same, taking the first time such an instance appears. For example, rows 1, 3 and 4 should be selected, as they have differing values in the first 3 columns. I have been given this data, and there is no unique ID. There are about 25,000 records, so handling this in Python after SELECTing all the data seems silly; the only methods I can think of are deleting the records that are nearly identical, or using a SELECT statement I have not worked out yet. Would it be better to select the data in small amounts and use Python to pick the correct bits? While that is messier, I know how to do it that way.
ID | Class | Season | Grade
---|-------|--------|---------
1 | x | 1 | A
1 | x | 1 | A*
1 | y | 1 | A
1 | x | 2 | C
Try using DISTINCT *; it means "select all columns and skip any rows where the values in all columns match some already-included row".
So with LIMIT 3 you will get the first 3 unique rows:
SELECT distinct * FROM yourTable LIMIT 3;
You want the first three unique rows. You can actually do this pretty easily if you have an ordering column:
select t.*
from (select t.*,
row_number() over (partition by id, class, season order by <orderingcol>) as seqnum
from t
) t
where seqnum = 1
order by <orderingcol>
limit 3;
Actually, the subquery is not necessary, but the query is a bit more inscrutable without it:
select t.*
from t
order by row_number() over (partition by id, class, season order by <orderingcol>),
         <orderingcol>
limit 3;
The one caveat is that this will return duplicates if there are not three unique ones.
Window functions were introduced in MySQL 8+. This could be phrased in earlier versions of MySQL as well:
select t.*
from t join
     (select id, class, season, min(<ordering col>) as min_oc
      from t
      group by id, class, season
     ) tt
     using (id, class, season)
where t.<ordering col> = tt.min_oc
order by tt.min_oc
limit 3;

Get the greatest Year value in mysql after grouping by a column

The table below contains an id, a Year and a Groups column.
GroupingTable
id | Year | Groups
1 | 2000 | A
2 | 2001 | B
3 | 2001 | A
Now I want to select the greatest Year even after grouping by the Groups column:
SELECT
id,
Year,
Groups
FROM
GroupingTable
GROUP BY
`Groups`
ORDER BY Year DESC
And below is what I am expecting, even though the query above doesn't work as expected:
id | Year | Groups
2 | 2001 | B
3 | 2001 | A
You need to learn how to use aggregate functions.
SELECT
MAX(Year) AS Year,
Groups
FROM
GroupingTable
GROUP BY
`Groups`
ORDER BY Year DESC
When using GROUP BY, only the column(s) you group by are unambiguous, because they have the same value on every row of the group.
Other columns return a value arbitrarily taken from one of the rows in the group. This is actually the behavior of MySQL (and SQLite), but because of the ambiguity, it's an illegal query in standard SQL and in all other brands of SQL implementation.
For more on this, see my answer to Reason for Column is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause
Your query misuses the heinously confusing nonstandard extension to GROUP BY that's built in to MySQL. Read this and weep. https://dev.mysql.com/doc/refman/5.7/en/group-by-handling.html
If all you want is the year it's a snap.
SELECT MAX(Year) Year, Groups
FROM GroupingTable
GROUP BY Groups
If you want the id of the row in question, you have to do a bunch of monkey business to retrieve the column id from the above query.
SELECT a.*
FROM GroupingTable a
JOIN (
SELECT MAX(Year) Year, Groups
FROM GroupingTable
GROUP BY Groups
) b ON a.Groups = b.Groups AND a.Year = b.Year
You have to do this because the GROUP BY query yields a summary result set, and you have to join that back to the detail result set to retrieve the ID.

MYSQL - SELECT all rows, but use Distinct on one column

I have a table with data like this:
Column_a | Column_b
a | 5
b | 25
g | 14
t | 13
b | 15
c | 04
g | 15
b | 13
In column_a I have a lot of duplicate values. I want to be able to select all the rows from the table, but if two rows have the same column_a value, I want only the row with the biggest value of column_b in the results.
Example of the result that I am looking for:
Column_a | Column_b
a | 5
b | 25
t | 13
c | 04
g | 15
Thank you in advance.
Update of the question
These are the columns I have in my table:
CRMID | user | ticket_id | description | date | hour
What I am trying to do is select all the rows from the table, but when two rows have the same ticket_id, I want only the newest one to appear in the results, i.e. the row with the newest date and hour.
Sorry for making this so complicated! I am not a native English speaker and I find it hard to explain the problem well.
If you group by column_a, then you can use aggregate functions like max() on it to get the maximum value of each group:
select column_a,
max(column_b) as column_b
from your_table
group by column_a
SELECT Column_A, MAX(Column_B) FROM table
GROUP BY Column_A
You're looking for a Group By clause. Your syntax should look similar to this:
SELECT Column_A, MAX(Column_B)
FROM Table
GROUP BY Column_A
If you want to get all the columns in the table, then you have a different problem (and one not in the original posting). One reason you should add code to such a question is so you get a broader range of answers. I, for one, ignored the question, thinking it was just a newbie asking about obvious SQL functionality.
In MySQL the best approach is to use not exists:
select t.*
from table t
where not exists (select 1
from table t2
where t2.column_a = t.column_a and
t2.column_b > t.column_b
);
For optimal performance, you want an index on table(column_a, column_b). Also, this can return multiple rows, if there are duplicated maximum values.
This query is not intuitive. What it is doing is: "Get me all rows from the table where there is no other row with the same column_a value and a higher column_b value". If you think about it, this is the same as getting the maximum value. This has better performance than other methods (notably, aggregation and join), because MySQL does a simple index lookup for each row in the table. That is faster than aggregation and join.
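A sketch of that index (your_table is a stand-in name here, since the example above uses a placeholder table):
create index idx_your_table_a_b on your_table (column_a, column_b);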
select * from (select * from yourtable order by column_b desc) t group by column_a

Mode calculation without a subquery field in MySQL?

In my application, each product group has many products, and each product has one manufacturer. These relations are stored by MySQL in InnoDB tables product_groups with an id field, and products with id, product_group and manufacturer fields.
Is there a way to find the most common manufacturer in each product group, without resorting to subqueries in the SELECT list?
This is how I'm doing it currently:
SELECT product_groups.id,
(
SELECT manufacturer FROM products
WHERE product_group = product_groups.id
GROUP BY manufacturer
ORDER BY count(*) DESC
LIMIT 1
) manufacturer_mode
FROM product_groups;
Try this solution:
SELECT
a.product_group,
SUBSTRING_INDEX(GROUP_CONCAT(a.manufacturer ORDER BY a.occurrences DESC SEPARATOR ':::'), ':::', 1) AS manufacturer_mode
FROM
(
SELECT
aa.product_group,
aa.manufacturer,
COUNT(*) AS occurrences
FROM
products aa
GROUP BY
aa.product_group,
aa.manufacturer
) a
GROUP BY
a.product_group
Explanation:
This still uses a form of subquery, but one which executes only once as opposed to one that executes on a row-by-row basis such as in your original example.
It works by first selecting the product_group id, the manufacturer, and the count of how many times the manufacturer appears for each particular group.
The FROM sub-select will look something like this after execution (just making up data here):
product_group | manufacturer | occurrences
---------------------------------------------------
1 | XYZ | 4
1 | Test | 2
1 | Singleton | 1
2 | Eloran | 2
2 | XYZ | 1
Now that we have the sub-select result, we need to pick out the row that has the maximum in the occurrences field for each product group.
In the outer query, we group the subselect once again by the product_group field, but this time only by the product_group field. Now when we do our GROUP BY here, we can use a really useful MySQL function called GROUP_CONCAT, which lets us concatenate the manufacturers together in any order we want.
...GROUP_CONCAT(a.manufacturer ORDER BY a.occurrences DESC SEPARATOR ':::'...
What we are doing here is concatenating the manufacturers that are grouped together per product_group id; the ORDER BY a.occurrences DESC makes sure that the manufacturer with the most appearances comes first in the concatenated list. Finally, we separate each manufacturer with :::. The result of this for product_group 1 will look like:
XYZ:::Test:::Singleton
XYZ appears first since it has the highest value in the occurrences field. We only want to select XYZ, so we wrap the concatenation in SUBSTRING_INDEX, which lets us pick just the first element of the list based on the ::: delimiter.
The end result will be:
product_group | manufacturer_mode
---------------------------------------
1 | XYZ
2 | Eloran
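One caveat worth knowing: GROUP_CONCAT truncates its result to group_concat_max_len bytes (1024 by default), which would silently break the SUBSTRING_INDEX pick for product groups with very many manufacturers. Raising the limit for the session is the usual workaround:
SET SESSION group_concat_max_len = 1000000;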