I have a subquery that aggregates several UNION ALL selects. On top of that I prepare a SELECT that builds a cross-tab and limits it to, say, 20 rows. I would like to retrieve the total COUNT of the subquery results before the main query limits them. The purpose is to build pagination that receives the total number of records and then the record grid for the specific page.
Sample query:
SELECT
name,
sumIf(metric_value, metric_name = 'data') AS data,
sumIf(....
FROM
(SELECT
name, metric_name, SUM(metric_value) as metric_value
FROM
(SELECT
name, 'data' AS metric_name, SUM(data) AS metric_value
FROM
table
WHERE
date > '2017-01-01 00:00:00'
GROUP BY
name
UNION ALL
SELECT
name, 'data' AS metric_name, SUM(data) AS metric_value
FROM
table2
WHERE
date > '2017-01-01 00:00:00'
GROUP BY
name
UNION ALL
SELECT
name, 'data' AS metric_name, SUM(data) AS metric_value
FROM
table3
WHERE
date > '2017-01-01 00:00:00'
GROUP BY
name
UNION ALL
.
.
.)
GROUP BY
name, metric_name)
GROUP BY
name
ORDER BY
name ASC
LIMIT 0,20;
The first subselect returns tons of data, so I thought I could count it and return the count as one column value, or as a row, and it would propagate to the main select that limits to 20 results. I need to know the size of the entire result set but don't want to run the same query twice, once without the limit just to get the COUNT and once with it. There are at least 12 UNION ALL third-level subselects, so why waste resources. I am looking for generic SQL solutions, not necessarily ClickHouse-specific ones.
I was thinking of using count(*) OVER (), but that is not supported, so if that's the only option I know I need to run the query twice.
The first thing to mention is that hardly anyone is interested in the exact number of pages for a query. It can easily be estimated, and almost no one will care how exact the estimate is. However, if your GUI has a link to the last page, people will often click it just to see whether it works.
Nevertheless, there are cases when an analyst must visit all the pages, and then the GUI should display the exact amount of work. The good news is that in that case a better strategy is to cache a snapshot of the whole results table, and counting its rows is no longer a problem.
In other words, it makes sense to discuss with the customers whether they really need this, because unneeded full scans many times per day can affect database load and billing.
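To make the snapshot idea concrete, here is a minimal ClickHouse sketch (report_snapshot is a hypothetical name, and the inner SELECT stands in for the full cross-tab query from the question, shortened to a single source table and run without the LIMIT):
CREATE TABLE report_snapshot ENGINE = MergeTree() ORDER BY name AS
SELECT
    name,
    sumIf(metric_value, metric_name = 'data') AS data
FROM
    (SELECT
        name, 'data' AS metric_name, SUM(data) AS metric_value
    FROM
        table
    WHERE
        date > '2017-01-01 00:00:00'
    GROUP BY
        name)
GROUP BY
    name;

-- exact total for the pager
SELECT count() FROM report_snapshot;
-- page 1
SELECT * FROM report_snapshot ORDER BY name LIMIT 0,20;
Page requests then read from the snapshot table, so the expensive UNION ALL work runs once per refresh instead of once per page.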
Anyway, if you still need to estimate the number of rows, you can strip the query down to just counting. As I understand it, that is something like:
SELECT SUM(cnt) as row_count
FROM (
SELECT COUNT(DISTINCT name) as cnt FROM table1 WHERE date > ...
UNION ALL
SELECT COUNT(DISTINCT name) as cnt FROM table2 WHERE date > ...
...
) as counts;
or, if 'data' is the only metric name (so the cross-tab has one row per distinct name):
SELECT COUNT(DISTINCT name) as row_count
FROM (
SELECT DISTINCT name FROM table1 WHERE date > ...
UNION ALL
SELECT DISTINCT name FROM table2 WHERE date > ...
...
) as names;
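If you really do want a single statement, a CROSS JOIN against the count can attach the total to every returned row. This is my own sketch, simplified to two of the unioned tables; note the UNION ALL is still evaluated twice under the hood, so it saves a round trip rather than work:
SELECT p.name, p.data, t.total_rows
FROM
    (SELECT name, SUM(data) AS data
    FROM
        (SELECT name, data FROM table
        UNION ALL
        SELECT name, data FROM table2) u
    GROUP BY name
    ORDER BY name
    LIMIT 0,20) p
CROSS JOIN
    (SELECT COUNT(DISTINCT name) AS total_rows
    FROM
        (SELECT name FROM table
        UNION ALL
        SELECT name FROM table2) n) t;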
I have read through quite a few greatest-n-per-group posts but still can't find a solution that performs well. I'm running 10.1.43-MariaDB.
I'm trying to get the change in data values over a given time frame, so I need the earliest and the latest row from that period. The largest number of rows in a time frame that needs to be calculated right now is around 700k, and it's only going to grow. For now I have just resorted to doing two queries, one for the latest and one for the earliest date, but even this currently performs slowly. The table looks like this:
user_id data date
4567 109 28/06/2019 11:04:45
4252 309 18/06/2019 11:04:45
4567 77 18/02/2019 11:04:45
7893 1123 22/06/2019 11:04:45
4252 303 11/06/2019 11:04:45
4252 317 19/06/2019 11:04:45
The date and user_id columns are indexed. Without an ORDER BY the rows aren't in any particular order in the database, if that makes a difference.
The furthest I have gotten with this issue is a query like this, currently over a year-long period (700k datapoints):
SELECT user_id,
MIN(date) as date, data
FROM datapoint_table
WHERE date >= '2019-01-14'
GROUP BY user_id
This gives me the right date and user_id very fast, in around ~0.05s. But as is the common issue with greatest-n-per-group, the rest of the row (data in this case) does not come from the same row as the date. I have read other similar questions and tried a subquery like this:
SELECT a.user_id, a.date, a.data
FROM datapoint_table a
INNER JOIN (
SELECT datapoint_table.user_id,
MIN(date) as date, data
FROM datapoint_table
WHERE date >= '2019-01-01'
GROUP BY user_id
) b ON a.user_id = b.user_id AND a.date = b.date
This query takes around 15s to complete and gets the correct data value. 15s, though, is just way too long, and I must be doing something wrong when the first query is so fast. I also tried MAX(data) - MIN(data) with GROUP BY user_id, but that was also slow.
What would be a more efficient way of getting the data value from the same row as the date, or even the difference between the latest and earliest data for each user?
Assuming you are using a fairly recent version of either MariaDB or MySQL, then ROW_NUMBER would probably be the most efficient way to find the earliest record for each user:
WITH cte AS (
SELECT *, ROW_NUMBER() OVER (PARTITION BY user_id ORDER BY date) rn
FROM datapoint_table
WHERE date > '2019-01-14'
)
SELECT user_id, data, date
FROM cte
WHERE rn = 1;
To the above you could also consider adding the following index:
CREATE INDEX ix_user_date ON datapoint_table (user_id, date);
You could also try the following variant index with the columns reversed:
CREATE INDEX ix_date_user ON datapoint_table (date, user_id);
It is not clear which version of the index would perform the best, which would depend on your data and the execution plan. Ideally one of the above two indices would help the database execute ROW_NUMBER, along with the WHERE clause.
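Since the question also asks for the difference between the latest and earliest data per user, here is a sketch extending the same idea with two row numbers, so both ends of the period come back in one scan (again assuming window-function support, i.e. MariaDB 10.2+/MySQL 8+; the alias data_change is my own):
WITH cte AS (
    SELECT user_id, data, date,
        ROW_NUMBER() OVER (PARTITION BY user_id ORDER BY date ASC) rn_first,
        ROW_NUMBER() OVER (PARTITION BY user_id ORDER BY date DESC) rn_last
    FROM datapoint_table
    WHERE date > '2019-01-14'
)
SELECT user_id,
    MAX(CASE WHEN rn_last = 1 THEN data END) -
    MAX(CASE WHEN rn_first = 1 THEN data END) AS data_change
FROM cte
WHERE rn_first = 1 OR rn_last = 1
GROUP BY user_id;
For a user with a single row in the range, data_change comes out as 0, since the same row is both first and last.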
If your database version does not support ROW_NUMBER, then you may continue with your current approach:
SELECT d1.user_id, d1.data, d1.date
FROM datapoint_table d1
INNER JOIN
(
SELECT user_id, MIN(date) AS min_date
FROM datapoint_table
WHERE date > '2019-01-14'
GROUP BY user_id
) d2
ON d1.user_id = d2.user_id AND d1.date = d2.min_date
WHERE
d1.date > '2019-01-14';
Again, the indices suggested should at least speed up the execution of the GROUP BY subquery.
I have a table like this:
(screenshot in the original post: a sample table with ID, NAME, and AGE columns)
I need to get only the rows whose AGE > 10, and along with that I need the total number of records present in the table, i.e. in this example 4 records. What I need is a single query that returns both the total number of records in the table and the columns I query.
The query would be something like:
SELECT ID, NAME, count(TOTAL NUMBER OF RECORDS IN TABLE) as Count from MYTABLE WHERE AGE > 10
Any idea about this?
You can use a subquery in the FROM clause:
SELECT ID, NAME, c.cnt as Count
FROM MYTABLE CROSS JOIN
(SELECT COUNT(*) as cnt FROM MYTABLE) c
WHERE AGE > 10 ;
Both databases support window functions, but they are not really helpful here, because the count is not filtered in the same way as the outer query. If you do want the filter for both, then in the most recent versions you can do:
SELECT ID, NAME, COUNT(*) OVER () as cnt
FROM MYTABLE
WHERE AGE > 10 ;
You can try the below, using a scalar subquery:
SELECT ID, NAME, age,(select count(*) from mytable WHERE AGE > 10) as Count
from MYTABLE
WHERE AGE > 10
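Note that the scalar subquery above repeats the AGE > 10 filter, so it returns the filtered count. If you want the total number of records in the table, as the question asks, drop the inner WHERE; a minimal variant:
SELECT ID, NAME, age, (select count(*) from mytable) as Count
from MYTABLE
WHERE AGE > 10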
As the title indicates, I am trying to find the maximum summed value in column C for an object in column A based on a subset of column B over a period of time (let's say column D). My current query looks something like this, in which I return the summed values greater than 10,000:
select id_a, id_b, sum(column_c)
from master_table
where id_b in (1,2,3,4,5)
  and ymdh >= '2017-11-01'
group by 1, 2
having sum(column_c) > 10000
order by 2, 3 desc;
What I'm trying to get returned is the greatest value of sum(column_c). I tried using both the max() and distinct() functions, specifically max(sum(imps)), but aggregate function calls may not be nested. Would anyone be able to provide guidance here?
You can use a derived table, FROM ( select ... ) T:
select max(my_sum)
from (
select id_a
, id_b
, sum(column_c) my_sum
from master_table
where id_b in (1,2,3,4,5)
and ymdh >= '2017-11-01'
group by 1,2 having sum(column_c) > 10000 -- use the aggregate itself; an alias in HAVING is not portable
) T
-- (the ORDER BY inside the derived table was unnecessary, and its trailing semicolon broke the statement)
Does this do what you want?
select id_a, id_b, sum(column_c)
from master_table
where id_b in (1,2,3,4,5) and
ymdh >= '2017-11-01'
group by id_a, id_b
having sum(column_c) > 10000
order by sum(column_c) desc
limit 1;
That is, use order by and limit to get the value you want. (This query includes the group by keys as well, but that is not necessary.)
scaisEdge has the answer (and my +1) - but I just wanted to add a bit about the thought process when designing an SQL statement like you're working on.
Don't feel you need to compose the whole thing - that it's one big statement, or that it's one single query.
Instead, you'll often need to break up the problem into steps, solve the individual steps, and then use those steps as sources for a query - because you don't have to use tables in the FROM clause; you can use your own subqueries instead.
So for this problem? You've got the first step done - you figured out how to write the query that gets the Sum over a particular grouping:
select someCol, sum(otherCol) as groupSum from myTable
group by someCol
Great! Now, you can effectively use this like it's a table:
select someCol, groupSum
from
(
select someCol, sum(otherCol) as groupSum from myTable
group by someCol
) mySubquery
And in your case, you want to get the maximum sum?
select max(groupSum)
from
(
select someCol, sum(otherCol) as groupSum from myTable
group by someCol
) mySubquery
Not only will this help while composing the full SQL statement, it'll actually help the person trying to read/debug it down the line, especially if you name your subqueries/columns well:
select max(totalHitsForWeek) as maxWeeklyUsage
from
(
select week, sum(hits) as totalHitsForWeek
from requestsTable
group by week
) hitsPerWeekSubquery
Hope that helps add to scaisEdge's answer! :-)
I have the following query:
SELECT t.ID, t.caseID, time
FROM tbl_test t
INNER JOIN (
SELECT ID, MAX( TIME )
FROM tbl_test
WHERE TIME <=1353143351
GROUP BY caseID
ORDER BY caseID DESC -- ERROR HERE!
) s
USING (ID)
It seems that I only get the correct result if I use the ORDER BY in the inner join. Why is that? I am using the ID for the join, so the order should have no effect.
If I remove the ORDER BY, I get entries that are too old from the database.
ID is the primary key; caseID identifies a kind of object that has multiple entries with different timestamps.
This query is ambiguous:
SELECT ID, MAX( TIME )
FROM tbl_test
WHERE TIME <=1353143351
GROUP BY caseID
It's ambiguous because it does not guarantee that it returns the ID of the row where the MAX(TIME) occurs. It returns the MAX(TIME) for each distinct value of caseID, but the value of other columns (like ID) is chosen arbitrarily from members of the group.
In practice, MySQL chooses the row that it finds first in the group as it scans rows in storage order.
Example:
caseID ID time
1 10 15:00
1 12 18:00
1 14 13:00
The max time is 18:00, which is the row with ID 12. But the query will return ID 10, simply because it's the first one in the group. If you were to reverse the order with ORDER BY, it would return ID 14. Still not the row where the max time is found, but it's from the other end of the group of rows.
Your query works with ORDER BY caseID DESC because, by coincidence, your Time values increase with the increasing ID.
This sort of query is actually an error in standard SQL and in most other brands of SQL database. MySQL permits it (unless the ONLY_FULL_GROUP_BY SQL mode is enabled, which rejects it there too), trusting that you know how to form an unambiguous query.
The fix is to use columns in the select-list only if they are unambiguous, that is, only if they appear in the GROUP BY clause, so that each group is guaranteed to have a single distinct value for them:
SELECT caseID, MAX( TIME )
FROM tbl_test
WHERE TIME <=1353143351
GROUP BY caseID
Then join that result back to the table to fetch the matching rows:
SELECT t.ID, t.caseID, time
FROM tbl_test t
INNER JOIN (
SELECT caseID, MAX( TIME ) maxtime
FROM tbl_test
WHERE TIME <=1353143351
GROUP BY caseID
) s
ON t.caseID = s.caseID and t.time = s.maxtime
You are seeing that issue because you are getting the MAX(TIME) per caseID, but since you are grouping by caseID and NOT by ID, you get an arbitrary ID. When you use an aggregate function like MAX, every non-grouped field in the SELECT should specify how it is to be aggregated: if a column is in the SELECT and NOT in the GROUP BY, you have to tell MySQL how to aggregate it. If you don't, you get a random row (well, not random per se, but not in an order you can rely on).
The reason ORDER BY works for you is that it effectively tricks the query optimizer into sorting the results before grouping, which just happens to produce the result you want; be warned that this will not always be the case.
What you want is the ID that has the MAX(TIME) for a given caseID, which means your INNER JOIN needs to match on caseID (not ID) and time (which gives you one row for each row in the outer table).
Barmar beat me to the actual query, but that's the way you want to go.
I would like to determine two things from a single query:
Most prevalent column in a table
The amount of times such column was located upon querying the table
Example Table:
user_id some_field
1 data
2 data
1 data
The above would return user_id # 1 as being the most prevalent in the table, and it would return (2) for the total number of times it was found in the table.
I have done my research and I came across two types of queries.
GROUP BY user_id ORDER BY COUNT(*) DESC
SUM
The problem is that I can't figure out how to use these two queries in conjunction with one another. For example, consider the following query, which successfully returns the most prevalent user_id.
$top_user = "SELECT user_id FROM table_name GROUP BY user_id ORDER BY COUNT(*) DESC";
The above query returns "1" based on the example table shown above. Now, I would like to be able to return "2" for the total number of times user_id (1) was found in the table.
Is this by any chance possible?
Thanks,
Evan
You can include count(*) in the SELECT list:
SELECT user_id, count(*) as totaltimes from table_name
GROUP BY user_id ORDER BY count(*) DESC;
If you want only the first one:
SELECT user_id, count(*) as totaltimes from table_name
GROUP BY user_id ORDER BY count(*) DESC LIMIT 1;
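If several user_ids can tie for the top count and you want all of them, one variant (my own sketch, standard GROUP BY/HAVING plus LIMIT) compares each group's count against the maximum:
SELECT user_id, count(*) as totaltimes from table_name
GROUP BY user_id
HAVING count(*) = (SELECT count(*) from table_name
                   GROUP BY user_id
                   ORDER BY count(*) DESC LIMIT 1);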