I currently have a MariaDB database that gets populated every day with different products (around 800) and also gets the price updates for these products.
I've created a view on top of the prices/products tables that generates statistics such as the minimum, maximum, mean, and mode for the last 7, 15 and 30 days, and calculates the difference between today's price and the 7-, 15- and 30-day averages.
The problem is that whenever I run this view it takes almost 50 seconds to generate the data. I saw some comments about switching over to a calculated table, in which the calculations would be updated when new data is entered into the table. However, I'm quite skeptical about doing that, as I insert around 1000 price points at one specific time of day, which would impact all the calculations in the table. Does a calculated table update only the rows that changed, or would it recalculate everything? I'm worried about the overhead this might cause (memory is not an issue on the server).
I've pasted the products and prices tables and the view to DBFiddle, here: https://dbfiddle.uk/?rdbms=mariadb_10.2&fiddle=4cf594a85f950bed34f64d800601baa9
Calculations can be seen for product code 22141.
Just to give an idea these are some of the calculations done by the view (available on the fiddle as well):
ROUND((((SELECT preconormal
FROM precos
WHERE codigowine = vinhos.codigowine
AND timestamp >= CURRENT_DATE - INTERVAL 9 HOUR) / (SELECT AVG(preconormal)
FROM precos
WHERE codigowine = vinhos.codigowine
AND timestamp >= CURRENT_DATE - INTERVAL 7 DAY) - 1) * 100), 2) as dif_7_dias,
ROUND((((SELECT preconormal
FROM precos
WHERE codigowine = vinhos.codigowine
AND timestamp >= CURRENT_DATE - INTERVAL 9 HOUR) / (SELECT AVG(preconormal)
FROM precos
WHERE codigowine = vinhos.codigowine
AND timestamp >= CURRENT_DATE - INTERVAL 15 DAY) - 1) * 100), 2) as dif_15_dias,
ROUND((((SELECT preconormal
FROM precos
WHERE codigowine = vinhos.codigowine
AND timestamp >= CURRENT_DATE - INTERVAL 9 HOUR) / (SELECT AVG(preconormal)
FROM precos
WHERE codigowine = vinhos.codigowine
AND timestamp >= CURRENT_DATE - INTERVAL 30 DAY) - 1) * 100), 2) as dif_30_dias
If switching to a calculated table, is there an optimal way to do this?
A "calculated table" isn't a MySQL / MariaDB feature, so I assume you mean another table, derived from your raw data, that you use when you need those statistics.
You say the table is "populated every day...". Do you mean it's reloaded from scratch, or that 800 more rows are added? And by "every day", do you mean at a particular time of day, or ongoing throughout the day?
Do you always have to select all rows from your view, or can you sometimes do SELECT columns FROM view WHERE something = 'constant';? This matters because optimization techniques differ between the all-rows case and the few-rows case.
How can you handle this problem efficiently?
You could work to optimize the query used to define your view, making it faster. That is very likely a good approach.
MariaDB has a type of column known as a Persistent Computed Column. These are computed when rows are INSERTED or UPDATED. Then they are available for quick reference. But they have limitations; they cannot be defined with subqueries.
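For illustration, the syntax looks like this; the preco_com_iva column and the 1.23 multiplier are made-up examples, not part of the original schema:

```sql
-- Persistent (stored) computed column: the expression may only reference
-- other columns of the same row -- no subqueries, so no cross-row averages.
ALTER TABLE precos
    ADD preco_com_iva DECIMAL(10,2) AS (preconormal * 1.23) PERSISTENT;
```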
You could define an EVENT (a scheduled SQL job) to do the following.
Create a new, empty, "calculated" table with a name like tbl_new.
Use your (slow) view to insert the rows it needs.
Roll over your tables, so the new one replaces the current one and you keep a couple of older ones. The multi-table RENAME is atomic, so readers never see a moment where tbl is missing.
DROP TABLE IF EXISTS tbl_old_2;
RENAME TABLE tbl_old TO tbl_old_2, tbl TO tbl_old, tbl_new TO tbl;
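A minimal sketch of such an EVENT, assuming the table names above and a placeholder view name vw_stats; note the event scheduler must be enabled (SET GLOBAL event_scheduler = ON):

```sql
-- Nightly rebuild-and-rollover; schedule it shortly after the daily price load.
CREATE EVENT ev_rebuild_stats
ON SCHEDULE EVERY 1 DAY
STARTS CURRENT_DATE + INTERVAL 1 DAY + INTERVAL 3 HOUR  -- pick a time after the load
DO
BEGIN
    DROP TABLE IF EXISTS tbl_new;
    CREATE TABLE tbl_new LIKE tbl;              -- same structure as the live table
    INSERT INTO tbl_new SELECT * FROM vw_stats; -- materialize the (slow) view once
    DROP TABLE IF EXISTS tbl_old_2;
    RENAME TABLE tbl_old TO tbl_old_2, tbl TO tbl_old, tbl_new TO tbl;
END
```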
That's a whole boatload of correlated subqueries, crying out for appropriate indexing.
For a reasonable number of rows being returned by the query, the correlated subqueries can give reasonable performance. But if the outer query is returning thousands of rows, that will be thousands of executions of the subqueries.
I would tend to avoid running multiple SELECTs against the same table to get the last 7 days, the last 15 days and the last 30 days, and then repeating all of that to get AVG, again for MAX, and again for MIN.
Instead, I would tend towards conditional aggregation, to get all of the stats (AVG, MAX, MIN) for all of the time periods (30, 15 and 7 days) in a single pass through the table.
... pause to note that views can be problematic for performance; predicates from the outer query may not get pushed down into the view query. We're not seeing what the whole view definition is doing, but I suspect we may be materializing a large set.
Consider a query like this:
SELECT ...
, ROUND( ( n.mal / a.avg_07_day - 1)*100 ,2) AS dif_7_dias
, ROUND( ( n.mal / a.avg_15_day - 1)*100 ,2) AS dif_15_dias
, ROUND( ( n.mal / a.avg_30_day - 1)*100 ,2) AS dif_30_dias
, ...
FROM vinhos
LEFT
JOIN ( SELECT h.codigowine
, AVG(IF( h.timestamp >= CURRENT_DATE + INTERVAL -30 DAY, h.preconormal, NULL)) AS avg_30_day
, MAX(IF( h.timestamp >= CURRENT_DATE + INTERVAL -30 DAY, h.preconormal, NULL)) AS max_30_day
, MIN(IF( h.timestamp >= CURRENT_DATE + INTERVAL -30 DAY, h.preconormal, NULL)) AS min_30_day
, AVG(IF( h.timestamp >= CURRENT_DATE + INTERVAL -15 DAY, h.preconormal, NULL)) AS avg_15_day
, MAX(IF( h.timestamp >= CURRENT_DATE + INTERVAL -15 DAY, h.preconormal, NULL)) AS max_15_day
, MIN(IF( h.timestamp >= CURRENT_DATE + INTERVAL -15 DAY, h.preconormal, NULL)) AS min_15_day
, AVG(IF( h.timestamp >= CURRENT_DATE + INTERVAL -7 DAY, h.preconormal, NULL)) AS avg_07_day
, MAX(IF( h.timestamp >= CURRENT_DATE + INTERVAL -7 DAY, h.preconormal, NULL)) AS max_07_day
, MIN(IF( h.timestamp >= CURRENT_DATE + INTERVAL -7 DAY, h.preconormal, NULL)) AS min_07_day
FROM precos h
GROUP
BY h.codigowine
HAVING h.codigowine IS NOT NULL
) a
ON a.codigowine = vinhos.codigowine
LEFT
JOIN ( SELECT s.codigowine
, MAX(s.preconormal) AS mal
, MIN(s.preconormal) AS mil
FROM precos s
WHERE s.timestamp >= CURRENT_DATE - INTERVAL 9 HOUR
GROUP
BY s.codigowine
HAVING s.codigowine IS NOT NULL
) n
ON n.codigowine = vinhos.codigowine
Consider the inline view query a.
Note that we can run that SELECT separately and get a resultset returned, like a result from a table. We expect this to do a single pass through the referenced table. There may be some predicates (conditions in the WHERE clause) that filter out rows, or enable us to make better use of an index. As currently written, the query could make use of an index with a leading column of codigowine to avoid a (potentially expensive) "Using filesort" operation to satisfy the GROUP BY.
I'm a bit confused by the - INTERVAL 9 HOUR subqueries. It looks to me like they could potentially return more than one row. There's no LIMIT clause (and no ORDER BY)... but it looks like we are expecting a single value (scalar), given the division operation.
Without an understanding of what we're trying to achieve there, not knowing the specification, I've wrapped my confusion up into another inline view n... not because this is necessarily what we want to do, but to illustrate (again) an inline view returning a resultset. Whatever value(s) we're trying to get from the - INTERVAL 9 HOUR subquery, I think we can return those as a set as well.
With all that said, we can now get around to answering the question that was asked: adding a "calculated table".
If we don't require up to the second results, but can work with cached statistics, I would be looking at materializing the resultset from inline view a into a table, and then re-writing the query above to replace the inline view a with a reference to the cache table.
CREATE TABLE calc_stats_n_days
( codigowine <datatype> PRIMARY KEY
, avg_30_day DOUBLE
, max_30_day DOUBLE
, min_30_day DOUBLE
, avg_15_day DOUBLE
, ...
For the initial population...
INSERT INTO calc_stats_n_days
( codigowine, avg_30_day, max_30_day, min_30_day, avg_15_day, ... )
SELECT h.codigowine
, AVG(IF( h.timestamp >= CURRENT_DATE + INTERVAL -30 DAY, h.preconormal, NULL)) AS avg_30_day
, MAX(IF( h.timestamp >= CURRENT_DATE + INTERVAL -30 DAY, h.preconormal, NULL)) AS max_30_day
, MIN(IF( h.timestamp >= CURRENT_DATE + INTERVAL -30 DAY, h.preconormal, NULL)) AS min_30_day
, AVG(IF( h.timestamp >= CURRENT_DATE + INTERVAL -15 DAY, h.preconormal, NULL)) AS avg_15_day
, ...
For ongoing sync, I'd probably create a temporary table, populate it with the same query, and then sync the temporary table to the target table: perhaps an INSERT ... ON DUPLICATE KEY UPDATE plus a DELETE anti-join (to remove stale rows).
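A sketch of that sync, abbreviated to the 30-day columns (extend the column lists for the 15- and 7-day stats); tmp_stats is a made-up staging-table name:

```sql
-- Staging table with the same shape as the cache table.
CREATE TEMPORARY TABLE tmp_stats LIKE calc_stats_n_days;

-- Recompute the statistics into the staging table.
INSERT INTO tmp_stats (codigowine, avg_30_day, max_30_day, min_30_day)
SELECT h.codigowine
     , AVG(h.preconormal)
     , MAX(h.preconormal)
     , MIN(h.preconormal)
  FROM precos h
 WHERE h.timestamp >= CURRENT_DATE - INTERVAL 30 DAY
 GROUP BY h.codigowine;

-- Upsert: insert new products, refresh stats for existing ones.
INSERT INTO calc_stats_n_days (codigowine, avg_30_day, max_30_day, min_30_day)
SELECT codigowine, avg_30_day, max_30_day, min_30_day
  FROM tmp_stats
ON DUPLICATE KEY UPDATE
       avg_30_day = VALUES(avg_30_day)
     , max_30_day = VALUES(max_30_day)
     , min_30_day = VALUES(min_30_day);

-- Anti-join delete: drop cached rows that no longer have recent price data.
DELETE c
  FROM calc_stats_n_days c
  LEFT JOIN tmp_stats t ON t.codigowine = c.codigowine
 WHERE t.codigowine IS NULL;
```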
Before considering other options, try to make the query more efficient. This is beneficial in the long term: even if you eventually move to a calculated table, you will still benefit from a more efficient refresh query.
Your query has 15-20 inline subqueries that all address the same dependent table (as far as I can tell) and do aggregate computations on the same column, precos(preconormal) (min, max, avg, most occurring value). Each metric is computed several times over a date range that varies from 9 hours back to 1 month back. So it goes:
SELECT
codigowine,
nomevinho,
DATE(timestamp) AS data_adc,
-- ...
/* Statistical measures for 7 days - min, max, mean and mode */
ROUND(
(
SELECT MIN(preconormal)
FROM precos
WHERE
codigowine = vinhos.codigowine
AND timestamp >= CURRENT_DATE - INTERVAL 7 DAY
),
2
) AS min_7_dias,
ROUND(
(
SELECT MAX(preconormal)
FROM precos
WHERE
codigowine = vinhos.codigowine
AND timestamp >= CURRENT_DATE - INTERVAL 7 DAY
),
2
) AS max_7_dias,
-- ... and so on ...
FROM vinhos
It seems like it could be more efficient to do all the computation at once, using conditional aggregation:
select
codigowine,
min(preconormal) min_30d,
max(preconormal) max_30d,
avg(preconormal) avg_30d,
min(case when timestamp >= current_date - interval 15 day then preconormal end) min_15d,
max(case when timestamp >= current_date - interval 15 day then preconormal end) max_15d,
avg(case when timestamp >= current_date - interval 15 day then preconormal end) avg_15d,
min(case when timestamp >= current_date - interval 7 day then preconormal end) min_07d,
max(case when timestamp >= current_date - interval 7 day then preconormal end) max_07d,
avg(case when timestamp >= current_date - interval 7 day then preconormal end) avg_07d
from precos
where timestamp >= current_date - interval 30 day
group by codigowine
For performance, you want an index on (codigowine, timestamp, preconormal).
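A sketch of that index (the name idx_precos_cov is arbitrary):

```sql
-- Covering index: equality/grouping on codigowine, range filter on timestamp,
-- and preconormal included so the aggregates are satisfied from the index alone.
CREATE INDEX idx_precos_cov ON precos (codigowine, timestamp, preconormal);
```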
Then you can join it with the original table:
select
v.nomevinho,
date(v.timestamp) data_adc,
p.*
from vinhos v
inner join (
select
codigowine,
min(preconormal) min_30d,
max(preconormal) max_30d,
avg(preconormal) avg_30d,
min(case when timestamp >= current_date - interval 15 day then preconormal end) min_15d,
max(case when timestamp >= current_date - interval 15 day then preconormal end) max_15d,
avg(case when timestamp >= current_date - interval 15 day then preconormal end) avg_15d,
min(case when timestamp >= current_date - interval 7 day then preconormal end) min_07d,
max(case when timestamp >= current_date - interval 7 day then preconormal end) max_07d,
avg(case when timestamp >= current_date - interval 7 day then preconormal end) avg_07d
from precos
where timestamp >= current_date - interval 30 day
group by codigowine
) p on p.codigowine = v.codigowine
This should be a sensible base query to build upon. To get the other computed values (most occurring value per period, latest value), you may add additional joins, or use inline queries.
To finish, here is another version of the base query that aggregates after the join. Depending on how your data is spread across the two tables, this may or may not be more efficient (and it will not be equivalent if there are duplicate codigowine values in table vinhos):
select
v.nomevinho,
date(v.timestamp) data_adc,
p.codigowine,
min(p.preconormal) min_30d,
max(p.preconormal) max_30d,
avg(p.preconormal) avg_30d,
min(case when p.timestamp >= current_date - interval 15 day then p.preconormal end) min_15d,
max(case when p.timestamp >= current_date - interval 15 day then p.preconormal end) max_15d,
avg(case when p.timestamp >= current_date - interval 15 day then p.preconormal end) avg_15d,
min(case when p.timestamp >= current_date - interval 7 day then p.preconormal end) min_07d,
max(case when p.timestamp >= current_date - interval 7 day then p.preconormal end) max_07d,
avg(case when p.timestamp >= current_date - interval 7 day then p.preconormal end) avg_07d
from vinhos v
inner join precos p
on p.codigowine = v.codigowine
and p.timestamp >= current_date - interval 30 day
group by v.codigowine, v.nomevinho, date(v.timestamp)
Looking at your query: try refactoring it to eliminate as many dependent subqueries as possible, JOINing to derived tables instead. Eliminating those dependent subqueries will make a vast performance difference.
Figuring the mode is an application of finding the detail record for an extreme value in a dataset. If you use this as a subquery
WITH freq AS (
SELECT COUNT(*) freq,
ROUND(preconormal, 2) preconormal,
codigowine
FROM precos
WHERE timestamp >= CURRENT_DATE - INTERVAL 7 DAY
GROUP BY ROUND(preconormal, 2), codigowine
),
most AS (
SELECT MAX(freq) freq,
codigowine
FROM freq
GROUP BY codigowine
),
mode AS (
SELECT GROUP_CONCAT(preconormal ORDER BY preconormal DESC) modeps,
freq.codigowine
FROM freq
JOIN most ON freq.freq = most.freq AND freq.codigowine = most.codigowine
GROUP BY freq.codigowine
)
SELECT * FROM mode
You can find the most frequent price for each item. The first CTE, freq, gets the prices and their frequencies.
The second CTE, most, finds the frequency of the most frequent price (or prices).
The third CTE, mode, extracts the most frequent prices from freq using a JOIN. It also uses GROUP_CONCAT() because it's possible to have more than one mode--most frequent price.
For your stats you can do this:
WITH s7 AS (
SELECT ROUND(MIN(preconormal), 2) minp,
ROUND(AVG(preconormal), 2) meanp,
ROUND(MAX(preconormal), 2) maxp,
codigowine
FROM precos
WHERE timestamp >= CURRENT_DATE - INTERVAL 7 DAY
GROUP BY codigowine
),
s15 AS (
SELECT ROUND(MIN(preconormal), 2) minp,
ROUND(AVG(preconormal), 2) meanp,
ROUND(MAX(preconormal), 2) maxp,
codigowine
FROM precos
WHERE timestamp >= CURRENT_DATE - INTERVAL 15 DAY
GROUP BY codigowine
),
s30 AS (
SELECT ROUND(MIN(preconormal), 2) minp,
ROUND(AVG(preconormal), 2) meanp,
ROUND(MAX(preconormal), 2) maxp,
codigowine
FROM precos
WHERE timestamp >= CURRENT_DATE - INTERVAL 30 DAY
GROUP BY codigowine
),
m7 AS (
WITH freq AS (
SELECT COUNT(*) freq,
ROUND(preconormal, 2) preconormal,
codigowine
FROM precos
WHERE timestamp >= CURRENT_DATE - INTERVAL 7 DAY
GROUP BY ROUND(preconormal, 2), codigowine
),
most AS (
SELECT MAX(freq) freq,
codigowine
FROM freq
GROUP BY codigowine
),
mode AS (
SELECT GROUP_CONCAT(preconormal ORDER BY preconormal DESC) modeps,
freq.codigowine
FROM freq
JOIN most ON freq.freq = most.freq AND freq.codigowine = most.codigowine
GROUP BY freq.codigowine
)
SELECT * FROM mode
)
SELECT v.codigowine, v.nomevinho, DATE(v.timestamp) AS data_adc,
s7.minp min_7_dias, s7.maxp max_7_dias, s7.meanp media_7_dias, m7.modeps moda_7_dias,
s15.minp min_15_dias, s15.maxp max_15_dias, s15.meanp media_15_dias,
s30.minp min_30_dias, s30.maxp max_30_dias, s30.meanp media_30_dias
FROM vinhos v
LEFT JOIN s7 ON v.codigowine = s7.codigowine
LEFT JOIN m7 ON v.codigowine = m7.codigowine
LEFT JOIN s15 ON v.codigowine = s15.codigowine
LEFT JOIN s30 ON v.codigowine = s30.codigowine
I'll leave it to you to do the modes for 15 and 30 days.
This is quite the query. You better hope the next guy to work on it doesn't curse your name. :-)
I have queries that I'm using to make a graph of earnings. But now people are able to earn from two different sources, so I want to separate this out into two lines on the same chart.
This one for standard earnings:
SELECT DATE_FORMAT(earning_created, '%c/%e/%Y') AS day, SUM(earning_amount) AS earning_standard
FROM earnings
WHERE earning_account_id = ? AND earning_referral_id = 0 AND (earning_created > DATE_SUB(now(), INTERVAL 90 DAY))
GROUP BY DATE(earning_created)
ORDER BY earning_created
And this one for referral earnings:
SELECT DATE_FORMAT(e.earning_created, '%c/%e/%Y') AS day, SUM(e.earning_amount) AS earning_referral
FROM earnings AS e
INNER JOIN referrals AS r
ON r.referral_id = e.earning_referral_id
WHERE e.earning_account_id = ? AND e.earning_referral_id > 0 AND (e.earning_created > DATE_SUB(now(), INTERVAL 90 DAY)) AND r.referral_type = 0
GROUP BY DATE(e.earning_created)
ORDER BY e.earning_created
How do I get it to run the queries together, so that it outputs two columns/series for the y-axis (earning_standard and earning_referral), with both aligned to the same day column/scale for the x-axis, substituting zero when there are no earnings for a specific series?
You'll need to use both of those queries as subqueries:
SELECT DISTINCT DATE_FORMAT(earnings.earning_created, '%c/%e/%Y') AS day,
COALESCE(es.earning_standard, 0) AS earning_standard,
COALESCE(er.earning_referral, 0) AS earning_referral
FROM earnings
LEFT JOIN (SELECT DATE_FORMAT(earning_created, '%c/%e/%Y') AS day,
SUM(earning_amount) AS earning_standard
FROM earnings
WHERE earning_account_id = ?
AND earning_referral_id = 0
AND (earning_created > DATE_SUB(now(), INTERVAL 90 DAY))
GROUP BY DATE(earning_created)) AS es
ON (DATE_FORMAT(earnings.earning_created, '%c/%e/%Y') = es.day)
LEFT JOIN (SELECT DATE_FORMAT(e.earning_created, '%c/%e/%Y') AS day,
SUM(e.earning_amount) AS earning_referral
FROM earnings AS e
INNER JOIN referrals AS r
ON r.referral_id = e.earning_referral_id
WHERE e.earning_account_id = ?
AND e.earning_referral_id > 0
AND (e.earning_created > DATE_SUB(now(), INTERVAL 90 DAY))
AND r.referral_type = 0
GROUP BY DATE(e.earning_created)) AS er
ON (DATE_FORMAT(earnings.earning_created, '%c/%e/%Y') = er.day)
WHERE earnings.earning_account_id = ?
ORDER BY day
I'm assuming the ? in earning_account_id = ? is a placeholder: the language you're using to run the query replaces it with the actual id before executing.
SELECT
COALESCE(t1.amount,0) AS link_earnings,
COALESCE(t2.amount,0) AS publisher_referral_earnings,
COALESCE(t3.amount,0) AS advertiser_referral_earnings,
t1.day AS day
FROM
(
SELECT DATE_FORMAT(earning_created, '%c/%e/%Y') AS day, SUM(earning_amount) AS amount
FROM earnings
WHERE earning_referral_id = 0
AND (earning_created > DATE_SUB(now(), INTERVAL 90 DAY))
AND earning_account_id = ?
GROUP BY DATE(earning_created)
) t1
LEFT JOIN
(
SELECT DATE_FORMAT(ep.earning_created, '%c/%e/%Y') AS day, (SUM(ep.earning_amount) * rp.referral_share) AS amount
FROM earnings AS ep
INNER JOIN referrals AS rp
ON ep.earning_referral_id = rp.referral_id
WHERE ep.earning_referral_id > 0
AND (ep.earning_created > DATE_SUB(now(), INTERVAL 90 DAY))
AND ep.earning_account_id = ?
AND rp.referral_type = 0
GROUP BY DATE(ep.earning_created)
) t2
ON t1.day = t2.day
LEFT JOIN
(
SELECT DATE_FORMAT(ea.earning_created, '%c/%e/%Y') AS day, (SUM(ea.earning_amount) * ra.referral_share) AS amount
FROM earnings AS ea
INNER JOIN referrals AS ra
ON ea.earning_referral_id = ra.referral_id
WHERE ea.earning_referral_id > 0
AND (ea.earning_created > DATE_SUB(now(), INTERVAL 90 DAY))
AND ea.earning_account_id = ?
AND ra.referral_type = 1
GROUP BY DATE(ea.earning_created)
) t3
ON t1.day = t3.day
ORDER BY day
Seems to run ok....
You can simply use an outer join to retain earnings even when there is no matching referral, and then conditionally sum depending on whether a referral exists or not:
SELECT DATE_FORMAT(e.earning_created, '%c/%e/%Y') AS day,
SUM(IF(r.referral_id IS NULL, e.earning_amount, 0)) earning_standard,
SUM(IF(r.referral_id IS NULL, 0, e.earning_amount)) earning_referral
FROM earnings e LEFT JOIN referrals r ON r.referral_id = e.earning_referral_id
WHERE e.earning_account_id = ?
AND e.earning_created > CURRENT_DATE - INTERVAL 90 DAY
AND (r.referral_id IS NULL OR r.referral_type = 0)
GROUP BY 1
ORDER BY 1
I've assumed here that earnings.earning_referral_id is never negative, though you can add an explicit test to filter such records if so desired.
I've also changed the filter on earnings.earning_created to base from CURRENT_DATE rather than NOW() to ensure that any earnings created earlier than the current time on the first day of the series are still included—this would typically be what one actually wants, but feel free to change back if not.