I've looked all over and haven't yet found an intelligent way to handle this, though I feel sure one is possible:
One table of historical data has quarterly information:
CREATE TABLE Quarterly (
unique_ID INT UNSIGNED NOT NULL,
date_posted DATE NOT NULL,
datasource TINYINT UNSIGNED NOT NULL,
data FLOAT NOT NULL,
PRIMARY KEY (unique_ID));
Another table of historical data (which is very large) contains daily information:
CREATE TABLE Daily (
unique_ID INT UNSIGNED NOT NULL,
date_posted DATE NOT NULL,
datasource TINYINT UNSIGNED NOT NULL,
data FLOAT NOT NULL,
qtr_ID INT UNSIGNED,
PRIMARY KEY (unique_ID));
The qtr_ID field is not part of the feed of daily data that populated the database - instead, I need to retroactively populate the qtr_ID field in the Daily table with the Quarterly.unique_ID row ID, using what would have been the most recent quarterly data on that Daily.date_posted for that data source.
For example, if the quarterly data is
101 2009-03-31 1 4.5
102 2009-06-30 1 4.4
103 2009-03-31 2 7.6
104 2009-06-30 2 7.7
105 2009-09-30 1 4.7
and the daily data is
1001 2009-07-14 1 3.5 ??
1002 2009-07-15 1 3.4 &&
1003 2009-07-14 2 2.3 ^^
then we would want the ?? qtr_ID field to be assigned '102' as the most recent quarter for that data source on that date, and && would also be '102', and ^^ would be '104'.
The challenges include that both tables (particularly the daily table) are actually very large, they can't be normalized to get rid of the repetitive dates or otherwise optimized, and for certain daily entries there is no preceding quarterly entry.
I have tried a variety of joins, using datediff (where the challenge is finding the minimum value of datediff greater than zero), and other attempts but nothing is working for me - usually my syntax is breaking somewhere. Any ideas welcome - I'll execute any basic ideas or concepts and report back.
Just subquery for the quarter id using something like:
(
SELECT unique_ID
FROM Quarterly
WHERE
datasource = ?
AND date_posted <= ?
ORDER BY
unique_ID DESC
LIMIT 1
)
Of course, this probably won't give you the best performance, and it assumes that rows are added to Quarterly in date order (otherwise order by date_posted DESC instead of unique_ID). However, it should solve your problem.
You would use this subquery on your INSERT or UPDATE statements as the value of your qtr_ID field for your Daily table.
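For example, a minimal sketch of such an UPDATE (assuming, per the note above, that quarterly rows are inserted in date order; switch <= to < if a quarter posted on the same day should not count):
UPDATE Daily d
SET d.qtr_ID = (
    SELECT q.unique_ID
    FROM Quarterly q
    WHERE q.datasource = d.datasource
      AND q.date_posted <= d.date_posted
    ORDER BY q.unique_ID DESC
    LIMIT 1
);
-- daily rows with no preceding quarterly row are simply set to NULL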
The following appears to work exactly as intended, but it is surely ugly (three calls to the same DATEDIFF!). Perhaps, by seeing a working query, someone can reduce or improve it further:
UPDATE Daily SET qtr_ID = (select unique_ID from Quarterly
WHERE Quarterly.datasource = Daily.datasource AND
DATEDIFF(Daily.date_posted, Quarterly.date_posted) =
(SELECT MIN(DATEDIFF(Daily.date_posted, Quarterly.date_posted)) from Quarterly
WHERE Quarterly.datasource = Daily.datasource AND
DATEDIFF(Daily.date_posted, Quarterly.date_posted) > 0));
After more work on this query, I ended up with enormous performance improvements over the original concept. The most important improvement was to create indices in both the Daily and Quarterly tables - in Daily I created indices on (datasource, date_posted) and (date_posted, datasource) USING BTREE and on (datasource) USING HASH, and in Quarterly I did the same thing. This is overkill, but it ensured the query engine had an index it could use. That alone reduced the query time to less than 1% of what it had been. (!!)
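A sketch of index DDL along those lines (the index names here are my own; note that InnoDB silently builds BTREE indexes even when USING HASH is requested):
ALTER TABLE Daily
    ADD INDEX idx_daily_ds_date (datasource, date_posted) USING BTREE,
    ADD INDEX idx_daily_date_ds (date_posted, datasource) USING BTREE,
    ADD INDEX idx_daily_ds (datasource) USING HASH;

ALTER TABLE Quarterly
    ADD INDEX idx_qtr_ds_date (datasource, date_posted) USING BTREE,
    ADD INDEX idx_qtr_date_ds (date_posted, datasource) USING BTREE,
    ADD INDEX idx_qtr_ds (datasource) USING HASH;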
Then, I learned that given my particular circumstances I could use MAX() instead of ORDER BY ... LIMIT, so I call MAX() to get the appropriate unique_ID. That reduced the query time by about 20%.
Finally, I learned that with the InnoDB storage engine I could limit each query to a segment of the Daily table, which allowed me to multi-thread the queries with a little elbow grease and scripting. The parallel processing worked well, and the query time dropped roughly linearly with each added thread.
So, the basic query that is performing literally 1000 times better than my own first attempt is:
UPDATE Daily
SET qtr_ID =
(
SELECT MAX(unique_ID)
FROM Quarterly
WHERE Daily.datasource = Quarterly.datasource AND
Daily.date_posted > Quarterly.date_posted
)
WHERE unique_ID > ScriptVarLowerBound AND
unique_ID <= ScriptVarHigherBound
;
Related
I'm trying to build a reporting table to track server traffic and popularity overall. Each SID is a unique game server hosting a particular game, and each UCID is a unique player key connecting to that server.
Say I have a table like so:
SID UCID AvgTime NumConnects
-----------------------------------------
1 AIE9348ietjg 300.55 5
1 Po328gieijge 500.66 7
2 AIE9348ietjg 234.55 3
3 Po328gieijge 1049.88 18
We can see that there are 2 unique players and 3 unique servers, with SID 1 having 2 players that have connected to it at some point in the past. AvgTime is the average amount of time those players spent on that server (in seconds), and NumConnects is the sample size behind that average (i.e. 300.55 is the average of 5 values).
Now I run a job in the background where I process a raw connection table and pull out player connections like so:
SID UCID ConnectTime DisconnectTime
-----------------------------------------
1 AIE9348ietjg 90.35 458.32
2 Po328gieijge 30.12 87.15
2 AIE9348ietjg 173.12 345.35
This table has no ID or other fluff to help condense my example. There may be multiple connect/disconnect records for multiple players in this table. What I want to do is add to my existing AvgTime for each SID these new values.
There is a formula I am trying to use, taken from this Math Stack Exchange answer: https://math.stackexchange.com/questions/1153794/adding-to-an-average-without-unknown-total-sum/1153800#1153800
Average = (Average * Size + NewValue) / (Size + 1)
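For example, applying it to the first row above: the SID 1 / AIE9348ietjg average of 300.55 over 5 connects, extended with the single new session of 458.32 - 90.35 = 367.97 seconds, becomes (300.55 * 5 + 367.97) / (5 + 1) ≈ 311.79 over 6 connects.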
How can I write an update query that updates each SID's row in the traffic table above, adding to the average using the formula for each pair of records? I tried something like the following, but it didn't work (it returned NULL):
UPDATE server_traffic st
LEFT JOIN connect_log l
ON st.SID = l.SID AND st.UCID = l.UCID
SET AvgTime = (AvgTime * NumConnects + SUM(l.DisconnectTime - l.ConnectTime) / NumConnects + COUNT(l.UCID)
I would prefer an answer in MySql, but I'll accept MS SQL as well.
EDIT
I understand that statistics and calculations are generally not to be stored in tables and that you can run reports that would crunch the numbers for you. My requirement is that users can go to a website and view the popularity of various servers. This needs to be done in a way that
A: running a complex query per user doesn't crash or slow down the system
B: the page returns the data within a few seconds at most
See this example here: https://bf4stats.com/pc/shinku555555
This is a web page for battlefield 4 stats - notice that the load is almost near instant for this player, and I get back a load of statistics without waiting for some complex report query to return the data. I'm assuming they store these calculations in preprocessed tables where the webpage just needs to do a simple select to return back the values. That's the same approach I want to take with my Database and Web Application design.
Sorry if this is off topic to the original question - but hopefully this adds additional context that helps people understand my needs.
Since you cannot run aggregate functions like SUM and COUNT at the individual row level of an UPDATE but only inside an aggregate query, consider joining to an aggregate subquery in the UPDATE...LEFT JOIN. Also, adjust the parentheses in SET to match the formula above.
Also, note that since you use a LEFT JOIN, rows with non-matching IDs will have NULL for the aggregate fields, and NULL in an arithmetic operation returns NULL. You can convert the NULLs to zero with IFNULL(), but watch out for the division in the formula.
UPDATE server_traffic s
LEFT JOIN
(SELECT SID, UCID, COUNT(UCID) As GrpCount,
SUM(DisconnectTime - ConnectTime) AS SumTimeDiff
FROM connect_log
GROUP BY SID, UCID) l
ON s.SID = l.SID AND s.UCID = l.UCID
SET s.AvgTime = (s.AvgTime * s.NumConnects + l.SumTimeDiff) / (s.NumConnects + l.GrpCount)
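If rows with no new connections should simply keep their current average, a hedged variation (untested) is to wrap the aggregates in IFNULL(); note that NumConnects must still be greater than zero, and should be incremented in a separate statement so the weights stay correct for the next run:
UPDATE server_traffic s
LEFT JOIN
   (SELECT SID, UCID, COUNT(UCID) AS GrpCount,
           SUM(DisconnectTime - ConnectTime) AS SumTimeDiff
    FROM connect_log
    GROUP BY SID, UCID) l
ON s.SID = l.SID AND s.UCID = l.UCID
SET s.AvgTime = (s.AvgTime * s.NumConnects + IFNULL(l.SumTimeDiff, 0))
                / (s.NumConnects + IFNULL(l.GrpCount, 0));
Alternatively, drop the LEFT and use a plain inner JOIN so that unmatched rows are not touched at all.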
Aside - reconsider saving calculations/statistics within tables, as they can always be computed by queries, even for specific time ranges. Ideally, database tables should store raw values.
When subtracting the previous row from the current row the query is too slow. Is there a more efficient way to do this?
I am trying to create a data filter which can distinguish events that occur sequentially from those that do not. I have a table of machine operational data 'source' which is ordered chronologically. Using a WHERE clause I filter out the data which is of less relevance to this particular analysis. The remaining data is inserted into a new table 'filtered'. Using the ID numbers inserted from 'source' I compare each row with its preceding row to find the difference in value: if the difference is 1 then the events have occurred in sequence, and if the difference is null then they have not. My problem is the length of time it takes to compare a row with the previous row. I have reduced my data volume to just 2.5% (275,000 rows) of what its full volume will be, and the query takes 3012 seconds according to the MySQL Workbench action output. I have experimented with structuring the query differently but ultimately have reached dead ends. So my question is: is there a more efficient way to compare a row with its previous row?
OK – here are some more details.
/*First I create the table for the filtered data */
drop table if exists filtered_dta;
create table filtered_dta
(
ID int (11) not null auto_increment,
IDx1 int (11),
primary key (ID)
);
/* Then I insert the filtered data */
insert into filtered_dta (IDx1)
select seq from source
WHERE range_value < -1.75
and range_value > -5 ;
/* Then I compare each row with its previous */
select t1.ID, t1.IDx1,(t1.IDx1-t2.IDx1)
as seq_value
from filtered_dta t1
left outer join filtered_dta t2
on t1.IDx1 = t2.IDx1+1
order by IDx1
;
Here are sample tables.
Table filtered_dta              Results
| ID | IDx1 |                   | ID | IDx1 | seq_value |
|  1 |    3 |                   |  1 |    3 | null      |
|  2 |    4 |                   |  2 |    4 | 1         |
|  3 |    7 |                   |  3 |    7 | null      |
|  4 |   12 |                   |  4 |   12 | null      |
|  5 |   13 |                   |  5 |   13 | 1         |
|  6 |   14 |                   |  6 |   14 | 1         |
A full data set from the source table is expected to be between 3 and 10 million rows. The database will create and use about 50 tables. This database is being used as a back end engine for simulation software which does not have the capacity to process this amount of data and give an appropriate analysis of the system which the data represents.
I have spent some time on the issue and have come across the following;
It may be that the find_seq table is created with MyISAM and requires converting to an InnoDB table. I tried setting the default engine to InnoDB but saw no noticeable difference.
The question 'MySQL query painfully slow on large data' was similar in describing a slow query, but its issue lay in having a function in a WHERE clause; from my action output I can see that my WHERE clause is not the slow part.
I would appreciate any input anyone may have on this. Also, I am not a proficient user of MySQL, so please include details where possible.
Kind regards.
You can use something like this template to identify sequential "islands" without a self-join:
SELECT #island := #island + IF(seqId <> #lastSeqId + 1, 1, 0) AS island
, orderingQ.[fieldsYouWant]
, #lastSeqId := seqId
FROM (
SELECT [fieldsYouWant], [sequentialIdentifier] AS seqId
FROM [theTable] AS t
, (SELECT #island := 0, #lastSeqId := [somethingItCannotBe]) AS init_dnr -- Initializes variables, do not reference
WHERE [filteringConditionsMet]
ORDER BY [orderingCriteria]
) AS orderingQ
;
I tried keeping it as generic as possible, but you'll note I had to revert to the assumption that seqId was numeric and expected to increment by one. Conditions in the island calculation can be much more complicated if needed (for cases such as where (A, 1), (A, 2), (B, 3) should be two islands based on the sequence not being defined by a single value).
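Applied to the filtered_dta table from the question, the template might be filled in like this (a sketch; it assumes IDx1 is always positive, so -1 is a value it cannot be):
SELECT @island := @island + IF(IDx1 <> @lastSeqId + 1, 1, 0) AS island
, orderingQ.ID, orderingQ.IDx1
, @lastSeqId := IDx1
FROM (
    SELECT ID, IDx1
    FROM filtered_dta AS t
    , (SELECT @island := 0, @lastSeqId := -1) AS init_dnr -- Initializes variables, do not reference
    ORDER BY IDx1
) AS orderingQ
;
On MySQL 8.0+ the same thing can be expressed with the LAG() window function instead of user variables.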
You can take this template further, to identify "island" boundaries and sizes, by simply making the above query a subquery of something like:
SELECT island, MIN(seqId), MAX(seqId), COUNT(seqId)
FROM ([above query]) AS islandQ
GROUP BY island
;
This might sound like a silly question but here it is; I am sure it has happened to everyone around here: you build a web app with a db structure per specifications (php/mysql), but then the specs change slightly and you need to change the db to reflect it. Here is a short example:
Order table
->order id
->user id
->closed
->timestamp
but because the orders are paid in a different currency than the one quoted in the db, I need to add an exchange rate field, which is only checked and known when the order is closed, not when the record is inserted. Thus I can either add the new field to the current table, leave it null/blank on insert, and update it when necessary; or I can create a new table with the following structure:
Order exchange rates
->exchange id
->order id
->exchange rate
Although I believe that the latter is better because it is a less intrusive change and won't affect the rest of the application's functionality, you could end up with an insane number of joined queries to get all the necessary information. On the other hand, the former approach could mess up some other queries in the db, but it is definitely more practical and also more logical in terms of the overall db structure. Also, I don't think it is good practice to insert null and update later, but that might be just my lonely opinion...
Thus I would like to ask which approach you think is preferable.
I'm thinking of another alternative. Setup an exchange rate table like:
create table exchange_rate(
cur_code_from varchar(3) not null
,cur_code_to varchar(3) not null
,valid_from date not null
,valid_to date not null
,rate decimal(20,6) not null
);
alter table exchange_rate
add constraint exchange_rate_pk
primary key(cur_code_from, cur_code_to, valid_from);
The table should hold data that looks something like:
cur_code_from cur_code_to valid_from  valid_to    rate
============= =========== ==========  ==========  ========
EUR           EUR         2014-01-01  9999-12-31  1
EUR           USD         2014-01-01  9999-12-31  1.311702
EUR           SEK         2014-01-01  2014-03-30  8.808322
EUR           SEK         2014-04-01  9999-12-31  8.658084
EUR           GBP         2014-01-01  9999-12-31  0.842865
EUR           PLN         2014-01-01  9999-12-31  4.211555
Note the special case when you convert from and to the same currency.
From a normalization perspective, you don't need valid_to since it can be computed from the next valid_from, but from a practical point of view, it's easier to work with a valid-to-date than using a sub-query every time.
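To illustrate, without valid_to each rate's end would have to be derived with something like this correlated sub-query (a sketch; the derived value is the next row's valid_from, i.e. an exclusive upper bound):
select x.cur_code_from, x.cur_code_to, x.valid_from, x.rate,
       (select min(x2.valid_from)
          from exchange_rate x2
         where x2.cur_code_from = x.cur_code_from
           and x2.cur_code_to   = x.cur_code_to
           and x2.valid_from    > x.valid_from) as derived_valid_to
  from exchange_rate x;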
Then, to convert into the customer's currency you would join with this table:
select o.order_value * x.rate as value_in_customer_currency
from orders o
join exchange_rate x on(
x.cur_code_from = 'EUR' -- Your default currency here
and x.cur_code_to = 'SEK' -- The customers currency here
and o.order_close_date between x.valid_from and x.valid_to
)
where o.order_id = 1234;
Here I have used the rates valid as of the order_close_date. So if you have two orders, one with a close date of 2014-02-01, then it would pick up a different rate than an order with a close date of 2014-04-05.
I think you just need to add an exchange_rate_id column to the order table and create a lookup table Exchange_Rates with columns ex_rate_id, description, deleted, created_date.
So when an order closes you just need to update the exchange_rate_id column in the order table with the appropriate id, and later on you can join with the lookup table to pull records.
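A rough sketch of that design (the rate column and all types are my own assumptions; the answer only lists ex_rate_id, description, deleted, created_date; the 42/1234 ids are placeholders):
create table Exchange_Rates (
ex_rate_id int unsigned not null auto_increment primary key,
description varchar(50),
rate decimal(20,6), -- assumed: the rate value itself
deleted tinyint(1) not null default 0,
created_date datetime not null
);

alter table orders add column exchange_rate_id int unsigned null;

-- when an order closes:
update orders set exchange_rate_id = 42 where order_id = 1234;

-- later, join to the lookup table to pull records:
select o.*, x.rate, x.description
from orders o
join Exchange_Rates x on x.ex_rate_id = o.exchange_rate_id;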
Keep in mind that
one order has only one currency upon closing.
one exchange rate can apply to one or many orders.
It is a one-to-many relationship, so I don't think you have to make a separate table for that. Doing so would arguably be over-normalization.
I've currently got a table as follows,
Column Type
time datetime
ticket int(20)
agentid int(20)
ExitStatus varchar(50)
Queue varchar(50)
I want to write a query which will break this down by week, providing a column with a count for each ExitStatus. So far I have this,
SELECT ExitStatus,COUNT(ExitStatus) AS ExitStatusCount, DAY(time) AS TimePeriod
FROM `table`
GROUP BY TimePeriod, ExitStatus
Output:
ExitStatus ExitStatusCount TimePeriod
NoAgentID 1 4
Success 3 4
NoAgentID 1 5
Success 5 5
I want to change this so it returns results in this format:
week | COUNT(NoAgentID) | COUNT(Success) |
Ideally, I'd like the columns to be dynamic as other ExitStatus values may be possible.
This information will be formatted and presented to end user in a table on a page. Can this be done in SQL or should I reformat it in PHP?
There is no "general" solution to your problem (called cross tabulation) that can be achieved with a single query. There are four possible solutions:
Hardcode all possible ExitStatus'es in your query and keep it updated as you see the need for more and more of them. For example:
SELECT
Day(Time) AS TimePeriod,
SUM(IF(ExitStatus = 'NoAgentID', 1, 0)) AS NoAgentID,
SUM(IF(ExitStatus = 'Success', 1, 0)) AS Success
-- #TODO: Add others here when/if needed
FROM `table`
WHERE ...
GROUP BY TimePeriod
Do a first query to get all possible ExitStatus'es and then create your final query from your high-level programming language based on those results (a sketch of a pure-MySQL variant of this is shown after the list).
Use a special cross-tabulation module in your high-level programming language. For Perl, there is the SQLCrossTab module, but I couldn't find one for PHP.
Add another layer to your application by using OLAP (multi-dimensional views of your data), such as Pentaho, and then query that layer instead of your original data.
There is a lot more to read about these solutions and about cross tabulation in general.
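For completeness, the second approach (query the distinct statuses first, then build the final query) can also be done entirely inside MySQL by assembling the column list with GROUP_CONCAT and running it as a prepared statement. A sketch, using the table and column names from the question (raise group_concat_max_len if there are many distinct statuses):
SELECT GROUP_CONCAT(DISTINCT
         CONCAT('SUM(ExitStatus = ''', ExitStatus, ''') AS `', ExitStatus, '`'))
  INTO @cols
FROM `table`;

SET @sql = CONCAT('SELECT WEEK(`time`) AS week, ', @cols,
                  ' FROM `table` GROUP BY week');

PREPARE stmt FROM @sql;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;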
This is one way; you can use SUM() to count the number of rows for which a particular condition is true. At the end you just group by the time period as usual.
SELECT DAY(time) AS TimePeriod,
SUM('NoAgentID' = exitStatus) AS NoAgentID,
SUM('Success' = exitStatus) AS Success, ...
FROM `table`
GROUP BY TimePeriod
Output:
TimePeriod  NoAgentID  Success
4           1          3
5           1          5
The columns here are not dynamic though, which means you have to add conditions as you go along.
SELECT week(time) AS week,
SUM(ExitStatus = 'NoAgentID') AS 'COUNT(NoAgentID)',
SUM(ExitStatus = 'Success') AS 'COUNT(Success)'
FROM `table`
GROUP BY week
I'm making some guesses about how the ExitStatus column works. Also, there are many ways of interpreting "week", such as week of the year, of the month, or of the quarter... You will need to put the appropriate function there.
In finance, a stock's beta is the covariance between the stock's daily returns and an index's daily returns, divided by the variance of the index's daily returns. I am trying to calculate beta for a set of stocks and a set of indices.
Here's my query for a 50 business day rolling window and I'd like you to help me optimize it for speed:
INSERT INTO betas (permno, index_id, DATE, beta)
(SELECT
permno, index_id, s.date, IF(
s.`seq` >= 50,
(SELECT
(AVG(s2.log_return*i2.log_return)-AVG(s2.log_return)*AVG(i2.log_return))/VAR_POP(i2.log_return) AS beta
FROM
stock_series s2
INNER JOIN `index_series` i2 ON i2.date=s2.date
WHERE i2.index_id=i.index_id AND s2.permno = s.permno
AND s2.`seq` BETWEEN s.`seq` - 49 AND s.`seq`
GROUP BY index_id,permno), NULL)
AS beta
FROM
stock_series s
INNER JOIN `index_series` i ON i.index_id IN ('SP500') AND i.date=s.date
)
ON DUPLICATE KEY
UPDATE beta= VALUES (beta)
Both main tables are already ordered by entity and date in ascending order, and they already include log daily returns as well as a "seq" column. Seq sequentially enumerates the daily rows per company (or index), i.e. seq starts over at 1 for every new stock or index in the table and counts up to the total number of rows for that entity. I created it to allow for the rolling window.
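A seq column like this can be generated with a user-variable query along these lines (a sketch only, not necessarily the statement actually used):
SELECT s.permno, s.date,
       @seq  := IF(s.permno <=> @prev, @seq + 1, 1) AS seq,
       @prev := s.permno AS permno_seen
FROM stock_series s
   , (SELECT @seq := 0, @prev := NULL) AS init
ORDER BY s.permno, s.date;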
As of now, with 500 firms and 1 index, the query takes like forever to complete.
Let me know any optimization that comes to your mind, like views, stored procs, temp tables, and if you find any inconsistencies, of course.
EDIT: Indexes:
stock_series has PRIMARY KEY (permno,date) and UNIQUE KEY (permno,seq),
index_series has PRIMARY KEY (index_id,date)
EXPLAIN EXTENDED results for ONE company (by including a WHERE s.permno=... restriction at the end):
EXPLAIN EXTENDED results for ALL ~500 companies:
Here is what the pros do: do NOT calculate that in the database. Pull the data out, calculate, and reinsert. Where I work now they have a huge grid doing that stuff in the end-of-day run. Yes, grid - as in a significant number of machines. We are talking about producing gigabytes of CSV files that then get reloaded into the database. Beta, Gamma, PnL on trades with 120,000 different elements. Databases are NOT optimized for this.