I have a biggish table of events. (5.3 million rows at the moment). I need to traverse this table mostly from the beginning to the end in a linear fashion. Mostly no random seeks. The data currently includes about 5 days of these events.
Due to the size of the table I need to paginate the results, and the internet tells me that "seek pagination" is the best method.
However, this method works great and fast for traversing the first 3 days; after that, MySQL really begins to slow down. I've figured out it must be something IO-bound, as my CPU usage actually falls as the slowdown starts.
I believe this has something to do with the 2-column sort I do and the use of filesort; maybe MySQL needs to read all the rows to sort my results, or something. Indexing correctly might be a proper fix, but so far I've been unable to find an index that solves my problem.
The complicating part of this database is the fact that the ids and timestamps are NOT perfectly in order. The software requires the data to be ordered by timestamp. However, when adding data to this database, some events are added about 1 minute after they actually happened, so the auto-incremented ids are not in chronological order.
As of now, the slowdown is so bad that my 5-day traversal never finishes. It just gets slower and slower...
I've tried indexing the table in multiple ways, but MySQL does not seem to want to use those indexes, and EXPLAIN keeps showing "filesort". An index is used for the WHERE clause, though.
The workaround I'm currently using is to first do a full table traversal and load all the row ids and timestamps into memory. I sort the rows on the Python side of the software and then load the full data in smaller chunks from MySQL as I traverse (by ids only). This works fine, but is quite inefficient because it amounts to 2 traversals of the same data.
The schema of the table:
CREATE TABLE `events` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`server` varchar(45) DEFAULT NULL,
`software` varchar(45) DEFAULT NULL,
`timestamp` bigint(20) DEFAULT NULL,
`data` text,
`event_type` int(11) DEFAULT NULL,
PRIMARY KEY (`id`),
KEY `index3` (`timestamp`,`server`,`software`,`id`),
KEY `index_ts` (`timestamp`)
) ENGINE=InnoDB AUTO_INCREMENT=7410472 DEFAULT CHARSET=latin1;
The query (one possible line):
SELECT software,
server,
timestamp,
id,
event_type,
data
FROM events
WHERE ( server = 'a58b'
AND ( software IS NULL
OR software IN ( 'ASD', 'WASD' ) ) )
AND ( timestamp, id ) > ( 100, 100 )
AND timestamp <= 200
ORDER BY timestamp ASC,
id ASC
LIMIT 100;
The query is based on https://blog.jooq.org/2013/10/26/faster-sql-paging-with-jooq-using-the-seek-method/ (and some other postings with the same idea). I believe it is called "seek pagination with a seek predicate". The basic gist is that I have a starting timestamp and an ending timestamp, and I need to get all the events with the specified software on the servers I've specified OR only the server-wide events (software = NULL). The odd-looking parentheses are due to Python constructing the queries based on the parameters it is given; I left them visible on the small chance they might have some effect.
I'm expecting the traversal to finish before the heat death of the universe.
First change
AND ( timestamp, id ) > ( 100, 100 )
to
AND (timestamp > 100 OR timestamp = 100 AND id > 100)
This optimisation is suggested in the official documentation: Row Constructor Expression Optimization
Now the engine will be able to use the index on (timestamp). Depending on the cardinality of the columns server and software, that may already be fast enough.
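A quick way to sanity-check the rewrite is to drive it in a loop. This sketch uses Python with an in-memory SQLite table standing in for MySQL (schema reduced to just the two seek columns, data invented) and confirms the decomposed predicate visits every row exactly once, in (timestamp, id) order:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, timestamp INTEGER)")
# Timestamps deliberately out of id order, mimicking late-arriving events.
rows = [(1, 30), (2, 10), (3, 10), (4, 20), (5, 30), (6, 20)]
con.executemany("INSERT INTO events VALUES (?, ?)", rows)

def seek_pages(page_size=2):
    """Traverse the whole table using the decomposed seek predicate."""
    last_ts, last_id, out = -1, -1, []
    while True:
        page = con.execute(
            """SELECT id, timestamp FROM events
               WHERE timestamp > ? OR (timestamp = ? AND id > ?)
               ORDER BY timestamp, id LIMIT ?""",
            (last_ts, last_ts, last_id, page_size)).fetchall()
        if not page:
            return out
        out.extend(page)
        last_id, last_ts = page[-1]  # remember where we left off

result = seek_pages()
# Every row visited exactly once, in (timestamp, id) order.
assert result == sorted(rows, key=lambda r: (r[1], r[0]))
```

The same loop shape works against MySQL; only the connection and placeholder style differ.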
An index on (server, timestamp, id) should improve the performance further.
If it is still not fast enough, I would suggest a UNION optimization for
AND (software IS NULL OR software IN ('ASD', 'WASD'))
That would be:
(
SELECT software, server, timestamp, id, event_type, data
FROM events
WHERE server = 'a58b'
AND software IS NULL
AND (timestamp > 100 OR timestamp = 100 AND id > 100)
AND timestamp <= 200
ORDER BY timestamp ASC, id ASC
LIMIT 100
) UNION ALL (
SELECT software, server, timestamp, id, event_type, data
FROM events
WHERE server = 'a58b'
AND software = 'ASD'
AND (timestamp > 100 OR timestamp = 100 AND id > 100)
AND timestamp <= 200
ORDER BY timestamp ASC, id ASC
LIMIT 100
) UNION ALL (
SELECT software, server, timestamp, id, event_type, data
FROM events
WHERE server = 'a58b'
AND software = 'WASD'
AND (timestamp > 100 OR timestamp = 100 AND id > 100)
AND timestamp <= 200
ORDER BY timestamp ASC, id ASC
LIMIT 100
)
ORDER BY timestamp ASC, id ASC
LIMIT 100
You will need to create an index on (server, software, timestamp, id) for this query.
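A sketch of why the three-branch form is safe (Python with an in-memory SQLite stand-in; table contents invented): each branch fetches its own top 100, and the outer ORDER BY ... LIMIT merges them into exactly the same page as the single OR query:

```python
import sqlite3, random

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE events (id INTEGER PRIMARY KEY, server TEXT,
                                    software TEXT, timestamp INTEGER)""")
random.seed(1)
con.executemany(
    "INSERT INTO events (server, software, timestamp) VALUES (?, ?, ?)",
    [("a58b", random.choice([None, "ASD", "WASD", "OTHER"]),
      random.randrange(100, 200)) for _ in range(500)])

single = con.execute("""
    SELECT id, timestamp FROM events
    WHERE server = 'a58b'
      AND (software IS NULL OR software IN ('ASD', 'WASD'))
      AND (timestamp > 100 OR (timestamp = 100 AND id > 100))
      AND timestamp <= 200
    ORDER BY timestamp, id LIMIT 100""").fetchall()

# One branch per software condition, each with its own ORDER BY/LIMIT.
branch = """SELECT id, timestamp FROM events
            WHERE server = 'a58b' AND {cond}
              AND (timestamp > 100 OR (timestamp = 100 AND id > 100))
              AND timestamp <= 200
            ORDER BY timestamp, id LIMIT 100"""
union = con.execute(" UNION ALL ".join(
    "SELECT * FROM (" + branch.format(cond=c) + ")"
    for c in ("software IS NULL", "software = 'ASD'", "software = 'WASD'"))
    + " ORDER BY timestamp, id LIMIT 100").fetchall()

assert union == single
```

Each branch supplies a superset of its contribution to the global top 100, so the merged page cannot miss a row.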
There are multiple complications going on.
The quick fix is
INDEX(server, timestamp, id)  -- in this order
together with
WHERE server = 'a58b'
  AND timestamp BETWEEN 100 AND 200
  AND ( software IS NULL
        OR software IN ( 'ASD', 'WASD' ) )
  AND ( timestamp, id ) > ( 100, 100 )
ORDER BY timestamp ASC,
         id ASC
LIMIT 100;
Note that server needs to be first in the index, not after the thing you are doing a range on (timestamp). Also, I broke out timestamp BETWEEN ... to make it clear to the optimizer that the next column of the ORDER BY might make use of the index.
You said "pagination", so I assume you have an OFFSET, too? Add it back in so we can discuss the implications. My blog on "remembering where you left off" instead of using OFFSET may (or may not) be practical.
Related
I'm having trouble understanding my options for how to optimize this specific query. Looking online, I find various resources, but all for queries that don't match my particular one. From what I could gather, it's very hard to optimize a query when you have an order by combined with a limit.
My use case is a paginated datatable that displays the latest records first.
The query in question is the following (to fetch 10 latest records):
select
`xyz`.*
from
xyz
where
`xyz`.`fk_campaign_id` = 95870
and `xyz`.`voided` = 0
order by
`registration_id` desc
limit 10 offset 0
& table DDL:
CREATE TABLE `xyz` (
`registration_id` int NOT NULL AUTO_INCREMENT,
`fk_campaign_id` int DEFAULT NULL,
`fk_customer_id` int DEFAULT NULL,
... other fields ...
`voided` tinyint unsigned NOT NULL DEFAULT '0',
PRIMARY KEY (`registration_id`),
.... ~12 other indexes ...
KEY `activityOverview` (`fk_campaign_id`,`voided`,`registration_id` DESC)
) ENGINE=InnoDB AUTO_INCREMENT=280614594 DEFAULT CHARSET=utf8 COLLATE=utf8_danish_ci;
The explain on the query mentioned gives me the following:
"id","select_type","table","partitions","type","possible_keys","key","key_len","ref","rows","filtered","Extra"
1,SIMPLE,db_campaign_registration,,index,"getTop5,winners,findByPage,foreignKeyExistingCheck,limitReachedIp,byCampaign,emailExistingCheck,getAll,getAllDated,activityOverview",PRIMARY,"4",,1626,0.65,Using where; Backward index scan
As you can see, it says it only hits 1,626 rows; but when I execute it, it takes 200+ seconds to run.
I'm doing this to fetch data for a datatable that is to display the latest 10 records. I also have pagination that allows one to navigate pages (only able to go to next page, not last or make any big jumps).
To further help with getting the full picture I've put together a dbfiddle: https://dbfiddle.uk/Jc_K68rj - this fiddle does not reproduce the same results as my table, but I suspect that is because of the data size I'm dealing with.
The table in question has 120GB of data and 39,000,000 active records. I already have an index in place that should cover the query and allow it to fetch the data fast. Am I completely missing something here?
Another solution goes something like this:
SELECT b.*
FROM ( SELECT registration_id
FROM xyz
where `xyz`.`fk_campaign_id` = 95870
and `xyz`.`voided` = 0
order by `registration_id` desc
limit 10 offset 0 ) AS a
JOIN xyz AS b USING (registration_id)
order by `registration_id` desc;
Explanation:
The derived table (subquery) will use the 'best' query without any extra prompting -- since it is "covering".
That will deliver 10 ids
Then 10 JOINs to the table to get xyz.*
A derived table is unordered, so the ORDER BY does need repeating.
That's tricking the Optimizer into doing what it should have done anyway.
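The deferred-join shape can be sanity-checked end to end (Python with an in-memory SQLite stand-in, toy data; column names follow the question, plus a made-up payload column for the wide part):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE xyz (registration_id INTEGER PRIMARY KEY,
               fk_campaign_id INTEGER, voided INTEGER, payload TEXT)""")
con.executemany("INSERT INTO xyz VALUES (?, ?, ?, ?)",
                [(i, i % 7, i % 2, "x" * 50) for i in range(1, 1001)])
con.execute("""CREATE INDEX activityOverview
               ON xyz (fk_campaign_id, voided, registration_id DESC)""")

# Derived table finds the 10 ids from the covering index, then joins back.
deferred = con.execute("""
    SELECT b.* FROM (SELECT registration_id FROM xyz
                     WHERE fk_campaign_id = 3 AND voided = 0
                     ORDER BY registration_id DESC LIMIT 10) AS a
    JOIN xyz AS b USING (registration_id)
    ORDER BY registration_id DESC""").fetchall()

direct = con.execute("""
    SELECT * FROM xyz WHERE fk_campaign_id = 3 AND voided = 0
    ORDER BY registration_id DESC LIMIT 10""").fetchall()

assert deferred == direct and len(direct) == 10
```

The subquery touches only the 3-column index; only 10 wide rows are ever fetched from the base table.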
(Again, I encourage getting rid of any indexes that are prefixes of the 3-column, optimal, index discussed.)
KEY `activityOverview` (`fk_campaign_id`,`voided`,`registration_id` DESC)
is optimal. (Nearly as good is the same index, but without the DESC).
Let's see the other indexes. I strongly suspect that there is at least one index that is a prefix of that index. Remove it/them. The Optimizer sometimes gets confused and picks the "smaller" index instead of the "better" index.
Here's a technique for seeing whether it manages to read only 10 rows instead of most of the table: http://mysql.rjweb.org/doc.php/index_cookbook_mysql#handler_counts
I'm having some issues with performance for a MySql query for a chat application I'm in the process of building.
I'm trying to grab the most recent messages from a conversation. I'm testing with a table with approx 3 million rows in it (an export from an older version of the application). When loading from some conversations, it's quick. When loading from others, the query takes significantly longer.
Here's details on the table setup, it's an InnoDB table:
Column Type Comment
id int(10) unsigned Auto Increment
from int(10) unsigned NULL
to int(10) unsigned NULL
date int(10) unsigned NULL
message text NULL
read tinyint(1) NULL [0]
And here are the indexes I have:
PRIMARY id
INDEX from
INDEX to
INDEX date
This is an example of the current query that I'm running:
SELECT *
FROM `chat`
WHERE
(`from` =2 and `to` = 342)
OR
(`to` = 2 and `from` = 342)
ORDER BY `id` DESC
LIMIT 10
Now, when I run this query with this user combination (which only has a total of 325 rows in the database), it takes 1.5+ seconds.
However, if I use a different user combination which has a total of 12,000 rows in the database, like this:
SELECT *
FROM `chat`
WHERE
(`from` =2 and `to` = 10153)
OR
(`to` = 2 and `from` = 10153)
ORDER BY `id` DESC
LIMIT 10
Then the query runs in approximately 35-40 ms. Quite a big difference, and the opposite of what I would expect.
I'm sure I'm missing something here and would appreciate any help pointing me in the right direction for optimizing all of this.
It's not about how many records the user has. You have created one table for all chats, which is an issue: when you fetch the first 10 records, users who have inserted entries recently will be served faster.
Another thing you can try: rather than using OR, use a UNION, which can give a little advantage.
Try to use this:
(
  SELECT *
  FROM `chat`
  WHERE `from` = 2 AND `to` = 342
  ORDER BY `id` DESC
  LIMIT 10
) UNION ALL (
  SELECT *
  FROM `chat`
  WHERE `to` = 2 AND `from` = 342
  ORDER BY `id` DESC
  LIMIT 10
)
ORDER BY `id` DESC
LIMIT 10
Note the UNION ALL (the two branches cannot overlap, so deduplication is wasted work) and the per-branch ORDER BY/LIMIT, which lets each branch stop after 10 index entries.
The time the query takes in your case will also depend on how long ago the user last messaged.
To address that you would have to change your model and not keep all messages in a single table.
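A runnable sketch of the UNION idea (Python with an in-memory SQLite stand-in; `from`/`to` renamed to `from_id`/`to_id` here purely to avoid quoting reserved words, data invented): each branch has its own ORDER BY/LIMIT, and the merged result matches the OR query:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE chat (id INTEGER PRIMARY KEY,
               from_id INTEGER, to_id INTEGER, message TEXT)""")
con.executemany("INSERT INTO chat (from_id, to_id, message) VALUES (?, ?, ?)",
                [(i % 5, (i + 1) % 5, "m") for i in range(200)])
con.execute("CREATE INDEX by_pair ON chat (from_id, to_id, id)")

or_query = con.execute("""
    SELECT id FROM chat
    WHERE (from_id = 2 AND to_id = 3) OR (from_id = 3 AND to_id = 2)
    ORDER BY id DESC LIMIT 10""").fetchall()

# Each branch can walk its (from_id, to_id, id) index range backwards
# and stop after 10 entries.
union_query = con.execute("""
    SELECT id FROM (SELECT id FROM chat WHERE from_id = 2 AND to_id = 3
                    ORDER BY id DESC LIMIT 10)
    UNION ALL
    SELECT id FROM (SELECT id FROM chat WHERE from_id = 3 AND to_id = 2
                    ORDER BY id DESC LIMIT 10)
    ORDER BY id DESC LIMIT 10""").fetchall()

assert union_query == or_query
```

With composite indexes like (`from`, `to`, `id`) and (`to`, `from`, `id`), each branch becomes a tight index range scan in MySQL as well.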
I have a large table with about 100 million records, with DATE-typed fields start_date and end_date. I need to check the number of overlaps with some date range, say between 2013-08-20 and 2013-08-30, so I use:
SELECT COUNT(*) FROM myTable WHERE end_date >= '2013-08-20'
AND start_date <= '2013-08-30'
The date columns are indexed.
The important point is that the date ranges I am searching for overlaps with are always in the future, while the main part of the records in the table are in the past (say about 97-99 million).
So, will this query be faster if I add a column is_future (TINYINT), so that by checking only that condition, like this:
SELECT COUNT(*) FROM myTable WHERE is_future = 1
AND end_date >= '2013-08-20' AND start_date <= '2013-08-30'
it will exclude the other 97 million or so records and check the date condition on only the remaining 1-3 million records?
I use MySQL
Thanks
EDIT
The engine is InnoDB, but would it matter considerably if it were, say, MyISAM?
here is the create table
CREATE TABLE `orders` (
`id` bigint(20) NOT NULL AUTO_INCREMENT,
`title`
`start_date` date DEFAULT NULL,
`end_date` date DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=24 DEFAULT CHARSET=utf8 COLLATE=utf8_bin;
EDIT 2 (after @Robert Co's answer)
Partitioning looks like a good idea for this case, but it does not allow me to partition on the is_future field unless that field is part of the primary key, and I cannot remove my main primary key, id. And if I did make is_future the primary key, would partitioning still add anything? Wouldn't a search on a primary-key is_future column already be fast?
EDIT 3
The actual query where I need this selects restaurants that have some free tables for the date range:
SELECT r.id, r.name, r.table_count
FROM restaurants r
LEFT JOIN orders o
ON r.id = o.restaurant_id
WHERE o.id IS NULL
OR (r.table_count > (SELECT COUNT(*)
FROM orders o2
WHERE o2.restaurant_id = r.id AND
end_date >= '2013-08-20' AND start_date <= '2013-08-30'
AND o2.status = 1
)
)
SOLUTION
After a lot more research and testing, the fastest way to count the rows in my case was to just add one more condition: that start_date is after the current date (because the searched date ranges are always in the future):
SELECT COUNT(*) FROM myTable WHERE end_date >= '2013-09-01'
AND start_date >= '2013-08-20' AND start_date <= '2013-09-30'
It is also necessary to have one composite index on the start_date and end_date fields (thank you @symcbean).
As a result, the execution time on a table with 10m rows dropped from 7 seconds to 0.050 seconds.
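The effect can be checked on a toy dataset (Python with an in-memory SQLite stand-in; dates and counts invented). Note the caveat baked into this solution: the extra bound silently drops any row that starts before the lower bound but still overlaps the window, which is acceptable only because such rows cannot exist in this workload:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE orders (id INTEGER PRIMARY KEY,
               start_date TEXT, end_date TEXT)""")
# Mostly past rows, a handful of future ones (the searched window).
past = [("2012-%02d-01" % m, "2012-%02d-05" % m) for m in range(1, 13)] * 50
future = [("2013-08-2%d" % d, "2013-09-0%d" % d) for d in range(1, 8)]
con.executemany("INSERT INTO orders (start_date, end_date) VALUES (?, ?)",
                past + future)
con.execute("CREATE INDEX by_start ON orders (start_date, end_date)")

# Original open-ended overlap predicate.
n1 = con.execute("""SELECT COUNT(*) FROM orders
                    WHERE end_date >= '2013-08-20'
                      AND start_date <= '2013-08-30'""").fetchone()[0]
# Same count, but start_date is now a closed range the index can seek into.
n2 = con.execute("""SELECT COUNT(*) FROM orders
                    WHERE end_date >= '2013-08-20'
                      AND start_date >= '2013-08-20'
                      AND start_date <= '2013-08-30'""").fetchone()[0]
assert n1 == n2
```

The speedup comes from turning an open-ended `start_date <=` scan into a narrow range on the (start_date, end_date) index.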
SOLUTION 2 (@Robert Co)
Partitioning worked in this case as well! Perhaps it is a better solution than indexing, or the two can be applied together.
Thanks
This is a perfect use case for table partitioning. If the Oracle INTERVAL feature makes it to MySQL, then it will just add to the awesomeness.
The date columns are indexed
What type of index? A hash-based index is no use for range queries. If it's not a BTREE index then change it now. And you've not shown us how they are indexed. Are both columns in the same index? Is there other stuff in there too? What order (end_date must appear as the first column)?
There are implicit type conversions in the query - these should be handled automatically by the optimizer, but it's worth checking:
SELECT COUNT(*) FROM myTable WHERE end_date >= 20130820000000
AND start_date <= 20130830235959
if I add a column is_future - TINYINT
First, in order to be of any use, this would require that the future dates be a small proportion of the total data stored in the table (less than 10%). And that's just to make it more efficient than a full table scan.
Secondly, it's going to require very frequent updates to the index to maintain it, which, in addition to the overhead of the initial population, is likely to lead to fragmentation of the index and degraded performance (depending on how the index is constructed).
Thirdly, if this still has to process 3 million rows of data (and specifically, via an index lookup) then it's going to be very slow even with the data pegged in memory.
Further, the optimizer is never likely to use this index without being forced to (due to the low cardinality).
I have done a simple test, just created an index on the tinyint column. The structures may not be the same, but with an index it seems to work.
http://www.sqlfiddle.com/#!2/514ab/1/0
and for count
http://www.sqlfiddle.com/#!2/514ab/2/0
View execution plan there to see that the select just scans one row which means it would process only the lesser number of records in your case.
So the simple answer is yes, with an index it would work.
I have a database of measurements that indicate a sensor, a reading, and the timestamp the reading was taken. The measurements are only recorded when there's a change. I want to generate a result set that shows the range each sensor is reading a particular measurement.
The timestamps are in milliseconds but I'm outputting the result in seconds.
Here's the table:
CREATE TABLE `raw_metric` (
`row_id` BIGINT NOT NULL AUTO_INCREMENT,
`sensor_id` BINARY(6) NOT NULL,
`timestamp` BIGINT NOT NULL,
`angle` FLOAT NOT NULL,
PRIMARY KEY (`row_id`)
)
Right now I'm getting the results I want using a subquery, but it's fairly slow when there's a lot of datapoints:
SELECT row_id,
HEX(sensor_id),
angle,
(
COALESCE((
SELECT MIN(`timestamp`)
FROM raw_metric AS rm2
WHERE rm2.`timestamp` > rm1.`timestamp`
AND rm2.sensor_id = rm1.sensor_id
), UNIX_TIMESTAMP() * 1000) - `timestamp`
) / 1000 AS duration
FROM raw_metric AS rm1
Essentially, to get the range, I need to get the very next reading (or use the current time if there isn't another reading). The subquery finds the minimum timestamp that is later than the current one but is from the same sensor.
This query isn't going to occur very often so I'd prefer to not have to add an index on the timestamp column and slow down inserts. I was hoping someone might have a suggestion as to an alternate way of doing this.
UPDATE:
The row_ids should be incremented along with timestamps, but that can't be guaranteed due to network latency issues. So, it's possible that an entry with a lower row_id occurs AFTER a later row_id, though it is unlikely.
This is perhaps more appropriate as a comment than as a solution, but it is too long for a comment.
You are trying to implement the lead() function in MySQL, and MySQL does not, unfortunately, have window functions. You could switch to Oracle, DB2, Postgres, SQL Server 2012 and use the built-in (and optimized) functionality there. Ok, that may not be realistic.
So, given your data structure you need to do either a correlated subquery or a non-equijoin (actually a partial equi-join because there is match on sensor_id). These are going to be expensive operations, unless you add an index. Unless you are adding measurements tens of times per second, the additional overhead on the index should not be a big deal.
You could also change your data structure. If you had a "sensor counter" that was a sequential number enumerating the readings, then you could use it for an equijoin (although for good performance you might still want an index). Adding this to your table would require a trigger -- and that is likely to perform even worse on insert than an index would.
If you only have a handful of sensors, you could create a separate table for each one. Oh, I can feel the groans at this suggestion. But, if you did, then an auto-incremented id would perform the same role. To be honest, I would only do this if I could count the number of sensors on each hand.
In the end, I might suggest that you take the hit during insertion and have "effective" and "end' times on each record (as well as an index on sensor id and either timestamp or id). With these additional columns, you will probably find more uses for the table.
If you are doing this for just one sensor, then create a temporary table for the information and use an auto-incremented id column. Then insert the data into it:
insert into temp_rawmetric (orig_row_id, sensor_id, timestamp, angle)
select orig_row_id, sensor_id, timestamp, angle
from raw_metric
order by sensor_id, timestamp;
Be sure your table has a temp_rawmetric_id column that is auto-incremented and the primary key (creates an index automatically). The order by makes sure this is incremented according to the timestamp.
Then you can do your query as:
select trm.sensor_id, trm.angle,
trm.timestamp as startTime, trmnext.timestamp as endTime
from temp_rawmetric trm left outer join
temp_rawmetric trmnext
on trmnext.temp_rawmetric_id = trm.temp_rawmetric_id+1;
This will require one pass through the original data to extract it, and then a primary-key join on the temporary table. The first might take some time; the second should be pretty quick.
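A runnable miniature of the whole recipe (Python with an in-memory SQLite stand-in; data invented). One detail added here: the self-join also matches sensor_id, so a sensor's last reading pairs with NULL rather than with the next sensor's first reading:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE raw_metric (row_id INTEGER PRIMARY KEY,
               sensor_id INTEGER, timestamp INTEGER, angle REAL)""")
data = [(1, 1, 100, 0.1), (2, 1, 250, 0.2), (3, 2, 120, 0.3),
        (4, 1, 400, 0.4), (5, 2, 500, 0.5)]
con.executemany("INSERT INTO raw_metric VALUES (?, ?, ?, ?)", data)

# Renumber densely in (sensor_id, timestamp) order.
con.execute("""CREATE TEMP TABLE temp_rawmetric (
                 temp_rawmetric_id INTEGER PRIMARY KEY AUTOINCREMENT,
                 orig_row_id INTEGER, sensor_id INTEGER,
                 timestamp INTEGER, angle REAL)""")
con.execute("""INSERT INTO temp_rawmetric (orig_row_id, sensor_id, timestamp, angle)
               SELECT row_id, sensor_id, timestamp, angle
               FROM raw_metric ORDER BY sensor_id, timestamp""")

# Self-join on seq+1 acts as lead(); sensor_id match stops cross-sensor pairs.
pairs = con.execute("""
    SELECT trm.sensor_id, trm.timestamp, trmnext.timestamp
    FROM temp_rawmetric trm
    LEFT JOIN temp_rawmetric trmnext
      ON trmnext.temp_rawmetric_id = trm.temp_rawmetric_id + 1
     AND trmnext.sensor_id = trm.sensor_id
    ORDER BY trm.temp_rawmetric_id""").fetchall()

# Each reading paired with its next reading (None at a sensor's end).
assert pairs == [(1, 100, 250), (1, 250, 400), (1, 400, None),
                 (2, 120, 500), (2, 500, None)]
```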
SELECT rm1.row_id,
       HEX(rm1.sensor_id),
       rm1.angle,
       (COALESCE(rm2.timestamp, UNIX_TIMESTAMP() * 1000) - rm1.timestamp) AS duration
FROM raw_metric rm1
LEFT OUTER JOIN raw_metric rm2
  ON rm2.sensor_id = rm1.sensor_id
 AND rm2.timestamp = (
       SELECT MIN(timestamp)
       FROM raw_metric rm3
       WHERE rm3.sensor_id = rm1.sensor_id
         AND rm3.timestamp > rm1.timestamp
     )
If you use an auto_increment primary key, you may replace timestamp with row_id in the query's condition part, like this:
SELECT row_id,
HEX(sensor_id),
angle,
(
COALESCE((
SELECT MIN(`timestamp`)
FROM raw_metric AS rm2
WHERE rm2.`row_id` > rm1.`row_id`
AND rm2.sensor_id = rm1.sensor_id
), UNIX_TIMESTAMP() * 1000) - `timestamp`
) / 1000 AS duration
FROM raw_metric AS rm1
It should run somewhat faster.
You can also add one more subquery to quickly select the row id of the next sensor value:
SELECT row_id,
HEX(sensor_id),
angle,
(
COALESCE((
SELECT timestamp FROM raw_metric AS rm1a
WHERE row_id =
(
SELECT MIN(`row_id`)
FROM raw_metric AS rm2
WHERE rm2.`row_id` > rm1.`row_id`
AND rm2.sensor_id = rm1.sensor_id
)
), UNIX_TIMESTAMP() * 1000) - `timestamp`
) / 1000 AS duration
FROM raw_metric AS rm1
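A small check (Python with an in-memory SQLite stand-in, invented data) that the row_id-based seek returns the same gaps as the timestamp-based one, which holds exactly when row_id order matches timestamp order per sensor:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE raw_metric (row_id INTEGER PRIMARY KEY,
               sensor_id INTEGER, timestamp INTEGER)""")
con.executemany("INSERT INTO raw_metric VALUES (?, ?, ?)",
                [(1, 1, 1000), (2, 2, 1500), (3, 1, 3000), (4, 1, 6000)])

# Seek the next reading by timestamp (original query shape).
by_time = con.execute("""
    SELECT row_id, (SELECT MIN(timestamp) FROM raw_metric rm2
                    WHERE rm2.timestamp > rm1.timestamp
                      AND rm2.sensor_id = rm1.sensor_id) - timestamp
    FROM raw_metric rm1""").fetchall()

# Seek the next reading by row_id, then fetch its timestamp.
by_id = con.execute("""
    SELECT row_id, (SELECT timestamp FROM raw_metric
                    WHERE row_id = (SELECT MIN(row_id) FROM raw_metric rm2
                                    WHERE rm2.row_id > rm1.row_id
                                      AND rm2.sensor_id = rm1.sensor_id)) - timestamp
    FROM raw_metric rm1""").fetchall()

# Same gaps either way; None marks each sensor's last reading.
assert by_time == by_id
```

The advantage in MySQL is that the inner seek can use the clustered primary key rather than a secondary timestamp index.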
I have precomputed some similarities (about 70 million) and want to find the similarities from one track to all other tracks. I only need the top 100 tracks with the highest similarities. For my calculations I run this query about 15,000 times with different tracks as input. After a boot of the machine, one calculation needs over 600 seconds for all 15k queries. After several runs, MySQL has - I think - cached the indices, so the complete run needs about 15 seconds. My only worry: I have a very high Handler_read_rnd_next value.
I have a MySQL table with this structure:
CREATE TABLE `similarity` (
`similarityID` int(11) NOT NULL AUTO_INCREMENT,
`trackID1` int(11) NOT NULL,
`trackID2` int(11) NOT NULL,
`tracksim` double DEFAULT NULL,
`timesim` double DEFAULT NULL,
`tagsim` double DEFAULT NULL,
`simsum` double DEFAULT NULL,
PRIMARY KEY (`similarityID`),
UNIQUE KEY `trackID1` (`trackID1`,`trackID2`),
KEY `trackID1sum` (`trackID1`,`simsum`),
KEY `trackID2sum` (`trackID2`,`simsum`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8;
I want to run a great many queries on this. The queries look like this:
// simsum is a sum over tracksim, timesim, tagsim
(
SELECT similarityID, trackID2, tracksim, timesim, tagsim, simsum
FROM similarity
WHERE trackID1 = 512
ORDER BY simsum DESC
LIMIT 0,100
)
UNION
(
SELECT similarityID, trackID1, tracksim, timesim, tagsim, simsum
FROM similarity
WHERE trackID2 = 512
ORDER BY simsum DESC
LIMIT 0,100
)
ORDER BY simsum DESC
LIMIT 0,100
The query is quite fast, under 0.1 sec (see my previous question), but I'm worried about the huge numbers on the status page. I thought I had set every index that the query uses.
Handler_read_rnd        88.0M
Handler_read_rnd_next   20.0G
Is there anything "wrong"? Could I get the query even faster? Do I have to worry about the 20G?
Thanks in advance
The first thing which is obviously wrong here is that you seem to be storing a directional relationship between pairs: if f(a,b) === f(b,a), then you could simplify your system a lot by swapping track1 and track2 wherever track1 is greater than track2, while retaining the existing primary key (and ignoring collisions).
You're only halving the amount of data - so it won't be a huge performance increase.
There may be further scope for improving performance, but this is very much dependent on how frequently the data changes; more specifically, you should prune the records whose similarity is not in the top 100.
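A sketch of the canonicalization (plain Python; the function name is mine): order each pair so the smaller track id comes first, making f(a,b) and f(b,a) collapse to one row:

```python
def canonical_pair(a, b):
    """Order-independent key for an undirected similarity edge."""
    return (a, b) if a < b else (b, a)

assert canonical_pair(512, 17) == canonical_pair(17, 512) == (17, 512)

# Deduplicate a stream of (trackID1, trackID2, simsum) edges,
# keeping the first value seen per canonical pair.
edges = [(512, 17, 0.9), (17, 512, 0.9), (3, 512, 0.4)]
seen = {}
for t1, t2, sim in edges:
    seen.setdefault(canonical_pair(t1, t2), sim)
assert seen == {(17, 512): 0.9, (3, 512): 0.4}
```

With canonical pairs stored, each lookup for track X needs only two index probes: (trackID1 = X) and (trackID2 = X), with no UNION branch duplicated.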