I have an SQL query that selects, from 40 million records, 1200 random top-retweeted tweets that were retweeted at least 50 times and whose tweetDate is more than 4 days old. The query below works, but it takes 40 minutes. Is there a faster version of this query?
SELECT
originalTweetId, Count(*) as total, tweetContent, tweetDate
FROM
twitter_gokhan2.tweetentities
WHERE
originalTweetId IS NOT NULL
AND originalTweetId <> -1
AND isRetweet = true
AND (tweetDate < DATE_ADD(CURDATE(), INTERVAL - 4 DAY))
GROUP BY originalTweetId
HAVING total > 50
ORDER BY RAND()
LIMIT 0, 1200;
---------------------------------------------------------------
Table creation sql is like:
CREATE TABLE `tweetentities` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`tweetId` bigint(20) NOT NULL,
`tweetContent` varchar(360) DEFAULT NULL,
`tweetDate` datetime DEFAULT NULL,
`userId` bigint(20) DEFAULT NULL,
`userName` varchar(100) DEFAULT NULL,
`retweetCount` int(11) DEFAULT '0',
`keyword` varchar(500) DEFAULT NULL,
`isRetweet` bit(1) DEFAULT b'0',
`isCompleted` bit(1) DEFAULT b'0',
`applicationId` int(11) DEFAULT NULL,
`latitudeData` double DEFAULT NULL,
`longitude` double DEFAULT NULL,
`originalTweetId` bigint(20) DEFAULT NULL,
PRIMARY KEY (`id`),
KEY `index` (`originalTweetId`),
KEY `index3` (`applicationId`),
KEY `index2` (`tweetId`),
KEY `index4` (`userId`),
KEY `index5` (`userName`),
KEY `index6` (`isRetweet`),
KEY `index7` (`tweetDate`),
KEY `index8` (`originalTweetId`),
KEY `index9` (`isCompleted`),
KEY `index10` (`tweetContent`(191))
) ENGINE=InnoDB AUTO_INCREMENT=41501628 DEFAULT CHARSET=utf8mb4$$
You are, of course, summarizing a huge number of records, then randomizing them. This kind of thing is hard to make fast. Going back to the beginning of time makes it worse. Searching on a null condition just trashes it.
If you want this to perform reasonably, you must get rid of the IS NOT NULL selection. Otherwise, it will perform badly.
But let us try to find a reasonable solution. First, let's get the originalTweetId values we need.
SELECT MIN(id) originalId,
MIN(tweetDate) tweetDate,
originalTweetId,
Count(*) as total
FROM twitter_gokhan2.tweetentities
WHERE originalTweetId <> -1
/*AND originalTweetId IS NOT NULL We have to leave this out for perf reasons */
AND isRetweet = true
AND tweetDate < CURDATE() - INTERVAL 4 DAY
AND tweetDate > CURDATE() - INTERVAL 30 DAY /*let's add this, if we can*/
GROUP BY originalTweetId
HAVING total >= 50
This summary query gives us the lowest id number and date in your database for each subject tweet.
To get this to run fast, we need a compound index on (originalTweetId, isRetweet, tweetDate, id). The query will do a range scan of this index on tweetDate, which is about as fast as you can hope for. Debug this query, both for correctness and performance, then move on.
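A sketch of that compound index (retweet_summary is a placeholder name of my choosing):
ALTER TABLE twitter_gokhan2.tweetentities
    ADD INDEX retweet_summary (originalTweetId, isRetweet, tweetDate, id);
(InnoDB secondary indexes implicitly carry the primary key, so listing id explicitly mainly documents the intent.)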
Now do the randomization. Let's do this with the minimum amount of data we can, to avoid sorting some enormous amount of stuff.
SELECT originalId, originalTweetId, tweetDate, total, RAND() AS randomOrder
FROM (
SELECT MIN(id) originalId,
MIN(tweetDate) tweetDate,
originalTweetId,
Count(*) as total
FROM twitter_gokhan2.tweetentities
WHERE originalTweetId <> -1
/*AND originalTweetId IS NOT NULL We have to leave this out for perf reasons */
AND isRetweet = true
AND tweetDate < CURDATE() - INTERVAL 4 DAY
AND tweetDate > CURDATE() - INTERVAL 30 DAY /*let's add this, if we can*/
GROUP BY originalTweetId
HAVING total >= 50
) AS retweets
ORDER BY randomOrder
LIMIT 1200
Great. Now we have a list of 1200 tweet ids and dates in random order. Now let's go get the content.
SELECT a.originalTweetId, a.total, b.tweetContent, a.tweetDate
FROM (
/* that whole query above */
) AS a
JOIN twitter_gokhan2.tweetentities AS b ON (a.originalId = b.id)
ORDER BY a.randomOrder
See how this goes? Use a compound index to do your summary, and do it on the minimum amount of data. Then do the randomizing, then go fetch the extra data you need.
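For reference, here is the whole thing assembled into one statement (a sketch only; test each stage on your own data first):
SELECT a.originalTweetId, a.total, b.tweetContent, a.tweetDate
FROM (
    SELECT originalId, originalTweetId, tweetDate, total, RAND() AS randomOrder
    FROM (
        SELECT MIN(id) originalId,
               MIN(tweetDate) tweetDate,
               originalTweetId,
               COUNT(*) AS total
        FROM twitter_gokhan2.tweetentities
        WHERE originalTweetId <> -1
          AND isRetweet = true
          AND tweetDate < CURDATE() - INTERVAL 4 DAY
          AND tweetDate > CURDATE() - INTERVAL 30 DAY
        GROUP BY originalTweetId
        HAVING total >= 50
    ) AS summary
    ORDER BY randomOrder
    LIMIT 1200
) AS a
JOIN twitter_gokhan2.tweetentities AS b ON (a.originalId = b.id)
ORDER BY a.randomOrder;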
You're selecting a huge number of records by selecting every record more than 4 days old....
Since the query takes a huge amount of time, why not simply prepare the results using an independent script which runs repeatedly in the background....
You might be able to make the assumption that if it's a retweet, the originalTweetId cannot be null/-1.
Just to clarify... did you really mean to query everything OLDER than 4 days?
AND (tweetDate < DATE_ADD(CURDATE(), INTERVAL - 4 DAY))
OR... did you mean you only wanted to aggregate RECENT TWEETS WITHIN the last 4 days? To me, tweets that happened 2 years ago would be worthless to current events... If that's the case, you might be better off just changing to
AND (tweetDate >= DATE_ADD(CURDATE(), INTERVAL - 4 DAY))
See if this isn't a bit faster than 40 minutes:
Test first without the commented-out lines, then re-add them to compare the performance impact (ORDER BY RAND() in particular is known to be horrible).
SELECT
originalTweetId,
total,
-- tweetContent, -- may slow things somewhat
tweetDate
FROM (
SELECT
originalTweetId,
COUNT(*) AS total,
-- tweetContent, -- may slow things somewhat
MIN(tweetDate) AS tweetDate,
MAX(isRetweet) AS isRetweet
FROM twitter_gokhan2.tweetentities
GROUP BY originalTweetId
) AS t
WHERE originalTweetId > 0
AND isRetweet
AND tweetDate < DATE_ADD(CURDATE(), INTERVAL - 4 DAY)
AND total > 50
-- ORDER BY RAND() -- very likely to slow performance,
-- test with and without...
LIMIT 0, 1200;
PS: originalTweetId should ideally be indexed. (Per the CREATE TABLE above, it already is; in fact `index` and `index8` are duplicate indexes on that column, so one of them can be dropped.)
My table is defined as following:
CREATE TABLE `tracking_info` (
`tid` int(25) NOT NULL AUTO_INCREMENT,
`tracking_customer_id` int(11) NOT NULL DEFAULT '0',
`tracking_content` text NOT NULL,
`tracking_type` int(11) NOT NULL DEFAULT '0',
`time_recorded` int(25) NOT NULL DEFAULT '0',
PRIMARY KEY (`tid`),
KEY `time_recorded` (`time_recorded`),
KEY `tracking_idx` (`tracking_customer_id`,`tracking_type`,
`time_recorded`,`tid`)
) ENGINE=MyISAM
The table contains about 150 million records. Here is the query:
SELECT tracking_content, tracking_type, time_recorded
FROM tracking_info
WHERE FROM_UNIXTIME(time_recorded) > DATE_SUB( NOW( ) ,
INTERVAL 90 DAY )
AND tracking_customer_id = 111111
ORDER BY time_recorded DESC
LIMIT 0,10
It takes about a minute to run the query even without ORDER BY. Any thoughts? Thanks in advance!
First, refactor the query so it's sargable.
SELECT tracking_content, tracking_type, time_recorded
FROM tracking_info
WHERE time_recorded > UNIX_TIMESTAMP(DATE_SUB(NOW(), INTERVAL 90 DAY))
AND tracking_customer_id = 111111
ORDER BY time_recorded DESC
LIMIT 0,10;
Then add this multi-column index:
ALTER TABLE tracking_info
ADD INDEX cust_time (tracking_customer_id, time_recorded DESC);
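-- Note: before MySQL 8.0, ASC/DESC in an index definition is parsed but
-- ignored; the index above is still usable for ORDER BY time_recorded DESC,
-- which the optimizer satisfies by reading the index in reverse.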
Why will this help?
It compares the raw data in a column with a constant, rather than using the FROM_UNIXTIME() function to transform all the data in that column of the table. That makes the query sargable.
The query planner can random-access the index I suggest to the first eligible row, then read ten rows sequentially from the index and look up what it needs from the table, then stop.
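You can confirm the plan with EXPLAIN (a sketch; exact output varies by version and data, but you want to see the cust_time index chosen with a range access type):
EXPLAIN SELECT tracking_content, tracking_type, time_recorded
FROM tracking_info
WHERE time_recorded > UNIX_TIMESTAMP(DATE_SUB(NOW(), INTERVAL 90 DAY))
  AND tracking_customer_id = 111111
ORDER BY time_recorded DESC
LIMIT 0,10;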
You can rephrase the query to isolate time_recorded, as in:
SELECT tracking_content, tracking_type, time_recorded
FROM tracking_info
WHERE time_recorded > UNIX_TIMESTAMP(DATE_SUB(NOW(), INTERVAL 90 DAY))
AND tracking_customer_id = 111111
ORDER BY time_recorded DESC
LIMIT 0,10
Then, the following index will probably make the query faster:
create index ix1 on tracking_info (tracking_customer_id, time_recorded);
There are 3 things to do:
Change to InnoDB.
Add INDEX(tracking_customer_id, time_recorded)
Rephrase to time_recorded > UNIX_TIMESTAMP(NOW() - INTERVAL 90 DAY) (see the sketch below)
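Taken together, a minimal sketch of all three (cust_time2 is a placeholder name; converting 150 million rows to InnoDB is itself a long operation, so plan for it):
ALTER TABLE tracking_info ENGINE=InnoDB;
ALTER TABLE tracking_info
    ADD INDEX cust_time2 (tracking_customer_id, time_recorded);
SELECT tracking_content, tracking_type, time_recorded
FROM tracking_info
WHERE time_recorded > UNIX_TIMESTAMP(NOW() - INTERVAL 90 DAY)
  AND tracking_customer_id = 111111
ORDER BY time_recorded DESC
LIMIT 0,10;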
Non-critical notes:
int(25) -- the "25" has no meaning. You get a 4-byte signed number regardless.
There are datatypes DATETIME and TIMESTAMP; consider using one of them instead of an INT that represents seconds since sometime. (It would be messy to change, so don't bother.)
When converting to InnoDB, the size on disk will double or triple.
I have a system that checks websites for certain data at set frequencies. Each website has its own check frequency in the crawl_frequency column. This value is in days.
I have a table like this
CREATE TABLE `websites` (
`id` BIGINT(20) UNSIGNED NOT NULL AUTO_INCREMENT,
`domain` VARCHAR(191) NOT NULL COLLATE 'utf8mb4_unicode_ci',
`crawl_frequency` TINYINT(3) UNSIGNED NOT NULL DEFAULT '3',
`last_crawled_start` TIMESTAMP NULL DEFAULT NULL,
PRIMARY KEY (`id`)
)
I want to run queries to find new websites to check at their specified check frequency/interval. At the moment I have this query which works fine if the crawl_frequency for a website is set to one day.
SELECT domain
FROM websites
WHERE last_crawled_start <= (now() - INTERVAL 1 DAY)
LIMIT 1
Is there any way, in a MySQL query, to use the value in the crawl_frequency column of each row in the WHERE clause?
So, for example, I'd like to do something like:
SELECT domain
FROM websites
WHERE last_crawled_start <= (now() - INTERVAL {{INSERT VALUE OF CRAWL FREQUENCY FOR THIS PARTICULAR WEBSITE}} DAY)
LIMIT 1
You can do it like so:
SELECT domain
FROM websites
WHERE last_crawled_start <= NOW() - INTERVAL crawl_frequency DAY
LIMIT 1
Yes, really.
You can try the DATEDIFF function (note that because the column is wrapped in a function, this predicate is not sargable and won't use an index on last_crawled_start), like this:
SELECT domain FROM websites
WHERE DATEDIFF(NOW(), last_crawled_start) > crawl_frequency
LIMIT 1;
Everything I read about MySQL said the INTERVAL value can't be a variable, but you can rephrase with another function, e.g.:
SELECT * FROM websites
WHERE
(unix_timestamp() - unix_timestamp(last_crawled_start))/86400.0 > crawl_frequency
I have a MySQL MyISAM table whose structure is as below.
CREATE TABLE `VIEW_LOG_TRIGGER_TEMP` (
`ID` mediumint(9) NOT NULL AUTO_INCREMENT,
`VIEWER` int(10) unsigned NOT NULL DEFAULT '0',
`VIEWED` int(10) unsigned NOT NULL DEFAULT '0',
`DATE` datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
`SEEN` char(1) NOT NULL,
PRIMARY KEY (`ID`),
UNIQUE KEY `VIEWED` (`VIEWED`,`VIEWER`),
KEY `VIEWER` (`VIEWER`),
KEY `DATE` (`DATE`)
) ENGINE=MyISAM
I want to get the records between two dates, fetching two records per query, so I executed the following queries:
SELECT * FROM test.VIEW_LOG_TRIGGER_TEMP WHERE DATE >= '2018-02-11 00:00:00' AND DATE < '2018-02-12 00:00:00' LIMIT 2;
SELECT * FROM test.VIEW_LOG_TRIGGER_TEMP WHERE DATE >= '2018-02-11 00:00:00' AND DATE < '2018-02-12 00:00:00' LIMIT 2,2;
SELECT * FROM test.VIEW_LOG_TRIGGER_TEMP WHERE DATE >= '2018-02-11 00:00:00' AND DATE < '2018-02-12 00:00:00' LIMIT 4,2;
When I run these, the record with ID 3 is never fetched, although I expect every record in the table to show up across the three queries. If I execute the same queries with ORDER BY DATE ASC added, I get the expected result.
Is it that each query produces a fresh result set, with its own ordering?
The problem is the LIMIT 4,2 and LIMIT 2,2: they mean you don't want all the results.
To explain more:
LIMIT 4,2 means you want only 2 results, skipping the first 4.
The best explanation of how MySQL handles LIMIT and ORDER BY is the LIMIT optimization page on the MySQL website.
You used a range query on DATE, and your ID is the primary key. The rows come back in increasing date and time order (the order of the DATE index), not in ID order.
So, with no LIMIT, the rows come back in this order:
Id
4
2
1
5
3
LIMIT 2 for Query 1 -> you get IDs 4 and 2.
LIMIT 2,2 for Query 2 -> the offset is 2, so it skips the first two results (4 and 2) and returns IDs 1 and 5.
For Query 3 -> try LIMIT 4,1; you will get ID 3.
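If you need stable pagination, make the order explicit and deterministic, e.g. (adding ID as a tie-breaker is my own suggestion, to guard against rows with identical DATE values):
SELECT * FROM test.VIEW_LOG_TRIGGER_TEMP
WHERE DATE >= '2018-02-11 00:00:00' AND DATE < '2018-02-12 00:00:00'
ORDER BY DATE ASC, ID ASC
LIMIT 2,2;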
Imagine I have a table like this:
CREATE TABLE `Alarms` (
`AlarmId` INT UNSIGNED NOT NULL AUTO_INCREMENT
COMMENT "32-bit ID",
`Ended` BOOLEAN NOT NULL DEFAULT FALSE
COMMENT "Whether the alarm has ended",
`StartedAt` TIMESTAMP NOT NULL DEFAULT 0
COMMENT "Time at which the alarm was raised",
`EndedAt` TIMESTAMP NULL
COMMENT "Time at which the alarm ended (NULL iff Ended=false)",
PRIMARY KEY (`AlarmId`),
KEY `Key4` (`StartedAt`),
KEY `Key5` (`Ended`, `EndedAt`)
) ENGINE=InnoDB;
Now, for a GUI, I want to produce:
a list of days during which at least one alarm was "active"
for each day, how many alarms started
for each day, how many alarms ended
The intent is to present users with a dropdown box from which they can choose a date to see any alarms active (started before or during, and ended during or after) on that day. So something like this:
+-----------------------------------+
| Choose day ▼ |
+-----------------------------------+
| 2017-12-03 (3 started) |
| 2017-12-04 (1 started, 2 ended) |
| 2017-12-05 (2 ended) |
| 2017-12-16 (1 started, 1 ended) |
| 2017-12-17 (1 started) |
| 2017-12-18 |
| 2017-12-19 |
| 2017-12-20 |
| 2017-12-21 (1 ended) |
+-----------------------------------+
I will probably force an age limit on alarms so that they are archived/removed after, say, a year. So that's the scale we're working with.
I expect anywhere from zero to tens of thousands of alarms per day.
My first thought was a reasonably simple:
(
SELECT
COUNT(`AlarmId`) AS `NumStarted`,
NULL AS `NumEnded`,
DATE(`StartedAt`) AS `Date`
FROM `Alarms`
GROUP BY `Date`
)
UNION
(
SELECT
NULL AS `NumStarted`,
COUNT(`AlarmId`) AS `NumEnded`,
DATE(`EndedAt`) AS `Date`
FROM `Alarms`
WHERE `Ended` = TRUE
GROUP BY `Date`
);
This uses both of my indexes, with join type ref and ref type const, which I'm happy with. I can iterate over the resultset, dumping the non-NULL values found into a C++ std::map<boost::gregorian::date, std::pair<size_t, size_t>> (then "filling the gaps" for days on which no alarms started or ended, but were active from previous days).
The spanner I'm throwing in the works is that the list should take into account location-based timezones, but only my application knows about timezones. For logistical reasons, the MySQL session is deliberately SET time_zone = '+00:00' so that timestamps are all kicked out in UTC. (Various other tools are then used to perform any necessary location-specific corrections for historical timezones, taking into account DST and whatnot.) For the rest of the application this is great, but for this particular query it breaks the date GROUPing.
Maybe I could pre-calculate (in my application) a list of time ranges, and generate a huge query of 2n UNIONed queries (where n = number of "days" to check) and get the NumStarted and NumEnded counts that way:
-- Example assuming desired timezone is -05:00
--
-- 3rd December
(
SELECT
COUNT(`AlarmId`) AS `NumStarted`,
NULL AS `NumEnded`,
'2017-12-03' AS `Date`
FROM `Alarms`
-- Alarm started during 3rd December UTC-5
WHERE `StartedAt` >= '2017-12-02 19:00:00'
AND `StartedAt` < '2017-12-03 19:00:00'
GROUP BY `Date`
)
UNION
(
SELECT
NULL AS `NumStarted`,
COUNT(`AlarmId`) AS `NumEnded`,
'2017-12-03' AS `Date`
FROM `Alarms`
-- Alarm ended during 3rd December UTC-5
WHERE `EndedAt` >= '2017-12-02 19:00:00'
AND `EndedAt` < '2017-12-03 19:00:00'
GROUP BY `Date`
)
UNION
-- 4th December
(
SELECT
COUNT(`AlarmId`) AS `NumStarted`,
NULL AS `NumEnded`,
'2017-12-04' AS `Date`
FROM `Alarms`
-- Alarm started during 4th December UTC-5
WHERE `StartedAt` >= '2017-12-03 19:00:00'
AND `StartedAt` < '2017-12-04 19:00:00'
GROUP BY `Date`
)
UNION
(
SELECT
NULL AS `NumStarted`,
COUNT(`AlarmId`) AS `NumEnded`,
'2017-12-04' AS `Date`
FROM `Alarms`
-- Alarm ended during 4th December UTC-5
WHERE `EndedAt` >= '2017-12-03 19:00:00'
AND `EndedAt` < '2017-12-04 19:00:00'
GROUP BY `Date`
)
UNION
-- 5th December
-- [..]
But, of course, even if I'm restricting the database to a year's worth of historical alarms, that's up to like 730 UNIONed SELECTs. My spidey senses tell me that this is a very bad idea.
How else can I generate these sort of time-grouped statistics? Or is this really silly and I should look at resolving the problems preventing me from using tzinfo with MySQL?
Must work on MySQL 5.1.73 (CentOS 6) and MariaDB 5.5.50 (CentOS 7).
The UNION approach is actually not far off a viable solution; you can achieve the same thing, without a catastrophically large query, by recruiting a temporary table:
CREATE TEMPORARY TABLE `_ranges` (
`Start` TIMESTAMP NOT NULL DEFAULT 0,
`End` TIMESTAMP NOT NULL DEFAULT 0,
PRIMARY KEY (`Start`, `End`)
);
INSERT INTO `_ranges` VALUES
-- 3rd December UTC-5
('2017-12-02 19:00:00', '2017-12-03 19:00:00'),
-- 4th December UTC-5
('2017-12-03 19:00:00', '2017-12-04 19:00:00'),
-- 5th December UTC-5
('2017-12-04 19:00:00', '2017-12-05 19:00:00'),
-- etc.
;
-- Now the queries needed are simple and also quick:
SELECT
`_ranges`.`Start`,
COUNT(`AlarmId`) AS `NumStarted`
FROM `_ranges` LEFT JOIN `Alarms`
ON `Alarms`.`StartedAt` >= `_ranges`.`Start`
AND `Alarms`.`StartedAt` < `_ranges`.`End`
GROUP BY `_ranges`.`Start`;
SELECT
`_ranges`.`Start`,
COUNT(`AlarmId`) AS `NumEnded`
FROM `_ranges` LEFT JOIN `Alarms`
ON `Alarms`.`EndedAt` >= `_ranges`.`Start`
AND `Alarms`.`EndedAt` < `_ranges`.`End`
GROUP BY `_ranges`.`Start`;
DROP TABLE `_ranges`;
(This approach was inspired by a DBA.SE post.)
Notice that there are two SELECTs — the original UNION is no longer possible, because temporary tables cannot be accessed twice in the same query. However, since we've already introduced additional statements anyway (the CREATE, INSERT and DROP), this seems to be a moot problem in the circumstances.
In both cases, each row represents one of our requested periods, and the first column equals the "start" part of the period (so that we can identify it in the resultset).
Be sure to use exception handling in your code as needed to ensure that _ranges is DROPped before your routine returns; although the temporary table is local to the MySQL session, if you're continuing to use that session afterwards then you probably want a clean state, particularly if this function is going to be used again.
If this is still too heavy, for example because you have so many time periods that populating the temporary table becomes unwieldy, or because multiple statements don't fit your calling code, or because your user doesn't have permission to create and drop temporary tables, you'll have to fall back on a simple GROUP BY over the day, converting the timezone in SQL, and ensure that mysql_tzinfo_to_sql is run whenever the system's tzdata is updated. A sketch of that fallback follows.
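This sketch covers the NumStarted side ('America/New_York' is a stand-in for whatever zone applies; it must have been loaded via mysql_tzinfo_to_sql):
SELECT
    DATE(CONVERT_TZ(`StartedAt`, '+00:00', 'America/New_York')) AS `Date`,
    COUNT(`AlarmId`) AS `NumStarted`
FROM `Alarms`
GROUP BY `Date`;
-- Grouping on a function of StartedAt cannot use Key4, so this scans the
-- whole table; with a one-year age limit on alarms that may be tolerable.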
We have a logging table which grows as new events happen. At the moment we have around 120,000 rows of log events stored.
The events table looks like this:
CREATE TABLE `EVENTS` (
`ID` int(11) NOT NULL AUTO_INCREMENT,
`EVENT` varchar(255) NOT NULL,
`ORIGIN` varchar(255) NOT NULL,
`TIME_STAMP` TIMESTAMP NOT NULL,
`ADDITIONAL_REMARKS` json DEFAULT NULL,
PRIMARY KEY (`ID`)
) ENGINE=InnoDB AUTO_INCREMENT=137007 DEFAULT CHARSET=utf8
ADDITIONAL_REMARKS is a JSON field because different applications log into this table and can attach extra information about the event. I did not want to impose a fixed structure here, because this information differs per application. For example, a project management application might log:
ID, "new task created", "app", NOW(), {"project": {"id": 1}, "creator": {"id": 1}}
while other applications have no projects or creators, but perhaps cats and owners they want to store in the ADDITIONAL_REMARKS field.
Queries can use the ADDITIONAL_REMARKS field to filter information for one specific application, like:
SELECT
DISTINCT(ADDITIONAL_REMARKS->"$.project.id") as 'project',
COUNT(CASE WHEN EVENT = 'new task created' THEN 1 END) AS 'new_task'
FROM EVENTS
WHERE DATE(TIME_STAMP) >= DATE(NOW()) - INTERVAL 30 DAY
AND ORIGIN = "app"
GROUP BY project
ORDER BY new_task DESC
LIMIT 10;
EXPLAIN output:
'1', 'SIMPLE', 'EVENTS', NULL, 'ALL', NULL, NULL, NULL, NULL, '136459', '100.00', 'Using where; Using temporary; Using filesort'
With this query I get the top 10 projects with the most created tasks over the last 30 days. It works fine, but the query gets slower and slower as our table grows. With 120,000 rows it takes over 30 seconds.
Do you know any way to improve the speed? The newest information in the table, with the highest id, is more important than older entries. I often look only for entries from the last X days. It would be useful to stop the query at the first entry older than X days, since all further entries are even older.
If TIME_STAMP is indexed, wrapping it in the DATE() function prevents the index from being used: the predicate is no longer sargable, because the function must be applied to every row before the comparison.
WHERE DATE(TIME_STAMP) >= DATE(NOW()) - INTERVAL 30 DAY
can be rewritten as:
WHERE TIME_STAMP >= DATE(NOW()) - INTERVAL 30 DAY
Do you know any way to improve the speed?
The only way I can see to speed up the query is a multicolumn index on TIME_STAMP and ORIGIN, like so:
ALTER TABLE EVENTS ADD KEY timestamp_origin (TIME_STAMP, ORIGIN);
combined with my query adjustment above.
EDIT
A derived table may also improve query speed, because it will use the new index:
SELECT
ADDITIONAL_REMARKS->"$.project.id" AS 'project',
COUNT(CASE WHEN EVENT = 'new task created' THEN 1 END) AS 'new_task'
FROM (
SELECT
*
FROM EVENTS
WHERE
TIME_STAMP >= DATE(NOW()) - INTERVAL 30 DAY
AND
ORIGIN = "app"
)
AS events_within_30_days
GROUP BY project
ORDER BY new_task DESC
LIMIT 10;
An inner select that first reduces the number of rows cut the query time from 30 sec to 0.05 sec.
It looks like this:
SELECT
ADDITIONAL_REMARKS->"$.project.id" AS 'project',
COUNT(CASE WHEN EVENT = 'new task created' THEN 1 END) AS 'new_task'
FROM (
SELECT *
FROM EVENTS WHERE
EVENT = 'new task created'
AND TIME_STAMP >= DATE(NOW()) - INTERVAL 30 DAY
AND ORIGIN = "app" ) AS events_within_30_days
GROUP BY project
ORDER BY new_task DESC
LIMIT 10;
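As a final note, with the filters now inside the derived table, an index that leads with the equality columns should serve it well. A sketch (the name and the ORIGIN-and-EVENT-first column order are my suggestion, on the usual principle that equality columns go before the range column):
ALTER TABLE EVENTS ADD KEY origin_event_time (ORIGIN, EVENT, TIME_STAMP);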