MySQL index barely speeding up simple query

I have this table, that contains around 80,000,000 rows.
CREATE TABLE `mytable` (
`date` date NOT NULL,
`parameters` mediumint(8) unsigned NOT NULL,
`num` tinyint(3) unsigned NOT NULL,
`val1` int(11) NOT NULL,
`val2` int(10) NOT NULL,
`active` tinyint(3) unsigned NOT NULL,
`ref` int(10) unsigned NOT NULL AUTO_INCREMENT,
PRIMARY KEY (`ref`) USING BTREE,
KEY `parameters` (`parameters`)
) ENGINE=MyISAM AUTO_INCREMENT=79092001 DEFAULT CHARSET=latin1
It's organized around 2 main columns: "parameters" and "date".
There are around 67,000 possible values for "parameters".
For each "parameters" value there are around 1,200 rows, each with a different date,
so for each date there are around 67,000 rows.
1,200 * 67,000 = 80,400,000.
Table size appears as 1.5GB, index size 1.4GB.
Now, I want to query the table to retrieve all rows for one "parameters" value
(actually I want to do it for each parameter, but this is a good start):
SELECT val1 FROM mytable WHERE parameters=1;
The first run gives me results in 8 seconds.
Subsequent runs for different but close values of parameters (2, 3, 4...) are instantaneous.
A run for a "far away" value (parameters=1000) gives me results in 8 seconds again.
I did tests running the same query without the index and got results in 20 seconds, so I guess the index is kicking in, as shown by EXPLAIN, but it's not giving a drastic jump in performance:
+----+-------------+----------+------+---------------+------------+---------+-------+------+-------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+----------+------+---------------+------------+---------+-------+------+-------+
| 1 | SIMPLE | mytable | ref | parameters | parameters | 3 | const | 1097 | |
+----+-------------+----------+------+---------------+------------+---------+-------+------+-------+
But I'm still baffled by the time taken for such an easy request (no join, directly on the index).
The server is a 2-year-old machine with 2 quad-core 2.6GHz CPUs running Ubuntu, with 4GB of RAM.
I've raised the key_buffer parameter to 1G and restarted MySQL, but noticed no change whatsoever.
Should I consider this normal? Or is there something I'm doing wrong? I get the feeling that with the right config this request should be almost immediate.

Try using a covering index, i.e. create an index that includes both of the columns you need. It won't need the second disk I/O to fetch the values from the main table since the data's right there in the index.
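Here that would mean an index on both the filtered column and the selected one. A minimal sketch, assuming val1 is the only column you need back (the index name is illustrative):

ALTER TABLE mytable ADD INDEX idx_parameters_val1 (parameters, val1);

With that index in place, EXPLAIN for SELECT val1 FROM mytable WHERE parameters=1 should show "Using index" in the Extra column, meaning the data file is never touched.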

Related

MySQL Date Range Query Optimization

I have a MySQL table structured like this:
CREATE TABLE `messages` (
`id` int NOT NULL AUTO_INCREMENT,
`author` varchar(250) COLLATE utf8mb4_unicode_ci NOT NULL,
`message` varchar(2000) COLLATE utf8mb4_unicode_ci NOT NULL,
`serverid` varchar(200) COLLATE utf8mb4_unicode_ci NOT NULL,
`date` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
`guildname` varchar(1000) COLLATE utf8mb4_unicode_ci NOT NULL,
PRIMARY KEY (`id`,`date`)
) ENGINE=InnoDB AUTO_INCREMENT=27769461 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;
I need to query this table for various statistics using date ranges for Grafana graphs; however, all of those queries are extremely slow, despite the table being indexed with a composite key of id and date.
"id" is auto-incrementing and date is also always increasing.
The queries generated by Grafana look like this:
SELECT
UNIX_TIMESTAMP(date) DIV 120 * 120 AS "time",
count(DISTINCT(serverid)) AS "servercount"
FROM messages
WHERE
date BETWEEN FROM_UNIXTIME(1615930154) AND FROM_UNIXTIME(1616016554)
GROUP BY 1
ORDER BY UNIX_TIMESTAMP(date) DIV 120 * 120
This query takes over 30 seconds to complete with 27 million records in the table.
Explaining the query results in this output:
+----+-------------+----------+------------+------+---------------+------+---------+------+----------+----------+-----------------------------+
| id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
+----+-------------+----------+------------+------+---------------+------+---------+------+----------+----------+-----------------------------+
| 1 | SIMPLE | messages | NULL | ALL | PRIMARY | NULL | NULL | NULL | 26952821 | 11.11 | Using where; Using filesort |
+----+-------------+----------+------------+------+---------------+------+---------+------+----------+----------+-----------------------------+
This indicates that MySQL is indeed using the composite primary key I created for indexing the data, but still has to scan almost the entire table, which I do not understand. How can I optimize this table for date range queries?
Plan A:
PRIMARY KEY(date, id), -- to cluster by date
INDEX(id) -- needed to keep AUTO_INCREMENT happy
Assuming the table is quite big, having date at the beginning of the PK puts the rows in any given date range right next to each other. This minimizes (somewhat) the I/O.
Plan B:
PRIMARY KEY(id),
INDEX(date, serverid)
Now the secondary index is exactly what is needed for the one query you have provided. It is optimized for searching by date, and it is smaller than the whole table, hence even faster (I/O-wise) than Plan A.
But, if you have a lot of different queries like this, adding a lot more indexes gets impractical.
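If you do go with Plan B, the DDL might look roughly like this (a sketch only; rebuilding the primary key on a ~27-million-row table takes time, and the index name is made up):

ALTER TABLE messages
    DROP PRIMARY KEY,
    ADD PRIMARY KEY (id),
    ADD INDEX idx_date_serverid (date, serverid);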
Plan C: There may be a still better way:
PRIMARY KEY(id),
INDEX(serverid, date)
In theory, it can hop through that secondary index, checking each serverid. But I am not sure that such an optimization exists.
Plan D: Do you need id for anything other than providing a unique PRIMARY KEY? If not, there may be other options.
The index on (id, date) doesn't help because the first column is id, not date.
You can either
(a) drop the current index and index (date, id) instead -- with date in the first position, the index can be used to filter on date regardless of the following columns -- or
(b) just create an additional index on (date) alone to support the query.
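For option (b), a minimal sketch (the index name is illustrative):

ALTER TABLE messages ADD INDEX idx_date (date);

This lets MySQL range-scan only the rows between the two timestamps instead of scanning all 27 million rows; the rows themselves are still read to get serverid for the DISTINCT count.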

Please help me optimize this MySQL SELECT statement

I have a query that takes roughly four minutes to run on a high-powered SSD server with no other notable processes running. I'd like to make it faster if possible.
The database stores a match history for a popular video game called Dota 2. In this game, ten players (five on each team) each select a "hero" and battle it out.
The intention of my query is to create a list of past matches along with how much of an "XP dependence" each team had, based on the heroes used. With 200,000 matches (and a 2,000,000-row matches-to-heroes relationship table) the query takes about four minutes. With 1,000,000 matches, it takes roughly 15 minutes.
I have full control of the server, so any configuration suggestions are also appreciated. Thanks for any help guys. Here are the details...
CREATE TABLE matches (
* match_id BIGINT UNSIGNED NOT NULL,
start_time INT UNSIGNED NOT NULL,
skill_level TINYINT NOT NULL DEFAULT -1,
* winning_team TINYINT UNSIGNED NOT NULL,
PRIMARY KEY (match_id),
KEY start_time (start_time),
KEY skill_level (skill_level),
KEY winning_team (winning_team));
CREATE TABLE heroes (
* hero_id SMALLINT UNSIGNED NOT NULL,
name CHAR(40) NOT NULL DEFAULT '',
faction TINYINT NOT NULL DEFAULT -1,
primary_attribute TINYINT NOT NULL DEFAULT -1,
group_index TINYINT NOT NULL DEFAULT -1,
match_count BIGINT UNSIGNED NOT NULL DEFAULT 0,
win_count BIGINT UNSIGNED NOT NULL DEFAULT 0,
* xp_from_wins BIGINT UNSIGNED NOT NULL DEFAULT 0,
* team_xp_from_wins BIGINT UNSIGNED NOT NULL DEFAULT 0,
xp_from_losses BIGINT UNSIGNED NOT NULL DEFAULT 0,
team_xp_from_losses BIGINT UNSIGNED NOT NULL DEFAULT 0,
gold_from_wins BIGINT UNSIGNED NOT NULL DEFAULT 0,
team_gold_from_wins BIGINT UNSIGNED NOT NULL DEFAULT 0,
gold_from_losses BIGINT UNSIGNED NOT NULL DEFAULT 0,
team_gold_from_losses BIGINT UNSIGNED NOT NULL DEFAULT 0,
included TINYINT UNSIGNED NOT NULL DEFAULT 0,
PRIMARY KEY (hero_id));
CREATE TABLE matches_heroes (
* match_id BIGINT UNSIGNED NOT NULL,
player_id INT UNSIGNED NOT NULL,
* hero_id SMALLINT UNSIGNED NOT NULL,
xp_per_min SMALLINT UNSIGNED NOT NULL,
gold_per_min SMALLINT UNSIGNED NOT NULL,
position TINYINT UNSIGNED NOT NULL,
PRIMARY KEY (match_id, hero_id),
KEY match_id (match_id),
KEY player_id (player_id),
KEY hero_id (hero_id),
KEY xp_per_min (xp_per_min),
KEY gold_per_min (gold_per_min),
KEY position (position));
Query
SELECT
matches.match_id,
SUM(CASE
WHEN position < 5 THEN xp_from_wins / team_xp_from_wins
ELSE 0
END) AS radiant_xp_dependence,
SUM(CASE
WHEN position >= 5 THEN xp_from_wins / team_xp_from_wins
ELSE 0
END) AS dire_xp_dependence,
winning_team
FROM
matches
INNER JOIN
matches_heroes
ON matches.match_id = matches_heroes.match_id
INNER JOIN
heroes
ON matches_heroes.hero_id = heroes.hero_id
GROUP BY
matches.match_id
Sample Results
match_id | radiant_xp_dependence | dire_xp_dependence | winning_team
2298874871 | 1.0164 | 0.9689 | 1
2298884079 | 0.9932 | 1.0390 | 0
2298885606 | 0.9877 | 1.0015 | 1
EXPLAIN
id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra
1 | SIMPLE | heroes | ALL | PRIMARY | NULL | NULL | NULL | 111 | Using temporary; Using filesort
1 | SIMPLE | matches_heroes | ref | PRIMARY,match_id,hero_id | hero_id | 2 | dota_2.heroes.hero_id | 3213 |
1 | SIMPLE | matches | eq_ref | PRIMARY | PRIMARY | 8 | dota_2.matches_heroes.match_id | 1 |
Machine Specs
Intel Xeon E5
E5-1630v3 4/8t
3.7 / 3.8 GHz
64 GB of RAM
DDR4 ECC 2133 MHz
2 x 480GB of SSD SOFT
Database
MariaDB 10.0
InnoDB
In all likelihood, the main performance driver is the GROUP BY. Sometimes, in MySQL, it can be faster to use correlated subqueries. So, try writing the query like this:
SELECT m.match_id,
(SELECT SUM(h.xp_from_wins / h.team_xp_from_wins)
FROM matches_heroes mh INNER JOIN
heroes h
ON mh.hero_id = h.hero_id
WHERE m.match_id = mh.match_id AND mh.position < 5
) AS radiant_xp_dependence,
(SELECT SUM(h.xp_from_wins / h.team_xp_from_wins)
FROM matches_heroes mh INNER JOIN
heroes h
ON mh.hero_id = h.hero_id
WHERE m.match_id = mh.match_id AND mh.position >= 5
) AS dire_xp_dependence,
m.winning_team
FROM matches m;
Then, you want indexes on:
matches_heroes(match_id, position)
heroes(hero_id, xp_from_wins, team_xp_from_wins)
For completeness, you might want this index as well:
matches(match_id, winning_team)
This would be more important if you added order by match_id to the query.
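Expressed as DDL, those suggested indexes might look like this (index names are illustrative):

ALTER TABLE matches_heroes ADD INDEX idx_match_position (match_id, position);
ALTER TABLE heroes ADD INDEX idx_hero_xp (hero_id, xp_from_wins, team_xp_from_wins);
ALTER TABLE matches ADD INDEX idx_match_winning (match_id, winning_team);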
As has already been mentioned in a comment, there is little you can do, because you select all data from the table. The query looks perfect.
The one idea that comes to mind is covering indexes. With indexes containing all the data needed for the query, the tables themselves don't have to be accessed anymore.
CREATE INDEX matches_quick ON matches(match_id, winning_team);
CREATE INDEX heroes_quick ON heroes(hero_id, xp_from_wins, team_xp_from_wins);
CREATE INDEX matches_heroes_quick ON matches_heroes (match_id, hero_id, position);
There is no guarantee that this will speed up your query, as you are still reading all the data, so running through the indexes may be just as much work as reading the tables. But there is a chance that the joins will be faster and there will probably be fewer physical reads. Just give it a try.
Waiting for another idea? :-)
Well, there is always the data warehouse approach. If you must run this query again and again and always for all matches ever played, then why not store the query results and access them later?
I suppose that matches already played won't be altered, so you could reuse all the results you computed, say, last week, and only retrieve results for the games played since then from your real tables.
Create a table archived_results. Add a flag archived to your matches table. Then add query results to the archived_results table and set the flag to TRUE for those matches. When you have to perform your query, you'd either refresh the archived_results table and simply show its contents, or combine archive and current data:
select match_id, radiant_xp_dependence, dire_xp_dependence, winning_team
from archived_results
union all
SELECT
matches.match_id,
SUM(CASE
WHEN position < 5 THEN xp_from_wins / team_xp_from_wins
ELSE 0
END) AS radiant_xp_dependence,
...
WHERE matches.archived = FALSE
GROUP BY matches.match_id;
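A minimal sketch of that structure, assuming the four columns returned by the query are all you need to archive (names and types are illustrative, chosen to match the schema above):

CREATE TABLE archived_results (
    match_id BIGINT UNSIGNED NOT NULL,
    radiant_xp_dependence DECIMAL(10,4) NOT NULL,
    dire_xp_dependence DECIMAL(10,4) NOT NULL,
    winning_team TINYINT UNSIGNED NOT NULL,
    PRIMARY KEY (match_id)
) ENGINE=InnoDB;

ALTER TABLE matches ADD COLUMN archived TINYINT UNSIGNED NOT NULL DEFAULT 0;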
People's comments about loading whole tables into memory got me thinking. I searched for "MySQL memory allocation" and learned how to change the buffer pool size for InnoDB tables. The default is much smaller than my database, so I ramped it up to 8 GB using the innodb_buffer_pool_size directive in my.cnf. The speed of the query increased drastically from 1308 seconds to only 114.
After researching more settings, my my.cnf file now looks like the following (no further speed improvements, but it should be better in other situations).
[mysqld]
bind-address=127.0.0.1
character-set-server=utf8
collation-server=utf8_general_ci
innodb_buffer_pool_size=8G
innodb_buffer_pool_dump_at_shutdown=1
innodb_buffer_pool_load_at_startup=1
innodb_flush_log_at_trx_commit=2
innodb_log_buffer_size=8M
innodb_log_file_size=64M
innodb_read_io_threads=64
innodb_write_io_threads=64
Thanks everyone for taking the time to help out. This will be a massive improvement to my website.

MySQL hanging on large SELECT

I'm trying to create a new table by joining four existing ones. My database is static, so making one large preprocessed table will simplify programming, and save lots of time in future queries. My query works fine when limited with a WHERE, but seems to either hang, or go too slowly to notice any progress.
Here's the working query. The result only takes a few seconds.
SELECT group.group_id, MIN(application.date), person.person_name, pers_appln.sequence
FROM group
JOIN application ON group.appln_id=application.appln_id
JOIN pers_appln ON pers_appln.appln_id=application.appln_id
JOIN person ON person.person_id=pers_appln.person_id
WHERE group_id="24601"
GROUP BY group.group_id, pers_appln.sequence
;
If I simply remove the WHERE line, it will run for days with nothing to show. Adding a CREATE TABLE newtable AS at the beginning does the same thing. It never moves beyond 0% progress.
The group, application, and person tables all use the MyISAM engine, while pers_appln uses InnoDB. The columns are all indexed. The table sizes range from about 40 million to 150 million rows. I know it's rather large, but I wouldn't think it would pose this much of a problem. The computer currently has 4GB of RAM.
Any ideas how to make this work?
Here's the SHOW CREATE TABLE info. There are no views or virtual tables:
CREATE TABLE `group` (
`APPLN_ID` int(10) unsigned NOT NULL,
`GROUP_ID` int(10) unsigned NOT NULL,
KEY `idx_appln` (`APPLN_ID`),
KEY `idx_group` (`GROUP_ID`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8
CREATE TABLE `application` (
`APPLN_ID` int(10) unsigned NOT NULL,
`APPLN_AUTH` char(2) NOT NULL DEFAULT '',
`APPLN_NR` varchar(20) NOT NULL DEFAULT '',
`APPLN_KIND` char(2) DEFAULT '',
`DATE` date DEFAULT NULL,
`IPR_TYPE` char(2) DEFAULT '',
PRIMARY KEY (`APPLN_ID`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8
CREATE TABLE `person` (
`PERSON_ID` int(10) unsigned NOT NULL,
`PERSON_CTRY_CODE` char(2) NOT NULL,
`PERSON_NAME` varchar(300) DEFAULT NULL,
`PERSON_ADDRESS` varchar(500) DEFAULT NULL,
KEY `idx_person` (`PERSON_ID`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 MAX_ROWS=30000000 AVG_ROW_LENGTH=100
CREATE TABLE `pers_appln` (
`PERSON_ID` int(10) unsigned NOT NULL,
`APPLN_ID` int(10) unsigned NOT NULL,
`SEQUENCE` smallint(4) unsigned DEFAULT NULL,
`PLACE` smallint(4) unsigned DEFAULT NULL,
KEY `idx_pers_appln` (`APPLN_ID`),
KEY `idx_person` (`PERSON_ID`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1
/*!50100 PARTITION BY HASH (appln_id)
PARTITIONS 20 */
Here's the EXPLAIN of my query:
+----+-------------+-------------+--------+----------------------------+-----------------+---------+--------------------------+----------+---------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-------------+--------+----------------------------+-----------------+---------+--------------------------+----------+---------------------------------+
| 1 | SIMPLE | person | ALL | idx_person | NULL | NULL | NULL | 47827690 | Using temporary; Using filesort |
| 1 | SIMPLE | pers_appln | ref | idx_application,idx_person | idx_person | 4 | mydb.person.PERSON_ID | 1 | |
| 1 | SIMPLE | application | eq_ref | PRIMARY | PRIMARY | 4 | mydb.pers_appln.APPLN_ID | 1 | |
| 1 | SIMPLE | group | ref | idx_application | idx_application | 4 | mydb.pers_appln.APPLN_ID | 1 | |
+----+-------------+-------------+--------+----------------------------+-----------------+---------+--------------------------+----------+---------------------------------+
Verify that key_buffer_size is about 200M and innodb_buffer_pool_size is about 1200M. Perhaps they could be bigger, but make sure you are not swapping.
group should have PRIMARY KEY(appln_id, group_id) and INDEX(group_id, appln_id) instead of the two KEYs it has.
pers_appln should have INDEX(person_id, appln_id) and INDEX(appln_id, person_id) instead of the two keys it has. If possible, one of those should be PRIMARY KEY, but watch out for the PARTITIONing.
A minor improvement would be to change those CHAR(2) fields to be CHARACTER SET ascii -- assuming you don't really need utf8. That would shrink the field from 6 bytes to 2 bytes per row.
The PARTITIONing is probably not helping at all. (No, I can't say that removing the PARTITIONing will speed it up much.)
If these suggestions do not help enough, please provide the output from EXPLAIN SELECT ...
Edit
Converting to InnoDB and specifying PRIMARY KEYs for all tables will help. This is because InnoDB "clusters" the PRIMARY KEY with the data. What you have now is a lot of bouncing between a MyISAM index and its data -- literally hundreds of millions of times. Assuming not everything can be cached in your small 4GB, that means a lot of disk I/O. I would not be surprised if the non-WHERE version would take a week to run. Even with InnoDB, there will be I/O, but some of it will be avoided because:
1. reaching into a table with the PK gets the data without another disk hit.
2. the extra indexes I proposed will avoid hitting the data, again avoiding an extra disk hit.
(Millions of references * "an extra disk hit" = days of time.)
If you switch all of your tables to InnoDB, you should lower key_buffer_size to 20M and raise innodb_buffer_pool_size to 1500M. (These are approximate; do not raise them so high that there is any swapping.)
Please show us the CREATE TABLEs with InnoDB -- I want to make sure each table has a PRIMARY KEY and which column(s) that is. The PRIMARY KEY makes a big difference in this particular situation.
For person, the MyISAM version has just a KEY(person_id). If you did not change the keys in the conversion, InnoDB will invent a PRIMARY KEY. When the JOIN to that table occurs, InnoDB will (1) drill down the secondary-index BTree to find that invented PK value, then (2) drill down the PK+data BTree to find the row. If, instead, person_id could be the PK, that JOIN would run twice as fast. Possibly even faster, depending on how big the table is and how much it needs to jump around in the index / data. That is, the two BTree lookups add to the pressure on the cache (buffer_pool).
How big is each table? What was the final value for innodb_buffer_pool_size? Once you have changed everything from MyISAM to InnoDB, set key_buffer_size to 40M or less, and set innodb_buffer_pool_size to about 70% of available RAM. If the Data + Index sizes for all the tables are less than the buffer_pool, then (once cache is primed) the query won't have to do any I/O. This is easily a 10x speedup.
pers_appln is a many-to-many relationship? Then, probably
PRIMARY KEY(appln_id, person_id),
INDEX(person_id, appln_id) -- if you need to go the other direction, too.
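Put together, the suggested changes might look roughly like this as DDL (a sketch only, using the names from the SHOW CREATE TABLE output above; it assumes person_id is unique in person and that the key-pair values really are unique, and the HASH partitioning on pers_appln requires appln_id to stay in any unique key -- test on a copy first):

ALTER TABLE `group`
    ENGINE=InnoDB,
    DROP INDEX idx_appln,
    DROP INDEX idx_group,
    ADD PRIMARY KEY (APPLN_ID, GROUP_ID),
    ADD INDEX idx_group_appln (GROUP_ID, APPLN_ID);

ALTER TABLE person
    ENGINE=InnoDB,
    DROP INDEX idx_person,
    ADD PRIMARY KEY (PERSON_ID);

ALTER TABLE pers_appln
    DROP INDEX idx_pers_appln,
    DROP INDEX idx_person,
    ADD PRIMARY KEY (APPLN_ID, PERSON_ID),
    ADD INDEX idx_person_appln (PERSON_ID, APPLN_ID);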
I found the solution: switching to an SSD. My table creation time went from an estimated 45 days to 16 hours. Previously, the database spent all its time with hard drive I/O, barely even using 5% of the CPU or RAM.
Thanks everyone.

~150ms on a 2-million-row MySQL MyISAM table

I'm learning about MySQL performance with a pet project consisting of ~2 million rows + ~600k rows (two MyISAM tables). A range query using BETWEEN on two indexed INT(10) columns, LIMITed to 1 returned result, takes about 160ms (including an INNER JOIN). I figure my configuration isn't optimised and am looking for advice on how to diagnose this, or perhaps for a "common configuration".
I created a gist containing both tables, the query and the contents of my.cnf.
I created the B-tree index after inserting all the data, which was imported from a CSV file from MaxMind's open database. I tried two separate indexes, and now a combined one, with no difference in performance.
I'm running this locally on a MacBook Pro clocking at 2.6GHz (i5) with 8GB of 1600MHz RAM. MySQL is installed using the downloadable binary from mysql's download page (unable to supply a third link because my rep is too low). It's a default installation with no major additions to the my.cnf config file, included in the gist (located under the /usr/local/mysql-5.6.xxx/ directory on my system).
My concern is that I'm reaching ~160ms, which indicates to me that I'm missing something. I've considered compressing the table, but I have a feeling that I'm missing other configurations. Also, myisampack wasn't in my PATH (I think), so I'm considering other optimisations before I explore that further.
Any advice is appreciated!
$ mysql --version
/usr/local/mysql-5.6.23-osx10.8-x86_64/bin/mysql Ver 14.14 Distrib 5.6.23, for osx10.8 (x86_64) using EditLine wrapper
Tables
CREATE TABLE `blocks` (
`id` int(11) unsigned NOT NULL AUTO_INCREMENT,
`begin_range` int(10) unsigned NOT NULL,
`end_range` int(10) unsigned NOT NULL,
`_location_id` int(11) unsigned DEFAULT NULL,
PRIMARY KEY (`id`),
KEY `begin_range` (`begin_range`,`end_range`)
) ENGINE=MyISAM AUTO_INCREMENT=2008839 DEFAULT CHARSET=ascii;
CREATE TABLE `locations` (
`id` int(11) unsigned NOT NULL AUTO_INCREMENT,
`country` varchar(2) NOT NULL DEFAULT '',
`region` varchar(255) DEFAULT NULL,
`city` varchar(255) DEFAULT NULL,
`postalcode` varchar(255) DEFAULT NULL,
`latitude` float NOT NULL,
`longitude` float NOT NULL,
`metro_code` int(11) DEFAULT NULL,
`area_code` int(11) DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=MyISAM AUTO_INCREMENT=641607 DEFAULT CHARSET=utf8;
Query
SELECT locations.latitude, locations.longitude
FROM blocks
INNER JOIN locations ON blocks._location_id = locations.id
WHERE INET_ATON('139.130.4.5') BETWEEN begin_range AND end_range
LIMIT 0, 1;
Edit:
Updated gist with EXPLAIN on the SELECT, also posted here for convenience.
EXPLAIN SELECT locations.latitude, locations.longitude FROM blocks INNER JOIN locations ON blocks._location_id = locations.id WHERE INET_ATON('94.137.106.123') BETWEEN begin_range AND end_range LIMIT 0, 1;
+----+-------------+-----------+--------+---------------+-------------+---------+---------------------------+---------+------------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-----------+--------+---------------+-------------+---------+---------------------------+---------+------------------------------------+
| 1 | SIMPLE | blocks | range | begin_range | begin_range | 4 | NULL | 1095345 | Using index condition; Using where |
| 1 | SIMPLE | locations | eq_ref | PRIMARY | PRIMARY | 4 | geoip.blocks._location_id | 1 | NULL |
+----+-------------+-----------+--------+---------------+-------------+---------+---------------------------+---------+------------------------------------+
2 rows in set (0.00 sec)
Edit 2: Included the data in the question for convenience.
The problem is that the normal approach (which your code exemplifies) leads to hitting 1,095,345 rows. I have an approach that can do that query in one disk hit, even when the cache is cold.
Excerpts from http://mysql.rjweb.org/doc.php/ipranges :
The Situation
Your data includes a large set of non-overlapping 'ranges'. These could be IP addresses, datetimes (show times for a single station), zipcodes, etc.
You have pairs of start and end values; one 'item' belongs to each such 'range'. So, instinctively, you create a table with start and end of the range, plus info about the item. Your queries involve a WHERE clause that compares for being between the start and end values.
The Problem
Once you get a large set of items, performance degrades. You play with the indexes, but find nothing that works well. The indexes fail to lead to optimal functioning because the database does not understand that the ranges are non-overlapping.
The Solution
I will present a solution that enforces the fact that items cannot have overlapping ranges. The solution builds a table to take advantage of that, then uses Stored Routines to get around the clumsiness imposed by it.
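The core idea, in a minimal sketch (table and column names here are illustrative, not the article's exact schema): store only the start of each non-overlapping range, keyed on it, and look an address up with a single descending index probe.

CREATE TABLE ip_blocks (
    ip_start INT UNSIGNED NOT NULL,      -- start of the range; the end is implied by the next row's start
    location_id INT UNSIGNED NOT NULL,   -- the item that owns this range
    PRIMARY KEY (ip_start)
) ENGINE=InnoDB;

SELECT location_id
FROM ip_blocks
WHERE ip_start <= INET_ATON('139.130.4.5')
ORDER BY ip_start DESC
LIMIT 1;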

MySQL I/O bound InnoDB query optimization problem without setting innodb_buffer_pool_size to 5GB

I got myself into a MySQL design scalability issue. Any help would be greatly appreciated.
The requirements:
Storing users' SOCIAL_GRAPH and USER_INFO about each user in their social graph. Many concurrent reads and writes per second occur. Dirty reads acceptable.
Current design:
We have 2 (relevant) tables. Both InnoDB for row locking, instead of table locking.
A USER_SOCIAL_GRAPH table that maps a logged-in user (user_id) to another user (related_user_id). The PRIMARY key is a composite of user_id and related_user_id.
A USER_INFO table with information about each related user. The PRIMARY key is (related_user_id).
Note 1: No relationships defined.
Note 2: Each table is now about 1GB in size, with 8 million and 2 million records, respectively.
Simplified table SQL creates:
CREATE TABLE `user_social_graph` (
`user_id` int(10) unsigned NOT NULL,
`related_user_id` int(11) NOT NULL,
PRIMARY KEY (`user_id`,`related_user_id`),
KEY `user_idx` (`user_id`)
) ENGINE=InnoDB;
CREATE TABLE `user_info` (
`related_user_id` int(10) unsigned NOT NULL,
`screen_name` varchar(20) CHARACTER SET latin1 DEFAULT NULL,
[... and many other non-indexed fields irrelevant]
`last_updated` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
PRIMARY KEY (`related_user_id`),
KEY `last_updated_idx` (`last_updated`)
) ENGINE=InnoDB;
MY.CNF values set:
innodb_buffer_pool_size = 256M
key_buffer_size = 320M
Note 3: Available memory is 1GB; these 2 tables total 2GB; the other InnoDB tables total 3GB.
Problem:
The following example SQL statement, which needs to access all records found, takes 15 seconds to execute (!!) and num_results = 220,000:
SELECT SQL_NO_CACHE COUNT(u.related_user_id)
FROM user_info u LEFT JOIN user_social_graph u2 ON u.related_user_id = u2.related_user_id
WHERE u2.user_id = '1'
AND u.related_user_id = u2.related_user_id
AND (NOT (u.related_user_id IS NULL));
For a user_id with a count of 30,000, it takes about 3 seconds (!).
EXPLAIN EXTENDED for the 220,000 count user. It uses indices:
+----+-------------+-------+--------+------------------------+----------+---------+--------------------+--------+----------+--------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
+----+-------------+-------+--------+------------------------+----------+---------+--------------------+--------+----------+--------------------------+
| 1 | SIMPLE | u2 | ref | user_user_idx,user_idx | user_idx | 4 | const | 157320 | 100.00 | Using where |
| 1 | SIMPLE | u | eq_ref | PRIMARY | PRIMARY | 4 | u2.related_user_id | 1 | 100.00 | Using where; Using index |
+----+-------------+-------+--------+------------------------+----------+---------+--------------------+--------+----------+--------------------------+
How do we speed these up without setting innodb_buffer_pool_size to 5GB?
Thank you!
The user_social_graph table is not indexed correctly!!!
You have this:
CREATE TABLE user_social_graph
(user_id int(10) unsigned NOT NULL,
related_user_id int(11) NOT NULL,
PRIMARY KEY (user_id,related_user_id),
KEY user_idx (user_id))
ENGINE=InnoDB;
The second index is redundant, since the primary key already begins with user_id. You are joining the related_user_id column over to the user_info table, so that column needs to be indexed.
Change user_social_graph as follows:
CREATE TABLE user_social_graph
(user_id int(10) unsigned NOT NULL,
related_user_id int(11) NOT NULL,
PRIMARY KEY (user_id,related_user_id),
UNIQUE KEY related_user_idx (related_user_id,user_id))
ENGINE=InnoDB;
This should change the EXPLAIN plan. Keep in mind that index column order matters, depending on the way you query the columns.
Give it a Try !!!
What is the MySQL version? Its manual contains important information for speeding up statements and code in general.
Consider changing your paradigm to a data warehouse engine capable of managing terabyte-scale tables. You can migrate your legacy MySQL database to the new engine with a free tool or application. Infobright ICE is one example: http://www.infobright.org/Downloads/What-is-ICE/ and there are many others (free and commercial).
PostgreSQL is not commercial, and there are a lot of tools to migrate MySQL to it!