Composite index not used in MySQL

According to the MySQL docs, a composite index will still be used as long as the leftmost columns appear in the criteria. However, this table will not join efficiently on the primary key; I had to add another index on the leftmost two columns, which is then used.
One of the tables uses the MEMORY engine, and I know that by default MEMORY uses a hash index, which can't be used for GROUP BY/ORDER BY. However, I'm reading all rows of the memory table rather than using its index, so I don't think that relates to the problem.
What am I missing?
mysql> show create table pr_temp;
| pr_temp | CREATE TEMPORARY TABLE `pr_temp` (
`player_id` int(10) unsigned NOT NULL,
`insert_date` date NOT NULL,
[...]
PRIMARY KEY (`player_id`,`insert_date`) USING BTREE,
KEY `insert_date` (`insert_date`)
) ENGINE=MEMORY DEFAULT CHARSET=utf8 |
mysql> show create table player_game_record;
| player_tank_record | CREATE TABLE `player_game_record` (
`player_id` int(10) unsigned NOT NULL,
`game_id` smallint(5) unsigned NOT NULL,
`insert_date` date NOT NULL,
[...]
PRIMARY KEY (`player_id`,`insert_date`,`game_id`),
KEY `insert_date` (`insert_date`),
KEY `player_date` (`player_id`,`insert_date`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 DATA DIRECTORY='...' INDEX DIRECTORY='...' |
mysql> explain select pgr.* from player_game_record pgr inner join pr_temp on pgr.player_id = pr_temp.player_id and pgr.insert_date = pr_temp.date_prev;
+----+-------------+---------+------+---------------------------------+-------------+---------+-------------------------------------------------------------------------+--------+-------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+---------+------+---------------------------------+-------------+---------+-------------------------------------------------------------------------+--------+-------+
| 1 | SIMPLE | pr_temp | ALL | PRIMARY | NULL | NULL | NULL | 174683 | |
| 1 | SIMPLE | pgr | ref | PRIMARY,insert_date,player_date | player_date | 7 | test_gamedb.pr_temp.player_id,test_gamedb.pr_temp.date_prev | 21 | |
+----+-------------+---------+------+---------------------------------+-------------+---------+-------------------------------------------------------------------------+--------+-------+
2 rows in set (0.00 sec)
mysql> explain select pgr.* from player_game_record pgr force index (primary) inner join pr_temp on pgr.player_id = pr_temp.player_id and pgr.insert_date = pr_temp.date_prev;
+----+-------------+---------+------+---------------+---------+---------+-------------------------------------------------------------------------+---------+-------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+---------+------+---------------+---------+---------+-------------------------------------------------------------------------+---------+-------+
| 1 | SIMPLE | pr_temp | ALL | PRIMARY | NULL | NULL | NULL | 174683 | |
| 1 | SIMPLE | pgr | ref | PRIMARY | PRIMARY | 7 | test_gamedb.pr_temp.player_id,test_gamedb.pr_temp.date_prev | 2873031 | |
+----+-------------+---------+------+---------------+---------+---------+-------------------------------------------------------------------------+---------+-------+
2 rows in set (0.00 sec)
I think the primary key should work, with the two leftmost columns (player_id, insert_date) being used. However, it uses the player_date index by default, and if I force it to use the primary key it looks like it's only using one column rather than both.
Update 2: MySQL version 5.5.27-log
Update 3:
(note this is after removing the player_date index while trying some other tests)
mysql> show indexes in player_game_record;
+--------------------+------------+-------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| Table | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment | Index_comment |
+--------------------+------------+-------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| player_game_record | 0 | PRIMARY | 1 | player_id | A | NULL | NULL | NULL | | BTREE | | |
| player_game_record | 0 | PRIMARY | 2 | insert_date | A | NULL | NULL | NULL | | BTREE | | |
| player_game_record | 0 | PRIMARY | 3 | game_id | A | 576276246 | NULL | NULL | | BTREE | | |
| player_game_record | 1 | insert_date | 1 | insert_date | A | 33304 | NULL | NULL | | BTREE | | |
+--------------------+------------+-------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
4 rows in set (1.08 sec)
mysql> select count(*) from player_game_record;
+-----------+
| count(*) |
+-----------+
| 576276246 |
+-----------+
1 row in set (0.00 sec)

I agree that your use of the MEMORY storage engine for one of the tables should not at all be an issue here, since we're talking about the other table.
I also agree that the leftmost prefix of an index can be used exactly how you are trying to use it, and I cannot think of any reason why the primary key could not be used in exactly the same way as any other index.
This has been a head-scratcher. The new index you created "should" be the same as the left side of the primary key, so why don't they behave the same way? I have two thoughts, both of which lead me to the same recommendation, even though I am not as familiar with the internals of MyISAM as I am with InnoDB. (As an aside, I'd recommend InnoDB over MyISAM.)
The index on your primary key was presumably on the table when you began inserting data, while the new index was added while most or all of the data was already there. This suggests that your new index is nice and cleanly-organized internally, while your primary key index may be highly fragmented, having been built as the data was loaded.
The row count the optimizer shows is based on index statistics, which may be inaccurate on your primary key due to the insert order.
The fragmentation theory may explain why querying with the primary key as your index is not as fast. The index statistics theory may explain why the optimizer comes up with such a different row count, and why it might have been choosing a full table scan instead of using that index (which is only a guess, since we don't have that explain available).
Based on these two thoughts, the thing I would suggest is running OPTIMIZE TABLE on your table. If it took 12 hours to build that new index, then optimizing the table may well take that long or longer.
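For what it's worth, the statistics half of this is easy to see in miniature. The sketch below uses SQLite from Python purely as an illustration (an assumption on my part: MyISAM's internals differ, but the idea that ANALYZE rebuilds the per-index row estimates the optimizer reads is the same); the table and index names just mirror the question.

```python
import sqlite3

# Illustration only: SQLite's ANALYZE plays a role similar to MySQL's
# ANALYZE TABLE, rebuilding the statistics the planner uses for row
# estimates. Table/index names are made up to mirror the question.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE player_game_record (player_id INT, insert_date TEXT, game_id INT)")
con.execute("CREATE INDEX idx_player_date ON player_game_record (player_id, insert_date)")

# 50 players x 10 dates x 3 games = 1500 rows.
con.executemany(
    "INSERT INTO player_game_record VALUES (?, ?, ?)",
    [(p, "2013-01-%02d" % d, g) for p in range(50) for d in range(1, 11) for g in range(3)],
)

con.execute("ANALYZE")

# sqlite_stat1 now holds per-index row estimates, the analog of the
# Cardinality column in SHOW INDEXES output.
stats = con.execute(
    "SELECT idx, stat FROM sqlite_stat1 WHERE idx = 'idx_player_date'"
).fetchall()
print(stats)
```

The stat column reads roughly "1500 30 3": total rows, then average rows matched per prefix of the index. That is exactly the kind of estimate that can go stale when data arrives in an unfortunate order.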
Possibly helpful: http://www.dbasquare.com/2012/07/09/data-fragmentation-problem-in-mysql-myisam/

Related

MySQL index does not work if I select all fields

I have a simple table like this,
CREATE TABLE `domain` (
`id` varchar(191) NOT NULL,
`time` bigint(20) DEFAULT NULL,
`task_id` bigint(20) DEFAULT NULL,
`name` varchar(512) DEFAULT NULL
PRIMARY KEY (`id`),
KEY `idx_domain_time` (`time`),
KEY `idx_domain_task_id` (`task_id`),
FULLTEXT KEY `idx_domain_name` (`name`),
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4
And indexed like this:
mysql> show index from domain;
+--------+------------+------------------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+---------+
| Table | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment | Index_comment | Ignored |
+--------+------------+------------------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+---------+
| domain | 0 | PRIMARY | 1 | id | A | 2036092 | NULL | NULL | | BTREE | | | NO |
| domain | 1 | idx_domain_name | 1 | name | NULL | NULL | NULL | NULL | YES | FULLTEXT |
+--------+------------+------------------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+---------+
Index is used when I select only the id field:
mysql> explain SELECT id FROM `domain` WHERE task_id = '3';
+------+-------------+--------+------+--------------------+--------------------+---------+-------+---------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+------+-------------+--------+------+--------------------+--------------------+---------+-------+---------+-------------+
| 1 | SIMPLE | domain | ref | idx_domain_task_id | idx_domain_task_id | 9 | const | 1018046 | Using index |
+------+-------------+--------+------+--------------------+--------------------+---------+-------+---------+-------------+
1 row in set (0.00 sec)
When I select all fields, it does not work:
mysql> explain SELECT * FROM `domain` WHERE task_id = '3';
+------+-------------+--------+------+--------------------+------+---------+------+---------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+------+-------------+--------+------+--------------------+------+---------+------+---------+-------------+
| 1 | SIMPLE | domain | ALL | idx_domain_task_id | NULL | NULL | NULL | 2036092 | Using where |
+------+-------------+--------+------+--------------------+------+---------+------+---------+-------------+
1 row in set (0.00 sec)
mysql> explain SELECT id, name FROM `domain` WHERE task_id = '3';
+------+-------------+--------+------+--------------------+------+---------+------+---------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+------+-------------+--------+------+--------------------+------+---------+------+---------+-------------+
| 1 | SIMPLE | domain | ALL | idx_domain_task_id | NULL | NULL | NULL | 2036092 | Using where |
+------+-------------+--------+------+--------------------+------+---------+------+---------+-------------+
1 row in set (0.00 sec)
What's wrong?
Indexes other than the Primary Key work by storing data for the indexed field(s) in index order, along with the primary key.
So when you SELECT the primary key by the indexed field, there is enough information in the index to completely satisfy the query. When you add other fields, there's no longer enough information in the index. That doesn't mean the database won't use the index, but now it's no longer as much of a slam dunk, and it comes down more to table statistics.
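The covering effect is easy to demonstrate. This sketch uses SQLite from Python rather than MySQL (its planner differs, so treat it as an illustration of the principle, not of InnoDB's exact behavior); the WITHOUT ROWID table makes the secondary index carry the primary key, much like an InnoDB secondary index does.

```python
import sqlite3

# A secondary index stores the indexed column plus the primary key, so
# "SELECT id ... WHERE task_id = ?" can be answered from the index
# alone (a covering index), while "SELECT *" must also fetch the row.
con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE domain (id TEXT PRIMARY KEY, task_id INTEGER, name TEXT) WITHOUT ROWID"
)
con.execute("CREATE INDEX idx_domain_task_id ON domain (task_id)")

def plan(sql):
    rows = con.execute("EXPLAIN QUERY PLAN " + sql).fetchall()
    return " ".join(r[-1] for r in rows)  # last field is the plan detail

covering = plan("SELECT id FROM domain WHERE task_id = 3")
not_covering = plan("SELECT * FROM domain WHERE task_id = 3")
print(covering)      # mentions "USING COVERING INDEX idx_domain_task_id"
print(not_covering)  # mentions "USING INDEX idx_domain_task_id", no COVERING
```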
The MySQL optimizer tries to pick the fastest plan, so it may ignore an index. If you are sure a particular index will give better performance, you can force the optimizer to use it:
SELECT * FROM `domain` USE INDEX (idx_domain_task_id) WHERE task_id = '3';
For more details, please see the Index Hints documentation.

How can I optimize/measure UPSERT performance on MySQL?

How can I measure mysql "UPSERT" performance? More specifically, get information about the implied search before the insert/update/replace?
I'm using MySQL 8 with a schema that has three fields, two of which are part of the primary key. The table is currently InnoDB, but that is not a hard requirement.
CREATE TABLE IF NOT EXISTS `test`.`recent`
( `uid` int NOT NULL, `gid` int NOT NULL, `last` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP,
PRIMARY KEY (`uid`,`gid`),
KEY `idx_last` (`last`) USING BTREE
) ENGINE=InnoDB;
+----------+----------+------+-----+-------------------+-------+
| Field | Type | Null | Key | Default | Extra |
+----------+----------+------+-----+-------------------+-------+
| uid | int(11) | NO | PRI | NULL | |
| gid | int(11) | NO | PRI | NULL | |
| last | datetime | NO | MUL | CURRENT_TIMESTAMP | |
+----------+----------+------+-----+-------------------+-------+
I plan to insert values using
INSERT INTO test.recent (uid,gid) VALUES (1, 1)
ON DUPLICATE KEY UPDATE last=NOW();
How do I go about figuring out the performance of this query, since EXPLAIN will not show the implied search, only the insert:
MYSQL> explain INSERT INTO test.recent (uid,gid) VALUES (1, 1) ON DUPLICATE KEY UPDATE last=NOW();
+----+-------------+--------+------------+------+---------------+------+---------+------+------+----------+-------+
| id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
+----+-------------+--------+------------+------+---------------+------+---------+------+------+----------+-------+
| 1 | INSERT | recent | NULL | ALL | NULL | NULL | NULL | NULL | NULL | NULL | NULL |
+----+-------------+--------+------------+------+---------------+------+---------+------+------+----------+-------+
1 row in set (0.00 sec)
MYSQL> explain INSERT INTO test.recent (uid,gid) VALUES (1, 1);
+----+-------------+--------+------------+------+---------------+------+---------+------+------+----------+-------+
| id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
+----+-------------+--------+------------+------+---------------+------+---------+------+------+----------+-------+
| 1 | INSERT | recent | NULL | ALL | NULL | NULL | NULL | NULL | NULL | NULL | NULL |
+----+-------------+--------+------------+------+---------------+------+---------+------+------+----------+-------+
1 row in set (0.00 sec)
which is different from the explain on an actual search:
MYSQL> explain select last from test.recent where uid=1 and gid=1;
+----+-------------+--------+------------+-------+---------------+---------+---------+-------------+------+----------+-------+
| id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
+----+-------------+--------+------------+-------+---------------+---------+---------+-------------+------+----------+-------+
| 1 | SIMPLE | recent | NULL | const | PRIMARY | PRIMARY | 8 | const,const | 1 | 100.00 | NULL |
+----+-------------+--------+------------+-------+---------------+---------+---------+-------------+------+----------+-------+
1 row in set, 1 warning (0.00 sec)
One of the variables I am trying to figure out is whether performance would change at all if I use a blind update instead:
MYSQL> explain REPLACE INTO test.recent VALUES (1, 1, NOW());
+----+-------------+--------+------------+------+---------------+------+---------+------+------+----------+-------+
| id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
+----+-------------+--------+------------+------+---------------+------+---------+------+------+----------+-------+
| 1 | REPLACE | recent | NULL | ALL | NULL | NULL | NULL | NULL | NULL | NULL | NULL |
+----+-------------+--------+------------+------+---------------+------+---------+------+------+----------+-------+
1 row in set (0.01 sec)
But as you can see, the information I get is the same (unhelpful) output as I get for an EXPLAIN INSERT.
Another question I would like to answer based on measurements is whether things would change for better or worse (and by how much) if I tested both upsert approaches (ON DUPLICATE KEY vs REPLACE) with a DATE field instead of the DATETIME, which in theory would result in fewer writes (but still the same number of implied searches). But again, EXPLAIN is no help here.
Don't trust EXPLAIN with very few rows.
Your IODKU is optimal. Here's how it will work:
1. Like a SELECT, drill down the PRIMARY KEY BTree to find the row with (1,1), or the gap where (1,1) should be. That is about as fast a lookup as can be had.
2a. If the row exists, UPDATE it.
2b. If the row does not exist, INSERT it (and set `last` to its DEFAULT of `CURRENT_TIMESTAMP`).
If you want, I can go into details of the steps. But we are talking sub-millisecond for each.
If I wanted to time the query, I would use some high-res timer. Very likely the timings would bounce around, depending on whether there is a breeze blowing, a butterfly flapping its wings, or the phase of the moon.
Caveat: If you "simplified" your query for this question, I may not be giving you correct info for the real query.
If you are doing a thousand IODKUs, then there may be optimizations that involve combining them. There are some typical optimizations that can easily give 10x speedup.
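One such combining optimization is folding many IODKUs into a single multi-row statement, so the server parses once and the client makes one round trip. A minimal sketch in Python (the helper name and batching shape are my own, not from the question; values go through placeholders rather than string interpolation):

```python
# Sketch of one common batching optimization: fold many IODKUs into a
# single multi-row statement so the server does one parse and one
# round trip. The helper and its names are hypothetical; values go
# through %s placeholders so the driver handles quoting safely.
def batch_upsert(pairs):
    """Build one multi-row IODKU for a list of (uid, gid) pairs."""
    placeholders = ", ".join(["(%s, %s)"] * len(pairs))
    sql = (
        "INSERT INTO test.recent (uid, gid) VALUES "
        + placeholders
        + " ON DUPLICATE KEY UPDATE last = NOW()"
    )
    params = [v for pair in pairs for v in pair]
    return sql, params

sql, params = batch_upsert([(1, 1), (1, 2), (2, 7)])
print(sql)
print(params)
```

You would hand `sql` and `params` to your driver's execute call; one statement with N value tuples replaces N separate upserts.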

Why is the index not used in ORDER BY (foreign key)?

When I use id (the primary key) in the ORDER BY clause, it uses the index named PRIMARY, but when I use countrycode (a foreign key) in the ORDER BY clause, it doesn't use an index. My output is below.
mysql> SHOW CREATE TABLE City;
+-------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Table | Create Table |
+-------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| City | CREATE TABLE `City` (
| | `ID` int(11) NOT NULL AUTO_INCREMENT,
| | `Name` char(35) NOT NULL DEFAULT '',
| | `CountryCode` char(3) NOT NULL DEFAULT '',
| | `District` char(20) NOT NULL DEFAULT '',
| | `Population` int(11) NOT NULL DEFAULT '0',
| | PRIMARY KEY (`ID`),
| | KEY `CountryCode` (`CountryCode`),
| | CONSTRAINT `city_ibfk_1` FOREIGN KEY (`CountryCode`) REFERENCES `Country` (`Code`)
| | ) ENGINE=InnoDB AUTO_INCREMENT=4080 DEFAULT CHARSET=latin1 |
+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
mysql> EXPLAIN SELECT * FROM City ORDER BY ID;
+----+-------------+-------+-------+---------------+---------+---------+------+------+-------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-------+-------+---------------+---------+---------+------+------+-------+
| 1 | SIMPLE | City | index | NULL | PRIMARY | 4 | NULL | 4321 | |
+----+-------------+-------+-------+---------------+---------+---------+------+------+-------+
1 row in set (0.00 sec)
mysql> EXPLAIN SELECT * FROM City ORDER BY COUNTRYCODE;
+----+-------------+-------+------+---------------+------+---------+------+------+----------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-------+------+---------------+------+---------+------+------+----------------+
| 1 | SIMPLE | City | ALL | NULL | NULL | NULL | NULL | 4321 | Using filesort |
+----+-------------+-------+------+---------------+------+---------+------+------+----------------+
1 row in set (0.00 sec)
In InnoDB, the primary key is a so-called "clustered index": the rows are physically ordered according to the PK values.
Because of that, the rows are naturally sorted, so it's cheap to read them sorted ASC or DESC.
It's another story when you order by another column.
To use a secondary index for that, MySQL would have to read both index pages and data pages, which dramatically increases I/O. So MySQL decides to sort in memory instead (because, according to its heuristics, an in-memory sort is faster than the increased I/O). If you want to see MySQL use that index for sorting, you need to:
Increase the total number of rows to, say, several tens of thousands.
Select only a small subset, like LIMIT 10.
Then MySQL might decide to use the index.
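The trade-off is visible in miniature with SQLite from Python (an illustration only; SQLite's heuristics are not MySQL's): without a usable index, ORDER BY needs an explicit sort step, the rough equivalent of MySQL's filesort, while with an index the rows can be read already in order.

```python
import sqlite3

# Without a usable index, ORDER BY requires a sort pass, which shows
# up in the plan as "USE TEMP B-TREE FOR ORDER BY". With an index on
# the sort column, rows come back in order and the sort disappears.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE City (ID INTEGER PRIMARY KEY, CountryCode TEXT)")

def plan(sql):
    rows = con.execute("EXPLAIN QUERY PLAN " + sql).fetchall()
    return " ".join(r[-1] for r in rows)  # last field is the plan detail

without_index = plan("SELECT * FROM City ORDER BY CountryCode")

con.execute("CREATE INDEX idx_country ON City (CountryCode)")
with_index = plan("SELECT * FROM City ORDER BY CountryCode LIMIT 10")

print(without_index)  # contains "USE TEMP B-TREE FOR ORDER BY"
print(with_index)     # contains "USING INDEX idx_country"
```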

Optimizing a MySQL query: query only has 'Using where' in EXPLAIN Extra

I have table with following schema:
+----------------------+--------------+--------------+-----+---------+-----------+
| Field | Type | Null | Key | Default | Extra |
+----------------------+--------------+--------------+-----+---------+-----------+
| request_id | bigint(20) | NO | PRI | | |
| marketplace_id | int(11) | NO | PRI | | |
| feed_attribute_name | varchar(256) | NO | PRI | | |
| full_update_count | int(11) | NO | | | |
| partial_update_count | int(11) | NO | | | |
| ptd | varchar(256) | NO | PRI | | |
| processed_date | datetime | NO | PRI | | |
+----------------------+--------------+--------------+-----+---------+-----------+
and I am querying it like this:
EXPLAIN SELECT SUM(full_update_count) as total FROM
x.attribute_usage_information WHERE marketplace_id=6
AND ptd='Y' AND processed_date>2013-12-31 AND
feed_attribute_name='abc'
The query plan is:
id | select_type | table | type | possible_keys | key  | key_len | ref  | rows       | Extra
1  | SIMPLE      | X     | ALL  | NULL          | NULL | NULL    | NULL | 1913668816 | Using where
I am new to query optimization, so my inferences may be wrong.
I am surprised that it is not using an index, which could be the reason for its slow execution (around an hour). The table size is on the order of 10^10 rows. Can this query be rewritten so that it uses an index, given that the WHERE clause references a subset of the table's primary key columns?
EDIT: SHOW INDEX result
+-----------------------------+------------+----------+--------------+---------------------+-----------+-------------+------------+
| Table                       | Non_unique | Key_name | Seq_in_index | Column_name         | Collation | Cardinality | Index_type |
+-----------------------------+------------+----------+--------------+---------------------+-----------+-------------+------------+
| attribute_usage_information | 0          | PRIMARY  | 1            | request_id          | A         | 2901956     | BTREE      |
| attribute_usage_information | 0          | PRIMARY  | 2            | marketplace_id      | A         | 2901956     | BTREE      |
| attribute_usage_information | 0          | PRIMARY  | 3            | feed_attribute_name | A         | 273613033   | BTREE      |
| attribute_usage_information | 0          | PRIMARY  | 4            | ptd                 | A         | 1915291236  | BTREE      |
| attribute_usage_information | 0          | PRIMARY  | 5            | processed_date      | A         | 1915291236  | BTREE      |
+-----------------------------+------------+----------+--------------+---------------------+-----------+-------------+------------+
EDIT 2: SHOW GRANT RESULT
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, RELOAD, PROCESS, REFERENCES, INDEX, ALTER, SHOW DATABASES, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE, REPLICATION CLIENT, CREATE VIEW, SHOW VIEW, CREATE ROUTINE, ALTER ROUTINE, CREATE USER, EVENT, TRIGGER ON *.* TO 'data_usage_rw'@'%' IDENTIFIED BY PASSWORD *** WITH GRANT OPTION
Your query:
SELECT SUM(full_update_count) as total
FROM x.attribute_usage_information
WHERE marketplace_id=6 AND ptd='Y' AND processed_date>2013-12-31 AND
feed_attribute_name='abc';
The "using where" is saying that MySQL is doing a full table scan. This is a simple query, so the only optimization approach is to create an index that reduces the number of rows being processed. he best index for this query is x.attribute_usage_information(marketplace_id, ptd, feed_attribute_name, processed_date, full_update_count).
You can create it as:
create index attribute_usage_information_idx on x.attribute_usage_information(marketplace_id, ptd, feed_attribute_name, processed_date, full_update_count);
By including full_update_count, this is a covering index. That further speeds the query because all columns used in the query are in the index. The execution engine does not need to look up values on the original data pages.
Cover your WHERE conditions with a composite index (marketplace_id, ptd, processed_date, feed_attribute_name):
ALTER TABLE `tablename` ADD INDEX (marketplace_id,ptd,processed_date,feed_attribute_name)
Be patient, it will take a while.

Trying to efficiently delete records in table which has multicolumn index

I am using MySQL 5.6 on Linux (RHEL). The database client is a Java program. The table in question (MyISAM or InnoDB, have tried both) has a multicolumn index comprising two integers (id's from other tables) and a timestamp.
I want to delete records which have timestamps before a given date. I have found that this operation is relatively slow (on the order of 30 seconds in a table which has a few million records). But I've also found that if the other two fields in the index are specified, the operation is much faster. No big surprise there.
I believe I could query the two non-timestamp tables for their index values and then loop over the delete operation, specifying one value of each id each time. I hope that wouldn't take too long; I haven't tried it yet. But it seems like I should be able to get MySQL to do the looping for me. I tried a query of the form
delete from mytable where timestamp < '2013-08-17'
and index1 in (select id from foo)
and index2 in (select id from bar);
but that's actually slower than
delete from mytable where timestamp < '2013-08-17';
Two questions. (1) Is there something I can do to speed up delete operations which depend only on timestamp? (2) Failing that, is there something I can do to get MySQL to loop over two other two index columns (and do it quickly)?
I actually tried this operation with both MyISAM and InnoDB tables with the same data -- they are approximately equally slow.
Thanks in advance for any light you can shed on this problem.
EDIT: More info about the table structure. Here is the output of show create table mytable:
CREATE TABLE `mytable` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`timestamp` datetime NOT NULL,
`fooId` int(10) unsigned NOT NULL,
`barId` int(10) unsigned NOT NULL,
`baz` double DEFAULT NULL,
`quux` varchar(16) DEFAULT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `fooId` (`fooId`,`barId`,`timestamp`)
) ENGINE=InnoDB AUTO_INCREMENT=14221944 DEFAULT CHARSET=latin1 COMMENT='stuff'
Here is the output of show indexes from mytable:
+-------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| Table | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment | Index_comment |
+-------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
|mytable| 0 | PRIMARY | 1 | id | A | 2612681 | NULL | NULL | | BTREE | | |
|mytable| 0 | fooId | 1 | fooId | A | 20 | NULL | NULL | | BTREE | | |
|mytable| 0 | fooId | 2 | barId | A | 3294 | NULL | NULL | | BTREE | | |
|mytable| 0 | fooId | 3 | timestamp | A | 2612681 | NULL | NULL | | BTREE | | |
+-------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
EDIT: More info -- output from "explain".
mysql> explain delete from mytable using mytable inner join foo inner join bar where mytable.fooId=foo.id and mytable.barId=bar.id and timestamp<'2012-08-27';
+----+-------------+-------+-------+---------------+---------+---------+-------------------------------+------+----------------------------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-------+-------+---------------+---------+---------+-------------------------------+------+----------------------------------------------------+
| 1 | SIMPLE | foo | index | PRIMARY | name | 257 | NULL | 26 | Using index |
| 1 | SIMPLE | bar | index | PRIMARY | name | 257 | NULL | 38 | Using index; Using join buffer (Block Nested Loop) |
| 1 | SIMPLE |mytable| ref | fooId | fooId | 8 | foo.foo.id,foo.bar.id | 211 | Using where |
+----+-------------+-------+-------+---------------+---------+---------+-------------------------------+------+----------------------------------------------------+
Use the multiple-table DELETE syntax to join the tables:
DELETE mytable
FROM mytable
JOIN foo ON foo.id = mytable.index1
JOIN bar ON bar.id = mytable.index2
WHERE timestamp < '2013-08-17'
I think that this should perform particularly well if mytable has a composite index over (index1, index2, timestamp) (and both foo and bar have indexes on their id columns, which will of course be the case if those columns are PK).
Forget about the other two ids. Add an index on just the timestamp. Otherwise you may be traversing the whole table.
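In MySQL that would be something like ALTER TABLE mytable ADD INDEX (`timestamp`). A quick SQLite-based sketch of why it helps (illustration only; column names shortened): with an index on the timestamp alone, the range predicate can locate the doomed rows through the index instead of scanning every row.

```python
import sqlite3

# With an index on the timestamp column alone, a range DELETE can be
# driven by an index search rather than a full table scan. SQLite is
# used here only to make the plan visible; the principle carries over.
con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE mytable (id INTEGER PRIMARY KEY, ts TEXT, fooId INT, barId INT)"
)
con.execute("CREATE INDEX idx_ts ON mytable (ts)")

rows = con.execute(
    "EXPLAIN QUERY PLAN DELETE FROM mytable WHERE ts < '2013-08-17'"
).fetchall()
detail = " ".join(r[-1] for r in rows)
print(detail)  # e.g. a SEARCH step using idx_ts with (ts<?)
```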