Django "UPDATE `django_session`" killing performances - mysql

My Django client was very slow during MySQL queries. Running tcpdump on the MySQL server, I saw that the following SQL query
UPDATE `django_session` SET `session_data` = [data]
takes a very long time. What is the simplest way to solve this problem?
Thank you.
------------------------- EDIT 1
The size of [data] is very large, and it takes a very long time to transfer.
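One quick way to confirm this is to measure how large the stored sessions actually are. A minimal sketch against the standard django_session table (see its definition in EDIT 2 below):
-- List the largest sessions; LENGTH() returns the size of session_data in bytes.
SELECT session_key, LENGTH(session_data) AS data_bytes, expire_date
FROM django_session
ORDER BY LENGTH(session_data) DESC
LIMIT 10;
If a few rows dominate, the fix is usually on the Django side (store less in the session), since every request that modifies the session rewrites the whole session_data value.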
------------------------- EDIT 2
mysql> SHOW INDEX FROM django_session;
+----------------+------------+-------------------------------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| Table          | Non_unique | Key_name                            | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment | Index_comment |
+----------------+------------+-------------------------------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| django_session |          0 | PRIMARY                             |            1 | session_key | A         |        9496 |     NULL | NULL   |      | BTREE      |         |               |
| django_session |          1 | django_session_expire_date_a5c62663 |            1 | expire_date | A         |        9496 |     NULL | NULL   |      | BTREE      |         |               |
+----------------+------------+-------------------------------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
2 rows in set (0,02 sec)
mysql> SHOW CREATE TABLE django_session;
| Table | Create Table |
+----------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| django_session | CREATE TABLE `django_session` (
  `session_key` varchar(40) NOT NULL,
  `session_data` longtext NOT NULL,
  `expire_date` datetime NOT NULL,
  PRIMARY KEY (`session_key`),
  KEY `django_session_expire_date_a5c62663` (`expire_date`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 |
1 row in set (0,01 sec)

Related

Drop a column from an index in MySQL

I have these indexes on my table:
mysql> SHOW INDEXES from sous_categories;
+-----------------+------------+----------+--------------+-------------------+-----------+-------------+----------+--------+------+------------+---------+---------------+---------+------------+
| Table | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment | Index_comment | Visible | Expression |
+-----------------+------------+----------+--------------+-------------------+-----------+-------------+----------+--------+------+------------+---------+---------------+---------+------------+
| sous_categories | 0 | PRIMARY | 1 | id_sous_categorie | A | 16 | NULL | NULL | | BTREE | | | YES | NULL |
| sous_categories | 0 | PRIMARY | 2 | categorie_id | A | 16 | NULL | NULL | | BTREE | | | YES | NULL |
+-----------------+------------+----------+--------------+-------------------+-----------+-------------+----------+--------+------+------------+---------+---------------+---------+------------+
2 rows in set (0.01 sec)
I want to remove the second column from the index. Checking the MySQL index statements, I figured out that I can only drop by index name, but in my case both entries have the same key name.
How can I do that? (I know it is possible because I can do it with phpMyAdmin, but I want to know how to do it from the command line.)
ALTER TABLE sous_categories
  DROP PRIMARY KEY,
  ADD PRIMARY KEY (id_sous_categorie);
However, to be the PK, id_sous_categorie must be Unique (no dups) and NOT NULL.
The only effects will be:
- Time spent rebuilding the table. (Any change to the PK requires a rebuild.)
- A fatal error if id_sous_categorie is not already unique. (Previously the pair of columns was constrained to be unique, but not each column.)
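If you want to check for duplicates before rebuilding the PK, a simple sketch (table and column names as in the question):
-- Any rows returned here would make ADD PRIMARY KEY (id_sous_categorie) fail.
SELECT id_sous_categorie, COUNT(*) AS dups
FROM sous_categories
GROUP BY id_sous_categorie
HAVING COUNT(*) > 1;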

Why does MySQL EXPLAIN prefer the index on a BIGINT column over one on an INT column?

Recently I ran into a strange question. My test table structure:
CREATE TABLE `index_test` (
`id` int(11) unsigned NOT NULL AUTO_INCREMENT,
`a` varchar(64) NOT NULL DEFAULT '',
`card_no` bigint(20) NOT NULL,
`card_no2` bigint(20) NOT NULL,
`optype` int(11) NOT NULL,
`optype2` int(11) NOT NULL,
`create_time` datetime NOT NULL DEFAULT '2000-01-01 00:00:00',
`_timestamp` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
PRIMARY KEY (`id`),
KEY `idx_a` (`a`),
KEY `idx_card_no` (`card_no`),
KEY `idx_card_no2` (`card_no2`),
KEY `idx_optype` (`optype`),
KEY `idx_optype2` (`optype2`)
) ENGINE=InnoDB AUTO_INCREMENT=10000 DEFAULT CHARSET=utf8;
There are five main columns: a is a varchar, card_no and card_no2 are bigint, optype and optype2 are int.
In my experience, the MySQL optimizer prefers indexes on high-cardinality, small, non-null columns, but when I ran EXPLAIN on a few query statements, some problems occurred. Here is my data initialization procedure:
DELIMITER ;;
CREATE DEFINER=`xx`@`%` PROCEDURE `simple_insert`()
BEGIN
  DECLARE counter BIGINT DEFAULT 0;
  my_loop: LOOP
    SET counter = counter + 1;
    IF counter = 10000 THEN
      LEAVE my_loop;
    END IF;
    INSERT INTO `index_test` (`a`, `card_no`, `card_no2`, `optype`, `optype2`, `create_time`)
    VALUES (replace(uuid(), '-', ''), counter, counter % 180, counter, counter % 180, current_timestamp);
  END LOOP my_loop;
END;;
DELIMITER ;
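The procedure is then invoked once to populate the table:
CALL simple_insert();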
This inserts 10,000 rows of data. First, I executed the statistics query:
select * from information_schema.statistics where table_schema = 'test' and table_name = 'index_test';
output
+---------------+--------------+------------+------------+--------------+--------------+--------------+-------------+-----------+-------------+----------+--------+----------+------------+---------+---------------+
| TABLE_CATALOG | TABLE_SCHEMA | TABLE_NAME | NON_UNIQUE | INDEX_SCHEMA | INDEX_NAME | SEQ_IN_INDEX | COLUMN_NAME | COLLATION | CARDINALITY | SUB_PART | PACKED | NULLABLE | INDEX_TYPE | COMMENT | INDEX_COMMENT |
+---------------+--------------+------------+------------+--------------+--------------+--------------+-------------+-----------+-------------+----------+--------+----------+------------+---------+---------------+
| def | test | index_test | 0 | test | PRIMARY | 1 | id | A | 10089 | NULL | NULL | | BTREE | | |
| def | test | index_test | 1 | test | idx_a | 1 | a | A | 9999 | NULL | NULL | | BTREE | | |
| def | test | index_test | 1 | test | idx_card_no | 1 | card_no | A | 9999 | NULL | NULL | | BTREE | | |
| def | test | index_test | 1 | test | idx_card_no2 | 1 | card_no2 | A | 180 | NULL | NULL | | BTREE | | |
| def | test | index_test | 1 | test | idx_optype | 1 | optype | A | 9999 | NULL | NULL | | BTREE | | |
| def | test | index_test | 1 | test | idx_optype2 | 1 | optype2 | A | 180 | NULL | NULL | | BTREE | | |
+---------------+--------------+------------+------------+--------------+--------------+--------------+-------------+-----------+-------------+----------+--------+----------+------------+---------+---------------+
Step 2:
explain select * from index_test where optype=9600 and a= 'e095af180f4911ea8d907036bd142a99';
output:
+----+-------------+------------+------------+------+------------------+-------+---------+-------+------+----------+-------------+
| id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
+----+-------------+------------+------------+------+------------------+-------+---------+-------+------+----------+-------------+
| 1 | SIMPLE | index_test | NULL | ref | idx_a,idx_optype | idx_a | 194 | const | 1 | 5.00 | Using where |
+----+-------------+------------+------------+------+------------------+-------+---------+-------+------+----------+-------------+
In my experience, a varchar(64) takes more space than an int, so using the int column's index should be preferable.
Step 3:
explain select * from index_test where optype=9600 and card_no = 9600;
output
+----+-------------+------------+------------+------+------------------------+-------------+---------+-------+------+----------+-------------+
| id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
+----+-------------+------------+------------+------+------------------------+-------------+---------+-------+------+----------+-------------+
| 1 | SIMPLE | index_test | NULL | ref | idx_card_no,idx_optype | idx_card_no | 8 | const | 1 | 5.00 | Using where |
+----+-------------+------------+------------+------+------------------------+-------------+---------+-------+------+----------+-------------+
So the question is: why does the MySQL query optimizer prefer the bigint column over the int column? Can anyone help me or share links to official documentation about this? Thanks.
By the way, my test environment is macOS (10.14.6) x64 and the MySQL server version is 5.7.26.
I don't think INT vs BIGINT is the issue. First, let me mention better indexes:
For
where optype=9600 and a= 'e095af180f4911ea8d907036bd142a99'
Either of these "composite" indexes would be optimal, and better than what you have:
INDEX(optype, a)
INDEX(a, optype)
For
where optype=9600
and card_no = 9600
and a= 'e095af180f4911ea8d907036bd142a99'
any index starting with those 3 columns is optimal; any 2 would be "good", and single-column indexes would be poor, but better than no index.
The optimizer may be making probes to see which of your 3 poor indexes is best.
I can't explain why it did not list a as a "Possible key".
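As a rough sketch of the composite-index suggestion (the index names are made up; the EXPLAIN simply re-checks what the optimizer picks afterwards):
-- Composite indexes covering the two WHERE clauses from the question.
ALTER TABLE index_test
  ADD INDEX idx_optype_a (optype, a),
  ADD INDEX idx_optype_card_a (optype, card_no, a);

EXPLAIN SELECT * FROM index_test
WHERE optype = 9600 AND a = 'e095af180f4911ea8d907036bd142a99';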

MySQL `INSERT INTO SELECT` generates a Duplicate entry error on a unique field

I use INSERT INTO SELECT to migrate users' data across databases, but it generates
Duplicate entry '  ' for key 'users_name_unique'
although the source column is covered by its own unique index and should not contain any duplicate data. ('users_name_unique' is the index name on db2.users.)
Here is the query. The name field on db2.users is varchar(50) with a unique, not-null index, and the name field on db1.users is varchar(60) with a unique, not-null index. I have already checked the length of the field in every record, and the lengths are all well below 50.
INSERT INTO db2.users (name, email, uid)
SELECT
  name,
  IF(mail = '', NULL, mail) AS email,
  uid
FROM db1.users;
There are non-printable or white spaces in the name field from db1.users.
What might be the problem?
update
I created several test tables. As shown below, there are two tables with very similar structure and no data (I changed the length on purpose, since the source data is varchar(60)), yet they give different results.
mysql> desc ttt3;
+-------+-------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+-------+-------------+------+-----+---------+----------------+
| id | int(11) | NO | PRI | NULL | auto_increment |
| name | varchar(50) | NO | UNI | NULL | |
+-------+-------------+------+-----+---------+----------------+
2 rows in set (0.01 sec)
mysql> desc users;
+-------+------------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+-------+------------------+------+-----+---------+----------------+
| id | int(10) unsigned | NO | PRI | NULL | auto_increment |
| name | varchar(60) | NO | UNI | NULL | |
+-------+------------------+------+-----+---------+----------------+
2 rows in set (0.01 sec)
mysql> show index from ttt3;
+-------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| Table | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment | Index_comment |
+-------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| ttt3 | 0 | PRIMARY | 1 | id | A | 0 | NULL | NULL | | BTREE | | |
| ttt3 | 0 | name | 1 | name | A | 0 | NULL | NULL | | BTREE | | |
+-------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
2 rows in set (0.00 sec)
mysql> show index from users;
+-------+------------+-------------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| Table | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment | Index_comment |
+-------+------------+-------------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| users | 0 | PRIMARY | 1 | id | A | 0 | NULL | NULL | | BTREE | | |
| users | 0 | users_name_unique | 1 | name | A | 0 | NULL | NULL | | BTREE | | |
+-------+------------+-------------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
2 rows in set (0.00 sec)
mysql> insert into ttt3(name) select name from scratch.users where scratch.users.uid != 0;
Query OK, 1556 rows affected (0.24 sec)
Records: 1556 Duplicates: 0 Warnings: 0
mysql> insert into users(name) select name from scratch.users where scratch.users.uid != 0;
ERROR 1062 (23000): Duplicate entry '  ' for key 'users_name_unique'
update
It turns out that the destination field's collation is set to 'utf8_unicode_ci' while the original field's is 'utf8_general_ci'; changing this setting solved the problem.
Here is the reason:
The destination field's collation is 'utf8_unicode_ci' (Laravel's default collation) and the original field's is 'utf8_general_ci'.
These collations have different rules for sorting and equality comparison. Aligning the two collations solved the problem.
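A sketch of how you could inspect and align the collations (schema, table, and column names follow the question; the exact charset and length should match your actual definition):
-- See which collation each side currently uses.
SELECT TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME, COLLATION_NAME
FROM information_schema.COLUMNS
WHERE TABLE_NAME = 'users' AND COLUMN_NAME = 'name'
  AND TABLE_SCHEMA IN ('db1', 'db2');

-- Make the destination column use the same collation as the source.
ALTER TABLE db2.users
  MODIFY name VARCHAR(50) CHARACTER SET utf8 COLLATE utf8_general_ci NOT NULL;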

MySQL break up left table in LEFT JOIN query for performance enhancement

I have the following MySQL query:
SELECT pool.username
FROM pool
LEFT JOIN sent ON pool.username = sent.username
AND sent.campid = 'YA1LGfh9'
WHERE sent.username IS NULL
AND pool.gender = 'f'
AND (`location` = 'united states' OR `location` = 'us' OR `location` = 'usa');
The problem is that the pool table contains millions of rows and this query takes over 12 minutes to complete. I realize that in this query the entire left table (pool) is being scanned. The pool table has an auto-incremented id column.
I would like to split this query into multiple queries so that rather than scanning the entire pool table I scan 1000 rows at a time and in the next query I would pick up where I left off (1000-2000,2000-3000) and so on using the id column to keep track.
How can I specify this in my query? Please show examples if you know the answer. Thank you.
here are my indexes if it helps:
mysql> show index from main.pool;
+-------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
| Table | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |
+-------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
| pool | 0 | PRIMARY | 1 | id | A | 9275039 | NULL | NULL | | BTREE | |
| pool | 1 | username | 1 | username | A | 9275039 | NULL | NULL | | BTREE | |
| pool | 1 | source | 1 | source | A | 1 | NULL | NULL | | BTREE | |
| pool | 1 | location | 1 | location | A | 38168 | NULL | NULL | | BTREE | |
| pool | 1 | pdex | 1 | gender | A | 2 | NULL | NULL | | BTREE | |
| pool | 1 | pdex | 2 | username | A | 9275039 | NULL | NULL | | BTREE | |
| pool | 1 | pdex | 3 | id | A | 9275039 | NULL | NULL | | BTREE | |
+-------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
8 rows in set (0.00 sec)
mysql> show index from main.sent;
+-------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
| Table | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |
+-------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
| sent | 0 | PRIMARY | 1 | primary_key | A | 351 | NULL | NULL | | BTREE | |
| sent | 1 | username | 1 | username | A | 175 | NULL | NULL | | BTREE | |
| sent | 1 | sdex | 1 | campid | A | 7 | NULL | NULL | | BTREE | |
| sent | 1 | sdex | 2 | username | A | 351 | NULL | NULL | | BTREE | |
+-------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
and here is the explain for my query:
+----+-------------+-------+-------+---------------+------+---------+-------+---------+--------------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-------+-------+---------------+------+---------+-------+---------+--------------------------------------+
| 1 | SIMPLE | pool | ref | location,pdex | pdex | 5 | const | 6084332 | Using where |
| 1 | SIMPLE | sent | index | sdex | sdex | 309 | NULL | 351 | Using where; Using index; Not exists |
+----+-------------+-------+-------+---------------+------+---------+-------+---------+--------------------------------------+
here is the structure of the pool table:
| pool | CREATE TABLE `pool` (
`id` int(20) NOT NULL AUTO_INCREMENT,
`username` varchar(50) CHARACTER SET utf8 NOT NULL,
`source` varchar(10) CHARACTER SET utf8 NOT NULL,
`gender` varchar(1) CHARACTER SET utf8 NOT NULL,
`location` varchar(50) CHARACTER SET utf8 NOT NULL,
PRIMARY KEY (`id`),
KEY `username` (`username`),
KEY `source` (`source`),
KEY `location` (`location`),
KEY `pdex` (`gender`,`username`,`id`)
) ENGINE=MyISAM AUTO_INCREMENT=9327026 DEFAULT CHARSET=latin1 |
here is the structure of the sent table:
| sent | CREATE TABLE `sent` (
`primary_key` int(50) NOT NULL AUTO_INCREMENT,
`username` varchar(50) NOT NULL,
`from` varchar(50) NOT NULL,
`campid` varchar(255) NOT NULL,
`timestamp` int(20) NOT NULL,
PRIMARY KEY (`primary_key`),
KEY `username` (`username`),
KEY `sdex` (`campid`,`username`)
) ENGINE=MyISAM AUTO_INCREMENT=352 DEFAULT CHARSET=latin1 |
This produces a syntax error, but the WHERE clause at the beginning is what I'm after:
SELECT pool.username
FROM pool
WHERE id < 1000
LEFT JOIN sent ON pool.username = sent.username
AND sent.campid = 'YA1LGfh9'
WHERE sent.username IS NULL
AND pool.gender = 'f'
AND (location = 'united states' OR location = 'us' OR location = 'usa');
Splitting your query does not sound like the right approach.
The better way would be to fetch some records from your existing query, send your messages and then continue fetching.
Your query could benefit from another compound index on
pool( location, gender, username )
This should allow your complete query to run from sdex and your new index.
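A sketch of adding that index (the index name is arbitrary; columns as suggested above):
ALTER TABLE pool
  ADD INDEX idx_location_gender_username (location, gender, username);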
If you really want to split the query, an easy approach could be to
SELECT MIN(id), MAX(id) FROM pool
and then loop from min to max in steps of 1000, and add id >= r AND id < r+1000 to your query.
This could return 0 rows if you have gaps, but it would never return more than 1000 rows at once. A different compound index on pool including (id, location, gender and maybe username) could help for this query.
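For example, one iteration of that loop (with r = 1000) would look like the following; the application advances r by 1000 on each pass:
-- Same query as the original, restricted to one id chunk.
SELECT pool.username
FROM pool
LEFT JOIN sent ON pool.username = sent.username
              AND sent.campid = 'YA1LGfh9'
WHERE sent.username IS NULL
  AND pool.gender = 'f'
  AND (location = 'united states' OR location = 'us' OR location = 'usa')
  AND pool.id >= 1000 AND pool.id < 2000;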
It looks like it's using pool.location.
You could try adding an index on gender, though it might not be a lot of help.
Rationalising location to a country code in your data, and indexing that, would probably be useful.
But the first index to add looks to be campid to me; that could make a serious dent in the number of records it has to test.

Optimizing query in the MySQL slow-query log

Our database is set up so that we have a credentials table that holds multiple different types of credentials (logins and the like). There's also a credential_pairs table that associates some of these types together (for instance, a user may have a password and security token).
In an attempt to see if a pair matches, there is the following query:
SELECT DISTINCT cp.credential_id FROM credential_pairs AS cp
INNER JOIN credentials AS c1 ON (cp.primary_credential_id = c1.credential_id)
INNER JOIN credentials AS c2 ON (cp.secondary_credential_id = c2.credential_id)
WHERE c1.data = AES_ENCRYPT('Some Value 1', 'encryption key')
AND c2.data = AES_ENCRYPT('Some Value 2', 'encryption key');
This query works fine and gives us exactly what we need. HOWEVER, it constantly shows up in the slow query log (possibly due to a lack of indexes?). When I ask MySQL to "explain" the query it gives me:
+----+-------------+-------+------+--------------------------------------------------------+---------------------+---------+-------+-------+--------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-------+------+--------------------------------------------------------+---------------------+---------+-------+-------+--------------------------------+
| 1 | SIMPLE | c1 | ref | credential_id_UNIQUE,credential_id,ix_credentials_data | ix_credentials_data | 22 | const | 1 | Using where; Using temporary |
| 1 | SIMPLE | c2 | ref | credential_id_UNIQUE,credential_id,ix_credentials_data | ix_credentials_data | 22 | const | 1 | Using where |
| 1 | SIMPLE | cp | ALL | NULL | NULL | NULL | NULL | 69197 | Using where; Using join buffer |
+----+-------------+-------+------+--------------------------------------------------------+---------------------+---------+-------+-------+--------------------------------+
I have a feeling that last entry (where it shows 69197 rows) is probably the problem, but I am FAR from a DBA... help?
credentials table:
CREATE TABLE `credentials` (
`hidden_id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`credential_id` varchar(255) NOT NULL,
`data` blob NOT NULL,
`credential_status` varchar(100) NOT NULL,
`insert_date` datetime NOT NULL,
`insert_user` int(10) unsigned NOT NULL,
`update_date` datetime DEFAULT NULL,
`update_user` int(10) unsigned DEFAULT NULL,
`delete_date` datetime DEFAULT NULL,
`delete_user` int(10) unsigned DEFAULT NULL,
`is_deleted` tinyint(1) NOT NULL DEFAULT '0',
PRIMARY KEY (`hidden_id`,`credential_id`),
UNIQUE KEY `credential_id_UNIQUE` (`credential_id`),
KEY `credential_id` (`credential_id`),
KEY `data` (`data`(10)),
KEY `credential_status` (`credential_status`(10))
) ENGINE=InnoDB AUTO_INCREMENT=1572 DEFAULT CHARSET=utf8;
credential_pairs Table:
CREATE TABLE `credential_pairs` (
`hidden_id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`credential_id` varchar(255) NOT NULL,
`primary_credential_id` varchar(255) NOT NULL,
`secondary_credential_id` varchar(255) NOT NULL,
`is_deleted` tinyint(1) DEFAULT NULL,
PRIMARY KEY (`hidden_id`,`credential_id`),
KEY `primary_credential_id` (`primary_credential_id`(10)),
KEY `secondary_credential_id` (`secondary_credential_id`(10))
) ENGINE=InnoDB AUTO_INCREMENT=500 DEFAULT CHARSET=latin1;
credentials Indexes:
+-------------+------------+----------------------+--------------+---------------+-----------+-------------+----------+--------+------+------------+---------+
| Table | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |
+-------------+------------+----------------------+--------------+---------------+-----------+-------------+----------+--------+------+------------+---------+
| credentials | 0 | PRIMARY | 1 | hidden_id | A | 186235 | NULL | NULL | | BTREE | |
| credentials | 0 | PRIMARY | 2 | credential_id | A | 186235 | NULL | NULL | | BTREE | |
| credentials | 0 | credential_id_UNIQUE | 1 | credential_id | A | 186235 | NULL | NULL | | BTREE | |
| credentials | 1 | credential_id | 1 | credential_id | A | 186235 | NULL | NULL | | BTREE | |
| credentials | 1 | ix_credentials_data | 1 | data | A | 186235 | 20 | NULL | | BTREE | |
+-------------+------------+----------------------+--------------+---------------+-----------+-------------+----------+--------+------+------------+---------+
credential_pair Indexes:
+------------------+------------+---------------------------------------------+--------------+-------------------------+-----------+-------------+----------+--------+------+------------+---------+
| Table | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |
+------------------+------------+---------------------------------------------+--------------+-------------------------+-----------+-------------+----------+--------+------+------------+---------+
| credential_pairs | 0 | PRIMARY | 1 | hidden_id | A | 69224 | NULL | NULL | | BTREE | |
| credential_pairs | 0 | PRIMARY | 2 | credential_id | A | 69224 | NULL | NULL | | BTREE | |
| credential_pairs | 1 | ix_credential_pairs_credential_id | 1 | credential_id | A | 69224 | 36 | NULL | | BTREE | |
| credential_pairs | 1 | ix_credential_pairs_primary_credential_id | 1 | primary_credential_id | A | 69224 | 36 | NULL | | BTREE | |
| credential_pairs | 1 | ix_credential_pairs_secondary_credential_id | 1 | secondary_credential_id | A | 69224 | 36 | NULL | | BTREE | |
+------------------+------------+---------------------------------------------+--------------+-------------------------+-----------+-------------+----------+--------+------+------------+---------+
UPDATE NOTES:
AFAICT the DISTINCT was superfluous... nothing really needed it, so I dropped it. In an attempt to follow Fabrizio's advice to get a WHERE onto the credential_pairs lookup, I then altered the statement to read as:
SELECT credential_id
FROM credential_pairs cp
WHERE cp.primary_credential_id = (SELECT credential_id FROM credentials WHERE data = AES_ENCRYPT('value 1','enc_key')) AND
cp.secondary_credential_id = (SELECT credential_id FROM credentials WHERE data = AES_ENCRYPT('value 2','enc_key'))
And.... nothing. The statement takes just as long and the explain looks pretty much the same. So, I added an index to the primary and secondary columns with:
ALTER TABLE credential_pairs ADD INDEX `idx_credential_pairs__primary_and_secondary`(`primary_credential_id`, `secondary_credential_id`);
And... nothing.
+----+-------------+-------------+-------+---------------------+---------------------------------------------+---------+------+-------+--------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-------------+-------+---------------------+---------------------------------------------+---------+------+-------+--------------------------+
| 1 | PRIMARY | cp | index | NULL | idx_credential_pairs__primary_and_secondary | 514 | NULL | 69217 | Using where; Using index |
| 3 | SUBQUERY | credentials | ref | ix_credentials_data | ix_credentials_data | 22 | | 1 | Using where |
| 2 | SUBQUERY | credentials | ref | ix_credentials_data | ix_credentials_data | 22 | | 1 | Using where |
+----+-------------+-------------+-------+---------------------+---------------------------------------------+---------+------+-------+--------------------------+
It says it's using the index, but it still looks like it's table scanning. So, I added a joint key (as per a'r's comment below) with:
ALTER TABLE credential_pairs ADD KEY (primary_credential_id, secondary_credential_id);
And... same result as with the index (are these functionally the same?).
The DISTINCT is what is generating the "Using temporary"; you usually want to avoid those when possible.
Also, you are scanning the whole credential_pairs table: since there are no conditions against it, no indexes are used and the whole table is read before the WHERE is applied.
Hope it makes sense.
EDIT/ADD
Try starting from a different table. If I understand correctly, you have a Table A, a Table B and a Table AB, and you are starting the select from AB; try starting it from A instead.
I haven't tested this, but you could try:
SELECT cp.credential_id
FROM credentials AS c1
LEFT JOIN credential_pairs AS cp ON (c1.credential_id = cp.primary_credential_id)
LEFT JOIN credentials AS c2 ON (cp.secondary_credential_id = c2.credential_id)
WHERE
c1.data = AES_ENCRYPT('Some Value 1', 'encryption key')
AND c2.data = AES_ENCRYPT('Some Value 2', 'encryption key');
I have had luck in the past by moving select tables around
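If you try that rewrite, re-running EXPLAIN on it is a quick way to see whether cp is now reached through one of its indexes rather than a full scan (a sketch; values as in the question):
EXPLAIN SELECT cp.credential_id
FROM credentials AS c1
LEFT JOIN credential_pairs AS cp ON c1.credential_id = cp.primary_credential_id
LEFT JOIN credentials AS c2 ON cp.secondary_credential_id = c2.credential_id
WHERE c1.data = AES_ENCRYPT('Some Value 1', 'encryption key')
  AND c2.data = AES_ENCRYPT('Some Value 2', 'encryption key');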