I usually pride myself on being a database pro, but I can't really wrap my head around this behavior. I hope someone can explain how this works.
I have two MySQL tables, orders:
CREATE TABLE `orders` (
`id` int(11) unsigned NOT NULL AUTO_INCREMENT,
`status` tinyint(4) NOT NULL,
`total` decimal(7,2) NOT NULL,
`date_created` datetime NOT NULL,
`date_updated` datetime NOT NULL,
`voucher_code` varchar(127) DEFAULT NULL,
`voucher_id` int(11) unsigned DEFAULT NULL,
`user_id` int(11) unsigned DEFAULT NULL,
`billing_address_id` int(11) unsigned NOT NULL,
`shipping_address_id` int(11) unsigned NOT NULL,
`reference_id` varchar(45) DEFAULT NULL,
PRIMARY KEY (`id`),
KEY `reference_id` (`reference_id`),
KEY `address_id` (`billing_address_id`)
) ENGINE=InnoDB AUTO_INCREMENT=168067 DEFAULT CHARSET=latin1;
and addresses:
CREATE TABLE `addresses` (
`id` int(11) unsigned NOT NULL AUTO_INCREMENT,
`title` tinyint(4) DEFAULT NULL,
`first_name` varchar(255) NOT NULL,
`last_name` varchar(255) NOT NULL,
`street` varchar(255) NOT NULL,
`street2` varchar(255) DEFAULT NULL,
`company_name` varchar(255) DEFAULT NULL,
`city` varchar(45) NOT NULL,
`postcode` varchar(45) DEFAULT NULL,
`region` varchar(45) DEFAULT NULL,
`country` varchar(45) NOT NULL,
`phone` varchar(45) DEFAULT NULL,
`user_id` int(11) unsigned DEFAULT NULL,
PRIMARY KEY (`id`),
KEY `fk_addresses_users1_idx` (`user_id`)
) ENGINE=InnoDB AUTO_INCREMENT=95277 DEFAULT CHARSET=latin1;
Now, as you can see, I have created an index in the orders table on billing_address_id, called address_id, which should match the address id.
This is the query I am trying to run:
SELECT
o.id, a.first_name, a.last_name, o.total, o.date_created
FROM
orders o USE INDEX FOR JOIN (PRIMARY) JOIN
addresses a ON a.id = o.billing_address_id
ORDER BY id DESC
LIMIT 0, 50
If I run the query without any index specification, it will pick up and use the address_id index, which I would expect to be the fastest way to match the two tables.
Strangely enough, with the 'address_id' index the query runs in 2 seconds.
If I use the normal 'PRIMARY' index, which works on the order id, it takes 0.000 seconds.
This is bugging me. I thought I was supposed to create indexes to expedite the joining process between tables.
If I run EXPLAIN on the two queries I get:
EXPLAIN EXTENDED
SELECT o.id, a.first_name, a.last_name, o.total, o.date_created
FROM orders o
JOIN addresses a ON a.id = o.billing_address_id
ORDER BY id DESC
LIMIT 0, 50
id  select_type  table  type  possible_keys  key         key_len  ref             rows   filtered  Extra
1   SIMPLE       a      ALL   PRIMARY        NULL        NULL     NULL            95234  100.00    Using temporary; Using filesort
1   SIMPLE       o      ref   address_id     address_id  4        my_basket.a.id  1      100.00
With the index:
EXPLAIN EXTENDED
SELECT o.id, a.first_name, a.last_name, o.total, o.date_created
FROM orders o USE INDEX FOR JOIN (PRIMARY)
JOIN addresses a ON a.id = o.billing_address_id
ORDER BY id DESC
LIMIT 0, 50
id  select_type  table  type    possible_keys  key      key_len  ref                             rows  filtered
1   SIMPLE       o      index   NULL           PRIMARY  4        NULL                            50    332632.00
1   SIMPLE       a      eq_ref  PRIMARY        PRIMARY  4        my_basket.o.billing_address_id  1     100.00
Thank you for finding the time to answer this question.
For ORDER BY ... LIMIT queries it will often be beneficial to use a query execution plan that avoids sorting. This is not necessarily because the sorting is expensive, but because it makes it possible to stop the query execution once the number of requested rows (here 50) are found.
In your case, if one starts with table a, the full join result will have to be generated before selecting the "top" 50 rows. If you start with scanning table o using the PRIMARY index, the join result will be sorted on o.id, and the join execution can stop once 50 rows have been found.
The cost model used to select between the two approaches has been improved since MySQL 5.6. I suggest you try out MySQL 5.7 to see if the MySQL optimizer is now able to select the most optimal plan.
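As a sketch (my suggestion, not part of the original answer), STRAIGHT_JOIN is another way to pin the join order so that orders is scanned first via its primary key and execution can stop after 50 rows, without naming a specific index:
SELECT o.id, a.first_name, a.last_name, o.total, o.date_created
FROM orders o
STRAIGHT_JOIN addresses a ON a.id = o.billing_address_id
ORDER BY o.id DESC
LIMIT 0, 50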
I'm surprised that the two queries even compile -- ORDER BY id is ambiguous since each table has a different id.
When doing a JOIN, always qualify all columns.
Meanwhile, remove the USE INDEX.
Related
I have 3 simple tables
Invoices ( ~500k records )
Invoice items, one-to-many relation to invoices ( ~10 million records )
Invoice payments, one-to-many relation to invoices ( ~700k records )
Now, as simple as it sounds, I need to query for unpaid invoices.
Here is the query I am using:
select * from invoices
LEFT JOIN (SELECT invoice_id, SUM(price) as totalAmount
FROM invoice_items
GROUP BY invoice_id) AS t1
ON t1.invoice_id = invoices.id
LEFT JOIN (SELECT invoice_id, SUM(payed_amount) as totalPaid
FROM invoice_payment_transactions
GROUP BY invoice_id) AS t2
ON t2.invoice_id = invoices.id
WHERE totalAmount > totalPaid
Unfortunately, this query takes around 30 seconds, so it is way too slow.
Of course I have indexes set for "invoice_id" on both payments and items.
When I "EXPLAIN" the query, I can see that mysql has to do a full table scan.
I also tried several other query approaches, using "EXISTS" or "IN" with subqueries, but I never got around the full table scan.
Pretty sure there is not much that can be done here ( except use some caching approach ), but maybe someone knows how to optimize this ?
I need this query to run in a +/-2 seconds max.
EDIT:
Thanks to everybody for trying. Please just know that I absolutely know how to adopt different caching strategies here, but this question is purely about optimizing this query!
Here are the ( simplified ) table definitions
CREATE TABLE `invoices`
(
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
`created_at` timestamp NOT NULL DEFAULT current_timestamp(),
`date` date NOT NULL,
`title` enum ('M','F','Other') DEFAULT NULL,
`first_name` varchar(191) DEFAULT NULL,
`family_name` varchar(191) DEFAULT NULL,
`street` varchar(191) NOT NULL,
`postal_code` varchar(10) NOT NULL,
`city` varchar(191) NOT NULL,
`country` varchar(2) NOT NULL,
PRIMARY KEY (`id`),
KEY `date` (`date`)
) ENGINE = InnoDB
CREATE TABLE `invoice_items`
(
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
`invoice_id` bigint(20) unsigned NOT NULL,
`created_at` timestamp NOT NULL DEFAULT current_timestamp(),
`name` varchar(191) DEFAULT NULL,
`description` text DEFAULT NULL,
`reference` varchar(191) DEFAULT NULL,
`quantity` smallint(6) NOT NULL,
`price` int(11) NOT NULL,
PRIMARY KEY (`id`),
KEY `invoice_items_invoice_id_index` (`invoice_id`)
) ENGINE = InnoDB
CREATE TABLE `invoice_payment_transactions`
(
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
`invoice_id` bigint(20) unsigned NOT NULL,
`created_at` timestamp NOT NULL DEFAULT current_timestamp(),
`transaction_identifier` varchar(191) NOT NULL,
`payed_amount` mediumint(9) DEFAULT NULL,
PRIMARY KEY (`id`),
KEY `invoice_payment_transactions_invoice_id_index` (`invoice_id`)
) ENGINE = InnoDB
Plan A:
Summary table by invoice_id and day, as Bill suggested (see Summary Tables).
Plan B:
Change the design to be "current" and "history". This is where the "payments" is a "history" of money changing hands. Meanwhile "invoices" would be "current" in that it contains a "balance_owed" column. This is a philosophy change; it could (should) be encapsulated in a client subroutine and/or a database Stored Procedure.
Plan C: This may be useful if "most" of the invoices are paid off.
Have a flag in the invoices table to indicate paid-off. That will prevent "most" of the JOINs from occurring. (Well, adding that column is just as hard as doing Plan B.)
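A minimal sketch of Plan B, assuming a new balance_owed column on invoices that is maintained whenever an item or payment is written (the column name, index, and amounts below are illustrative, not part of the original schema):
ALTER TABLE invoices ADD COLUMN balance_owed INT NOT NULL DEFAULT 0;
ALTER TABLE invoices ADD INDEX balance_owed (balance_owed);

-- keep it current from the code path (or trigger / stored procedure) that records items and payments
UPDATE invoices SET balance_owed = balance_owed + 250 WHERE id = 42;  -- new item worth 250
UPDATE invoices SET balance_owed = balance_owed - 100 WHERE id = 42;  -- payment of 100

-- "unpaid invoices" then becomes a cheap indexed range scan
SELECT * FROM invoices WHERE balance_owed > 0;
The trade-off is the usual one for denormalization: every write path must keep the column consistent, which is why the answer suggests encapsulating it in a client subroutine or stored procedure.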
I have the following query, which filters rows with replyAutoId=0 and then fetches the most recent record for each propertyId. The query currently takes 0.23225 sec to fetch just 5,435 of 21,369 rows, and I want to improve this. All I am asking is: is there a better way of writing this query? Any suggestions?
SELECT pc1.* FROM (SELECT * FROM propertyComment WHERE replyAutoId=0) as pc1
LEFT JOIN propertyComment as pc2
ON pc1.propertyId= pc2.propertyId AND pc1.updatedDate < pc2.updatedDate
WHERE pc2.propertyId IS NULL
The SHOW CREATE TABLE propertyComment Output:
CREATE TABLE `propertyComment` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`propertyId` int(11) NOT NULL,
`agentId` int(11) NOT NULL,
`comment` longtext COLLATE utf8_unicode_ci NOT NULL,
`replyAutoId` int(11) NOT NULL,
`updatedDate` datetime NOT NULL,
`contactDate` date NOT NULL,
`status` enum('Y','N') COLLATE utf8_unicode_ci NOT NULL DEFAULT 'N',
`clientStatusId` int(11) NOT NULL,
`adminsId` int(11) NOT NULL,
PRIMARY KEY (`id`),
KEY `propertyId` (`propertyId`),
KEY `agentId` (`agentId`),
KEY `status` (`status`),
KEY `adminsId` (`adminsId`),
KEY `replyAutoId` (`replyAutoId`)
) ENGINE=MyISAM AUTO_INCREMENT=21404 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci
Try to get rid of the nested query.
The following query should give the same result as your original query:
SELECT pc1.*
FROM propertyComment AS pc1
LEFT JOIN propertyComment AS pc2
ON pc1.propertyId = pc2.propertyId AND pc1.updatedDate < pc2.updatedDate
WHERE pc1.replyAutoId = 0 AND pc2.propertyId IS NULL
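As an extra, hedged suggestion beyond that answer: a composite index covering both join columns lets the pc2 side of this self-anti-join be resolved from the index alone (the index name is illustrative):
ALTER TABLE propertyComment ADD INDEX propertyId_updatedDate (propertyId, updatedDate);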
SELECT pc1.* FROM (SELECT * FROM propertyComment WHERE replyAutoId=0) as pc1
LEFT JOIN (SELECT propertyId, updatedDate FROM propertyComment ORDER BY 1, 2) as pc2
ON pc1.propertyId= pc2.propertyId AND pc1.updatedDate < pc2.updatedDate
WHERE pc2.propertyId IS NULL
You also don't have any indexes?
If you did have one on the primary key, you're not joining on it, so why include it?
Why not select only the columns you're interested in from table B? This limits how much data you pull from table B. Since you're pulling everything from table A where replyAutoId = 0, it wouldn't make much sense to limit the columns there. This should speed it up a little.
I've created the following table:
CREATE TABLE `clicks_summ` (
`dt` INT(7) UNSIGNED NOT NULL,
`banner` SMALLINT(6) UNSIGNED NOT NULL,
`client` SMALLINT(6) UNSIGNED NOT NULL,
`channel` SMALLINT(6) UNSIGNED NOT NULL,
`cnt` INT(11) UNSIGNED NOT NULL,
`lpid` INT(11) NULL DEFAULT NULL,
UNIQUE INDEX `dt` (`dt`, `banner`, `client`, `channel`, `lpid`),
INDEX `banner` (`banner`),
INDEX `channel` (`channel`),
INDEX `client` (`client`),
INDEX `lpid` (`lpid`),
INDEX `cnt` (`cnt`)
)
COLLATE='utf8_unicode_ci'
ENGINE=InnoDB;
and I am using the following query to fetch rows/records from this table:
select client, sum(cnt) cnt
from clicks_summ cs
group by client;
and it's awful! It takes about a second to perform this query. EXPLAIN shows me
So, the question is: how can I speed up this query? I've tried indexing this table on different fields without any reasonable success. There are now 331,036 rows in this table, which, I guess, is not so big.
Try creating an index client_cnt (client, cnt). Another way to make the query faster is to upgrade your hardware :)
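A sketch of that suggestion (the index name is illustrative): with (client, cnt) the GROUP BY can be satisfied by scanning the much smaller index instead of the whole table:
ALTER TABLE clicks_summ ADD INDEX client_cnt (client, cnt);

SELECT client, SUM(cnt) AS cnt
FROM clicks_summ cs
GROUP BY client;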
One cool thing: if you always have the same 5 columns in your WHERE clause, then a single composite index over those 5 columns will outperform 5 individual indexes ;)
So basically I have three tables:
CREATE TABLE `cdIPAddressToLocation` (
`IPADDR_FROM` int(10) unsigned NOT NULL COMMENT 'Low end of the IP Address block',
`IPADDR_TO` int(10) unsigned NOT NULL COMMENT 'High end of the IP Address block',
`IPLOCID` int(10) unsigned NOT NULL COMMENT 'The Location ID for the IP Address range',
PRIMARY KEY (`IPADDR_TO`),
KEY `Index_2` USING BTREE (`IPLOCID`),
KEY `Index_3` USING BTREE (`IPADDR_FROM`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1;
CREATE TABLE `cdIPLocation` (
`IPLOCID` int(10) unsigned NOT NULL default '0',
`Country` varchar(4) default NULL,
`Region` int(10) unsigned default NULL,
`City` varchar(90) default NULL,
`PostalCode` varchar(10) default NULL,
`Latitude` float NOT NULL,
`Longitude` float NOT NULL,
`MetroCode` varchar(4) default NULL,
`AreaCode` varchar(4) default NULL,
`State` varchar(45) default NULL,
`Continent` varchar(10) default NULL,
PRIMARY KEY (`IPLOCID`)
) ENGINE=MyISAM AUTO_INCREMENT=218611 DEFAULT CHARSET=latin1;
and
CREATE TABLE `data` (
`IP` varchar(50),
`SCORE` int
)
My task is to join these three tables and find the location data for given IP address.
My query is as follows:
select
t.ip,
l.Country,
l.State,
l.City,
l.PostalCode,
l.Latitude,
l.Longitude,
t.score
from
(select
ip, inet_aton(ip) ipv, score
from
data
order by score desc
limit 5) t
join
cdIPAddressToLocation a ON t.ipv between a.IPADDR_FROM and a.IPADDR_TO
join
cdIPLocation l ON l.IPLOCID = a.IPLOCID
While this query works, it's very, very slow; it took about 100 seconds to return the result on my dev box.
I'm using MySQL 5.1; cdIPAddressToLocation has 5.9 million rows and the cdIPLocation table has about 0.3 million rows.
When I check the execution plan, I find that it's not using any index on the 'cdIPAddressToLocation' table, so for each row in the 'data' table it does a full table scan against 'cdIPAddressToLocation'.
This is very weird to me. Since there are already two indexes on the 'cdIPAddressToLocation' columns 'IPADDR_FROM' and 'IPADDR_TO', the execution plan should exploit them to improve performance, so why didn't it use them?
Or was there something wrong with my query?
Please help, thanks a lot.
Have you tried using a composite index on the columns cdIPAddressToLocation.IPADDR_FROM and cdIPAddressToLocation.IPADDR_TO?
(See the MySQL manual section on Multiple-Column Indexes.)
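A sketch of that composite index, plus (as an assumption beyond the original answer) the usual rewrite for non-overlapping IP ranges, which turns the range join into one index seek per IP instead of a full scan (the index name and sample IP are illustrative):
ALTER TABLE cdIPAddressToLocation ADD INDEX ip_range (IPADDR_FROM, IPADDR_TO);

-- per-IP lookup: take the closest range start at or below the IP,
-- then verify the IP actually falls inside that range
SELECT l.Country, l.State, l.City, l.PostalCode, l.Latitude, l.Longitude
FROM (
  SELECT IPLOCID, IPADDR_TO
  FROM cdIPAddressToLocation
  WHERE IPADDR_FROM <= INET_ATON('203.0.113.7')
  ORDER BY IPADDR_FROM DESC
  LIMIT 1
) a
JOIN cdIPLocation l ON l.IPLOCID = a.IPLOCID
WHERE a.IPADDR_TO >= INET_ATON('203.0.113.7');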
Using trial and error I've discovered that removing one join from the query below makes it run around 30 times quicker. Can someone explain why this is, and whether it's possible to optimise the query to include the additional join without the performance hit?
This is a screenshot of the EXPLAIN, which shows that the index isn't being used for the user_groups table:
http://i.imgur.com/9VDuV.png
This is the original query:
SELECT `comments`.`comment_id`, `comments`.`comment_html`, `comments`.`comment_time_added`, `comments`.`comment_has_attachments`, `users`.`user_name`, `users`.`user_id`, `users`.`user_comments_count`, `users`.`user_time_registered`, `users`.`user_time_last_active`, `user_profile`.`user_avatar`, `user_profile`.`user_signature_html`, `user_groups`.`user_group_icon`, `user_groups`.`user_group_name`
FROM (`comments`)
INNER JOIN `users` ON `comments`.`comment_user_id` = `users`.`user_id`
INNER JOIN `user_profile` ON `users`.`user_id` = `user_profile`.`user_id`
INNER JOIN `user_groups` ON `users`.`user_group_id` = `user_groups`.`user_group_id`
WHERE `comments`.`comment_enabled` = 1
AND `comments`.`comment_content_id` = 12
ORDER BY `comments`.`comment_time_added` ASC
LIMIT 20
If I remove the "user_groups" join then the query runs 30 times quicker as mentioned above.
SELECT `comments`.`comment_id`, `comments`.`comment_html`, `comments`.`comment_time_added`, `comments`.`comment_has_attachments`, `users`.`user_name`, `users`.`user_id`, `users`.`user_comments_count`, `users`.`user_time_registered`, `users`.`user_time_last_active`, `user_profile`.`user_avatar`, `user_profile`.`user_signature_html`
FROM (`comments`)
INNER JOIN `users` ON `comments`.`comment_user_id` = `users`.`user_id`
INNER JOIN `user_profile` ON `users`.`user_id` = `user_profile`.`user_id`
WHERE `comments`.`comment_enabled` = 1
AND `comments`.`comment_content_id` = 12
ORDER BY `comments`.`comment_time_added` ASC
LIMIT 20
My tables are below; can anyone offer any insight into how to avoid a performance hit when including the user_groups table?
--
-- Table structure for table `comments`
--
CREATE TABLE IF NOT EXISTS `comments` (
`comment_id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`comment_content_id` int(10) unsigned NOT NULL,
`comment_user_id` mediumint(6) unsigned NOT NULL,
`comment_original` text NOT NULL,
`comment_html` text NOT NULL,
`comment_time_added` int(10) unsigned NOT NULL,
`comment_time_updated` int(10) unsigned NOT NULL,
`comment_enabled` tinyint(1) NOT NULL DEFAULT '0',
`comment_is_spam` tinyint(1) NOT NULL DEFAULT '0',
`comment_has_attachments` tinyint(1) unsigned NOT NULL,
`comment_has_edits` tinyint(1) NOT NULL,
PRIMARY KEY (`comment_id`),
KEY `comment_user_id` (`comment_user_id`),
KEY `comment_content_id` (`comment_content_id`),
KEY `comment_is_spam` (`comment_is_spam`),
KEY `comment_enabled` (`comment_enabled`),
KEY `comment_time_updated` (`comment_time_updated`),
KEY `comment_time_added` (`comment_time_added`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 AUTO_INCREMENT=352 ;
-- --------------------------------------------------------
--
-- Table structure for table `users`
--
CREATE TABLE IF NOT EXISTS `users` (
`user_id` mediumint(6) unsigned NOT NULL AUTO_INCREMENT,
`user_ipb_id` int(10) unsigned DEFAULT NULL,
`user_activated` tinyint(1) NOT NULL DEFAULT '0',
`user_name` varchar(64) CHARACTER SET latin1 NOT NULL,
`user_email` varchar(255) NOT NULL,
`user_password` varchar(40) NOT NULL,
`user_content_count` int(10) unsigned NOT NULL DEFAULT '0',
`user_comments_count` int(10) unsigned NOT NULL DEFAULT '0',
`user_salt` varchar(8) NOT NULL,
`user_api_key` varchar(32) NOT NULL,
`user_auth_key` varchar(32) DEFAULT NULL,
`user_paypal_key` varchar(32) DEFAULT NULL,
`user_timezone_id` smallint(3) unsigned NOT NULL,
`user_group_id` tinyint(3) unsigned NOT NULL,
`user_custom_permission_mask_id` tinyint(3) unsigned DEFAULT NULL,
`user_lang_id` tinyint(2) unsigned NOT NULL,
`user_time_registered` int(10) unsigned NOT NULL,
`user_time_last_active` int(10) unsigned NOT NULL,
PRIMARY KEY (`user_id`),
UNIQUE KEY `user_email` (`user_email`),
KEY `user_group_id` (`user_group_id`),
KEY `user_auth_key` (`user_auth_key`),
KEY `user_api_key` (`user_api_key`),
KEY `user_custom_permission_mask_id` (`user_custom_permission_mask_id`),
KEY `user_time_last_active` (`user_time_last_active`),
KEY `user_paypal_key` (`user_paypal_key`),
KEY `user_name` (`user_name`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 AUTO_INCREMENT=33 ;
-- --------------------------------------------------------
--
-- Table structure for table `user_groups`
--
CREATE TABLE IF NOT EXISTS `user_groups` (
`user_group_id` tinyint(3) unsigned NOT NULL AUTO_INCREMENT,
`user_group_name` varchar(32) NOT NULL,
`user_group_permission_mask_id` tinyint(3) unsigned NOT NULL,
`user_group_icon` varchar(32) DEFAULT NULL,
PRIMARY KEY (`user_group_id`),
KEY `user_group_permission_mask_id` (`user_group_permission_mask_id`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 AUTO_INCREMENT=8 ;
-- --------------------------------------------------------
--
-- Table structure for table `user_profile`
--
CREATE TABLE IF NOT EXISTS `user_profile` (
`user_id` mediumint(8) unsigned NOT NULL,
`user_signature_original` text,
`user_signature_html` text,
`user_avatar` varchar(64) DEFAULT NULL,
`user_steam_id` varchar(64) DEFAULT NULL,
`user_ps_id` varchar(16) DEFAULT NULL,
`user_xbox_id` varchar(64) DEFAULT NULL,
`user_wii_id` varchar(64) DEFAULT NULL,
PRIMARY KEY (`user_id`),
KEY `user_steam_id` (`user_steam_id`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8;
Most database engines calculate their query plan based on statistics about the tables - for instance, if a table has a small number of rows, it's quicker to go to the table than the index. Those statistics are maintained during "normal" operation - e.g. inserts, updates and deletes - but can get out of sync when table definitions are changed, or when you do bulk inserts.
If you see unexpected behaviour in the query plan, you can force the database to update its statistics; in MySQL you can use OPTIMIZE TABLE, which does everything including re-ordering the table itself, or ANALYZE TABLE, which only updates the index statistics.
This is hard to do on production environments, as both operations lock the tables; if you can possibly negotiate a maintenance window, that's by far the simplest way to deal with the problem.
It's worth measuring performance of "optimize table" - on well-specified hardware, it should take only a couple of seconds for "normal" size tables (up to low millions of records, with only a few indices). That might mean you can have an "informal" maintenance window - you don't take the application off-line, you just accept that some users will have degraded performance while you're running the scripts.
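For reference, a minimal sketch of those two maintenance statements, using the table names from the question:
-- refresh key-distribution statistics only (comparatively quick)
ANALYZE TABLE comments, users, user_profile, user_groups;

-- rebuild the table and its indexes as well (slower; locks the table)
OPTIMIZE TABLE user_groups;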
MySQL has an EXPLAIN feature which will help you to understand the query:
$ mysql
> EXPLAIN SELECT `comments`.`comment_id`, `comments`.`comment_html`,`comments`.`comment_time_added`, `comments`.`comment_has_attachments`, `users`.`user_name`, `users`.`user_id`, `users`.`user_comments_count`, `users`.`user_time_registered`, `users`.`user_time_last_active`, `user_profile`.`user_avatar`, `user_profile`.`user_signature_html`
FROM (`comments`)
INNER JOIN `users` ON `comments`.`comment_user_id` = `users`.`user_id`
INNER JOIN `user_profile` ON `users`.`user_id` = `user_profile`.`user_id`
WHERE `comments`.`comment_enabled` = 1
AND `comments`.`comment_content_id` = 12
ORDER BY `comments`.`comment_time_added` ASC
LIMIT 20
MySQL might simply be missing an index, or skipping one.
You can learn more about understanding the output of EXPLAIN in the documentation (a little hard-core), or better yet from a simpler explanation (ignore the fact that it's on a Java site).
More than likely, the amount of data, or an outdated or incomplete index, means that MySQL is needlessly doing a table scan. When you see table scans or sequential searches, you can often easily see which field is missing an index, or which index is not usable.
Could you please try this one (you can remove the join with user_groups)? It can be faster if the query retrieves a small data set from the comments table:
SELECT
comments.comment_id, comments.comment_html, comments.comment_time_added, comments.comment_has_attachments, users.user_name, users.user_id, users.user_comments_count, users.user_time_registered, users.user_time_last_active, user_profile.user_avatar, user_profile.user_signature_html, user_groups.user_group_icon, user_groups.user_group_name
FROM
(SELECT * FROM comments WHERE comment_content_id = 12 AND comment_enabled = 1) comments
INNER JOIN users ON comments.comment_user_id = users.user_id
INNER JOIN user_profile ON users.user_id = user_profile.user_id
INNER JOIN user_groups ON users.user_group_id = user_groups.user_group_id
ORDER BY comments.comment_time_added ASC
LIMIT 20
Try using LEFT JOINs on the non-null relations.
It seems that since inner joins are always symmetric, MySQL will reorder the joins to use the best-looking (typically smallest) table first.
Since left joins aren't always symmetric, MySQL won't reorder them, and thus you can use them to force the table order. However, with a non-null field, LEFT and INNER joins are equivalent, so your results won't change.
The table order will determine which indices are used, which can greatly impact performance.
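A sketch of that trick applied to the original query (select list abbreviated); because comment_user_id, user_group_id and the joined user_id columns are NOT NULL, the result is identical to the INNER JOIN version, but the join order is pinned with comments first:
SELECT comments.comment_id, comments.comment_html, comments.comment_time_added,
       users.user_name, user_profile.user_avatar,
       user_groups.user_group_icon, user_groups.user_group_name
FROM comments
LEFT JOIN users ON comments.comment_user_id = users.user_id
LEFT JOIN user_profile ON users.user_id = user_profile.user_id
LEFT JOIN user_groups ON users.user_group_id = user_groups.user_group_id
WHERE comments.comment_enabled = 1
AND comments.comment_content_id = 12
ORDER BY comments.comment_time_added ASC
LIMIT 20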