I want to avoid redundant indexes, so what is the best composite index for these two queries? As I understand it, the two queries cannot share the same composite index, since one needs country and the other needs product_id. But if I create the indexes below, will they be redundant and hurt DB performance?
combine merchant_id, created_at and product_id
combine merchant_id, created_at and country
Query 1
SELECT ... FROM (   -- outer select list not shown
    SELECT * FROM shop_order
    WHERE shop_order.merchant_id = ?
      AND shop_order.created_at >= TIMESTAMP(?)
      AND shop_order.created_at <= TIMESTAMP(?)
      AND shop_order.product_id = ?) AS mytable
WHERE product_id IS NOT NULL
GROUP BY product_id, title;
Query 2
SELECT COALESCE(SUM(total_price_usd),0) AS revenue,
COUNT(*) as total_order, COALESCE(province, 'Unknown') AS name
FROM shop_order
WHERE DATE(created_at) >= '2021-02-08 13:37:42'
AND DATE(created_at) <= '2021-02-14 22:44:13'
AND merchant_id IN (18,19,20,1)
AND country = 'Malaysia' GROUP BY province;
Table structure
CREATE TABLE `shop_order` (
`id` bigint(20) NOT NULL AUTO_INCREMENT,
`merchant_id` bigint(20) DEFAULT NULL,
`order_id` bigint(20) NOT NULL,
`customer_id` bigint(20) DEFAULT NULL,
`customer_orders_count` varchar(45) DEFAULT NULL,
`customer_total_spent` varchar(45) DEFAULT NULL,
`customer_email` varchar(100) DEFAULT NULL,
`customer_last_order_name` varchar(45) DEFAULT NULL,
`currency` varchar(10) NOT NULL,
`total_price` decimal(20,8) NOT NULL,
`subtotal_price` decimal(20,8) NOT NULL,
`transaction_fee` decimal(20,8) DEFAULT NULL,
`total_discount` decimal(20,8) DEFAULT '0.00000000',
`shipping_fee` decimal(20,8) DEFAULT '0.00000000',
`total_price_usd` decimal(20,8) DEFAULT NULL,
`transaction_fee_usd` decimal(20,8) DEFAULT NULL,
`country` varchar(50) DEFAULT NULL,
`province` varchar(45) DEFAULT NULL,
`processed_at` datetime DEFAULT NULL,
`refunds` json DEFAULT NULL,
`ffm_status` varchar(50) DEFAULT NULL,
`gateway` varchar(45) DEFAULT NULL,
`confirmed` tinyint(1) DEFAULT NULL,
`cancelled_at` datetime DEFAULT NULL,
`cancel_reason` varchar(100) DEFAULT NULL,
`created` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP,
`updated` datetime DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
`order_number` bigint(1) DEFAULT NULL,
`created_at` datetime DEFAULT NULL,
`financial_status` varchar(45) DEFAULT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `shop_order_unique` (`merchant_id`,`order_id`),
KEY `merchant_id` (`merchant_id`),
KEY `combine_idx1` (`country`,`merchant_id`,`created_at`)
) ENGINE=InnoDB AUTO_INCREMENT=2237 DEFAULT CHARSET=utf8mb4;
Please help me
Query 1:
INDEX(merchant_id, product_id, -- Put columns for "=" tests first (in any order)
created_at) -- then range
Query 2: First, avoid hiding created_at in a function call (DATE()); it prevents an index from being used on that column.
INDEX(country, -- "="
merchant_id, -- IN
created_at) -- range (after removing DATE)
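As concrete DDL (a sketch: the index name here is arbitrary, the existing combine_idx1 already matches the Query 2 recommendation, and I am assuming the two timestamps are meant as plain inclusive datetime bounds once DATE() is removed):
ALTER TABLE shop_order
  ADD INDEX idx_merchant_product_created (merchant_id, product_id, created_at);

SELECT COALESCE(SUM(total_price_usd), 0) AS revenue,
       COUNT(*) AS total_order,
       COALESCE(province, 'Unknown') AS name
FROM shop_order
WHERE created_at >= '2021-02-08 13:37:42'   -- no DATE() wrapper, so the index can be used
  AND created_at <= '2021-02-14 22:44:13'
  AND merchant_id IN (18, 19, 20, 1)
  AND country = 'Malaysia'
GROUP BY province;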
You are correct in saying that you need separate indexes for those queries. And possibly some of your existing indexes are needed for other queries.
Also, you already have a redundant index. Drop KEY merchant_id (merchant_id) -- you have at least one other index starting with merchant_id.
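In DDL form (assuming no other query depends on that exact single-column index):
ALTER TABLE shop_order DROP INDEX merchant_id;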
Having extra indexes is only a minor performance drag, and the hit occurs during INSERT or when you UPDATE a column that is part of an index. Generally, the benefit of the 'right' index for a SELECT outweighs the hit to the writes.
Having multiple unique indexes is somewhat of a burden. Do you really need id, given that you have a "natural" PK made up of those two columns (merchant_id, order_id)? Check whether other tables need to join on id.
Consider shrinking many of the column sizes. BIGINT takes 8 bytes and has a range that is rarely needed. DECIMAL(20,8) takes 10 bytes and allows up to a trillion dollars; this seems excessive, too. Is customer_orders_count a number?
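For illustration only, a sketch of what such shrinking could look like; the exact types are assumptions, so verify the real value ranges (and that customer_orders_count really holds numbers) before applying anything like this:
ALTER TABLE shop_order
  MODIFY merchant_id INT UNSIGNED,                 -- assumes merchant ids fit in 32 bits
  MODIFY customer_orders_count SMALLINT UNSIGNED,  -- assumes the column holds small counts
  MODIFY total_price DECIMAL(12,2) NOT NULL;       -- assumes prices below 10 billion, 2 decimals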
Related
I have 2 tables. The first, called stazioni, stores live weather data from some weather stations; the second, called archivio2, stores archived daily data. The two tables share the station ID (ID on stazioni, IDStazione on archivio2).
stazioni (1,743 rows)
CREATE TABLE `stazioni` (
`ID` int(10) NOT NULL,
`user` varchar(100) NOT NULL,
`nome` varchar(100) NOT NULL,
`email` varchar(50) NOT NULL,
`localita` varchar(100) NOT NULL,
`provincia` varchar(50) NOT NULL,
`regione` varchar(50) NOT NULL,
`altitudine` int(10) NOT NULL,
`stazione` varchar(100) NOT NULL,
`schermo` varchar(50) NOT NULL,
`installazione` varchar(50) NOT NULL,
`ubicazione` varchar(50) NOT NULL,
`immagine` varchar(100) NOT NULL,
`lat` double NOT NULL,
`longi` double NOT NULL,
`file` varchar(255) NOT NULL,
`url` varchar(255) NOT NULL,
`temperatura` decimal(10,1) DEFAULT NULL,
`umidita` decimal(10,1) DEFAULT NULL,
`pressione` decimal(10,1) DEFAULT NULL,
`vento` decimal(10,1) DEFAULT NULL,
`vento_direzione` decimal(10,1) DEFAULT NULL,
`raffica` decimal(10,1) DEFAULT NULL,
`pioggia` decimal(10,1) DEFAULT NULL,
`rate` decimal(10,1) DEFAULT NULL,
`minima` decimal(10,1) DEFAULT NULL,
`massima` decimal(10,1) DEFAULT NULL,
`orario` varchar(16) DEFAULT NULL,
`online` int(1) NOT NULL DEFAULT '0',
`tipo` int(1) NOT NULL DEFAULT '0',
`webcam` varchar(255) DEFAULT NULL,
`webcam2` varchar(255) DEFAULT NULL,
`condizioni` varchar(255) DEFAULT NULL,
`Data2` datetime DEFAULT NULL
) ENGINE=MyISAM DEFAULT CHARSET=utf8;
archivio2 (2,127,347 rows)
CREATE TABLE `archivio2` (
`ID` int(10) NOT NULL,
`IDStazione` int(4) NOT NULL DEFAULT '0',
`localita` varchar(100) NOT NULL,
`temp_media` decimal(10,1) DEFAULT NULL,
`temp_minima` decimal(10,1) DEFAULT NULL,
`temp_massima` decimal(10,1) DEFAULT NULL,
`pioggia` decimal(10,1) DEFAULT NULL,
`pressione` decimal(10,1) DEFAULT NULL,
`vento` decimal(10,1) DEFAULT NULL,
`raffica` decimal(10,1) DEFAULT NULL,
`records` int(10) DEFAULT NULL,
`Data2` datetime DEFAULT NULL
) ENGINE=MyISAM DEFAULT CHARSET=utf8;
The indexes that I set
-- Indexes for table `archivio2`
--
ALTER TABLE `archivio2`
ADD PRIMARY KEY (`ID`),
ADD KEY `IDStazione` (`IDStazione`),
ADD KEY `Data2` (`Data2`);
-- Indexes for table `stazioni`
--
ALTER TABLE `stazioni`
ADD PRIMARY KEY (`ID`),
ADD KEY `Tipo` (`Tipo`);
ALTER TABLE `stazioni` ADD FULLTEXT KEY `localita` (`localita`);
On a map, I use a calendar to pick the date for searching data in the archivio2 table, with this INNER JOIN query (using an example date):
SELECT *, c.pioggia AS rain, c.raffica AS raff, c.vento AS wind, c.pressione AS press
FROM stazioni as o
INNER JOIN archivio2 as c ON o.ID = c.IDStazione
WHERE c.Data2 LIKE '2019-01-01%'
All works fine, but the time needed to show the results is really slow (4-5 seconds), even though the query execution time seems to be OK (about 0.5-1.0s).
I tried to execute the query in phpMyAdmin, and the results are the same: execution time is quick, but the time to show the results is extremely slow.
EXPLAIN query result
id select_type table type possible_keys key key_len ref rows Extra
1 SIMPLE o ALL PRIMARY,ID NULL NULL NULL 1743 NULL
1 SIMPLE c ref IDStazione,Data2 IDStazione 4 sccavzuq_rete.o.ID 1141 Using where
UPDATE: the query runs fine if I remove the index from IDStazione. But that way I lose all the advantages and speed on other queries... why does only that query become slow when I put an index on that field?
In your WHERE clause
WHERE c.Data2 LIKE '2019-01-01%'
the value of Data2 must be cast to a string. No index can be used for that condition.
Change it to
WHERE c.Data2 >= '2019-01-01' AND c.Data2 < '2019-01-01' + INTERVAL 1 DAY
This way the engine should be able to use the index on (Data2).
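Applied to the query from the question, a sketch of the rewritten statement looks like this:
SELECT *, c.pioggia AS rain, c.raffica AS raff, c.vento AS wind, c.pressione AS press
FROM stazioni AS o
INNER JOIN archivio2 AS c ON o.ID = c.IDStazione
WHERE c.Data2 >= '2019-01-01'
  AND c.Data2 < '2019-01-01' + INTERVAL 1 DAY;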
Now check the EXPLAIN result. I would expect that the table order is swapped and that the key column will show Data2 (for c) and ID (for o).
(Fixing the DATE is the main performance solution; here is a less critical issue.)
The tables are much bigger than necessary. Size impacts disk space and, to some extent, speed.
You have 1743 stations, yet the datatype is a 32-bit (4-byte) number (INT). SMALLINT UNSIGNED would allow for 64K stations and use only 2 bytes.
Does it get really, really hot there? Like 999999999.9 degrees? DECIMAL(10,1) takes 5 bytes; DECIMAL(4,1) takes only 3 and allows up to 999.9 degrees. DECIMAL(3,1) has a max of 99.9 and takes only 2 bytes.
What is "localita varchar(100)" doing in the big table? Seems like you could JOIN to the stations table when you need it? Removing that might cut the table size in half.
Important update (explanation):
I realized that my query with a single DESC order is 10 times slower than the same query with ASC order. The ordered field has an index. Is this normal behavior?
Original question with queries:
I have a MySQL table with a few hundred product items. It's surprising (to me) how much two similar SQL queries differ in terms of performance, and I don't know why. Can you please give me a hint or explain why the difference is so huge?
This query takes 3ms:
SELECT
*
FROM
`product_items`
WHERE
(product_items.shop_active = 1)
AND (product_items.active = 1)
AND (product_items.active_category_id is not null)
AND (has_picture is not null)
AND (price_orig is not null)
AND (category_min_discount IS NOT NULL)
AND (product_items.slug is not null)
AND `product_items`.`active_category_id` IN (6797, 5926, 5806, 6852)
ORDER BY
price asc
LIMIT 1
But the following query already takes 169ms... The only difference is that the ORDER BY clause contains 2 columns. Every product has a "price" value, while only roughly 1% of products have a "price_top" value.
SELECT
*
FROM
`product_items`
WHERE
(product_items.shop_active = 1)
AND (product_items.active = 1)
AND (product_items.active_category_id is not null)
AND (has_picture is not null)
AND (price_orig is not null)
AND (category_min_discount IS NOT NULL)
AND (product_items.slug is not null)
AND `product_items`.`active_category_id` IN (6797, 5926, 5806, 6852)
ORDER BY
price asc,
price_top desc
LIMIT 1
The table structure looks like this:
CREATE TABLE `product_items` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`shop_id` int(11) DEFAULT NULL,
`item_id` varchar(255) DEFAULT NULL,
`productname` varchar(255) DEFAULT NULL,
`description` text,
`url` text,
`url_hash` varchar(255) DEFAULT NULL,
`img_url` text,
`price` decimal(10,2) DEFAULT NULL,
`price_orig` decimal(10,2) DEFAULT NULL,
`discount` decimal(10,2) DEFAULT NULL,
`discount_percent` decimal(10,2) DEFAULT NULL,
`manufacturer` varchar(255) DEFAULT NULL,
`delivery_date` varchar(255) DEFAULT NULL,
`categorytext` text,
`created_at` datetime NOT NULL,
`updated_at` datetime NOT NULL,
`active_category_id` int(11) DEFAULT NULL,
`shop_active` int(11) DEFAULT NULL,
`active` int(11) DEFAULT '0',
`price_top` decimal(10,2) NOT NULL DEFAULT '0.00',
`attention_priority` int(11) DEFAULT NULL,
`attention_priority_over` int(11) DEFAULT NULL,
`has_picture` varchar(255) DEFAULT NULL,
`size` varchar(255) DEFAULT NULL,
`category_min_discount` int(11) DEFAULT NULL,
`slug` varchar(255) DEFAULT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `index_product_items_on_url_hash` (`url_hash`),
KEY `index_product_items_on_shop_id` (`shop_id`),
KEY `index_product_items_on_active_category_id` (`active_category_id`),
KEY `index_product_items_on_productname` (`productname`),
KEY `index_product_items_on_price` (`price`),
KEY `index_product_items_on_discount_percent` (`discount_percent`),
KEY `index_product_items_on_price_top` (`price_top`)
) ENGINE=InnoDB AUTO_INCREMENT=1715708 DEFAULT CHARSET=utf8;
UPDATE
I realized that the difference is mainly in the direction of the ordering: if I use asc+asc for both columns, the query takes around 6ms; if I use asc+desc or desc+asc, the query takes around 160ms.
Thank you.
If creating an index to help ORDER BY doesn't help, try creating an index that helps both WHERE and ORDER BY:
CREATE INDEX product_items_i1 ON product_items (
shop_active,
active,
active_category_id,
has_picture,
price_orig,
category_min_discount,
slug,
price,
price_top DESC
)
Obviously, this is a bit clunky, and you'll have to balance the performance gain for the query with the price of maintaining the index.
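One way to check whether the optimizer actually uses the index for the ordering is EXPLAIN on a trimmed-down version of the slow query (a sketch; the IS NOT NULL conditions are omitted here for brevity):
EXPLAIN SELECT *
FROM product_items
WHERE shop_active = 1
  AND active = 1
  AND active_category_id IN (6797, 5926, 5806, 6852)
ORDER BY price ASC, price_top DESC
LIMIT 1;
If the Extra column still shows "Using filesort", the mixed-direction ORDER BY is not being resolved from the index; note that the price_top DESC key part is only honored on MySQL 8.0+, while older versions parse but ignore the DESC keyword in index definitions.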
I have a table (MySQL fiddle) with about 500k records.
CREATE TABLE IF NOT EXISTS `p_transactions` (
`transaction_id` bigint(10) unsigned NOT NULL,
`amount` decimal(19,2) NOT NULL,
`dt` bigint(1) NOT NULL,
`transaction_status` int(1) NOT NULL,
`transaction_type` varchar(15) NOT NULL,
`payment_method` varchar(25) NOT NULL,
`notes` text NOT NULL,
`member_id` int(10) unsigned NOT NULL,
`new_amount` decimal(19,2) NOT NULL,
`paid_amount` decimal(19,2) NOT NULL,
`secret_code` char(40) NOT NULL,
`internal_status` varchar(40) NOT NULL,
`ip_addr` varchar(15) NOT NULL,
`description` text NOT NULL,
`seller_transaction_id` varchar(50) DEFAULT NULL,
`return_url` varchar(255) DEFAULT NULL,
`fail_url` varchar(255) DEFAULT NULL,
`success_url` varchar(255) DEFAULT NULL,
`result_url` varchar(255) DEFAULT NULL,
`user_fee` decimal(19,3) DEFAULT '0.000',
`currency` char(255) DEFAULT 'USD',
`gateway_transaction_id` char(255) DEFAULT NULL,
`load_amount` decimal(19,2) NOT NULL,
`transaction_mode` varchar(1) NOT NULL DEFAULT '',
`p_fee` decimal(19,2) NOT NULL,
`country` varchar(2) NOT NULL,
`email` varchar(255) NOT NULL,
`vat` decimal(19,2) NOT NULL DEFAULT '0.00',
`name` varchar(255) NOT NULL,
`bdate` varchar(255) NOT NULL,
`child_method` varchar(255) NOT NULL,
`processing_fee` decimal(19,2) NOT NULL DEFAULT '0.00',
`flat_fee` varchar(1) NOT NULL DEFAULT 'n',
`user_fee_sum` decimal(19,2) NOT NULL DEFAULT '0.00',
`p_fee_sum` decimal(19,2) NOT NULL DEFAULT '0.00',
`dt_open` bigint(1) NOT NULL DEFAULT '0',
`user_fee_type` varchar(1) NOT NULL DEFAULT 'r',
`custom_gateway_fee` decimal(19,2) NOT NULL DEFAULT '0.00',
`paid_currency` varchar(3) NOT NULL DEFAULT 'USD',
`paid_microtime` bigint(10) unsigned NOT NULL,
`check_ballance` varchar(1) NOT NULL DEFAULT 'n',
PRIMARY KEY (`transaction_id`),
KEY `member_id` (`member_id`),
KEY `payment_method` (`payment_method`),
KEY `child_method` (`child_method`),
KEY `check_ballance` (`check_ballance`),
KEY `dt` (`dt`),
KEY `transaction_type` (`transaction_type`),
KEY `paid_microtime` (`paid_microtime`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
When I execute query
SELECT *
FROM `p_transactions`
WHERE dt >= 1517443200
AND dt <= 1523404799
AND member_id = 2051
ORDER BY `paid_microtime` DESC
LIMIT 50;
it runs in 0.000 sec (+ 0.016 sec network),
but if I add the condition AND transaction_status = 7 to the query:
SELECT *
FROM `p_transactions`
WHERE dt >= 1517443200
AND dt <= 1523404799
AND member_id = 2051
AND transaction_status = 7
ORDER BY `paid_microtime` DESC
LIMIT 50
the query runs in 12.938 sec (+ 0.062 sec network).
Please help me find out the reason for this behavior.
PS: There was an index on transaction_status, and it increased the execution time even more.
Add a suitable index, such as:
ON p_transactions (member_id, dt)
or
ON p_transactions (member_id, dt, transaction_status)
We want the member_id column as the leading column in the index because of the equality comparison, and we expect the result to be a substantially smaller subset of the entire table. We want the dt column after that, because of the "range scan" on it.
Including additional columns in the index may allow MySQL to check that condition using values from the index, without a visit/lookup of the row in the underlying table pages.
Either of these indexes would be suitable for both of the queries shown in the question.
Use EXPLAIN to see the execution plan... which index is being used.
There's really no getting around the "Using filesort" operation, since we're pulling a small subset of the entire table.
(If we were pulling the entire table (or a huge subset), we might be able to avoid an expensive sort operation with an access plan that pulls rows in reverse index order, given an index with paid_microtime as its leading column.)
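For example, a sketch of that EXPLAIN check against the slower query from the question:
EXPLAIN SELECT *
FROM p_transactions
WHERE dt >= 1517443200
  AND dt <= 1523404799
  AND member_id = 2051
  AND transaction_status = 7
ORDER BY paid_microtime DESC
LIMIT 50;
The key column of the output shows which index was chosen, and Extra shows whether a filesort is involved.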
For the original query have these
INDEX(member_id, dt)
INDEX(member_id, paid_microtime)
For the secondary query, have
INDEX(transaction_status, member_id, dt)
INDEX(transaction_status, member_id, paid_microtime)
Without getting into the details of the distribution of the data values, we cannot explain why one query is so much slower; however, my 4 indexes should make both queries run faster most of the time.
More discussion of how I came up with those indexes (and why (member_id, dt, transaction_status) is not so good): http://mysql.rjweb.org/doc.php/index_cookbook_mysql
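As one concrete statement (a sketch; the index names are arbitrary):
ALTER TABLE p_transactions
  ADD INDEX ix_member_dt (member_id, dt),
  ADD INDEX ix_member_paid (member_id, paid_microtime),
  ADD INDEX ix_status_member_dt (transaction_status, member_id, dt),
  ADD INDEX ix_status_member_paid (transaction_status, member_id, paid_microtime);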
I am trying to query a large table (senddb.order_histories) that has close to 50M rows, and this is the MySQL query I am using:
FIRST APPROACH- inner join:
select a.id,
a.order_number,
a.sku_id,
a.fulfillment_status,
a.modified_by,
a.created_at,
a.updated_at
from senddb.order_line_items a
inner join (
select order_line_item_id,
order_number,
order_status,
order_status_description,
action,
modified_by,
created_at,
max(updated_at) as updated_at
from senddb.order_histories
where order_status in ('x','y','z')
and fulfillment_location = 'abcd'
group by order_line_item_id) as b
on a.id = b.order_line_item_id
and a.fulfillment_status = '2';
EXPLAIN output :
SECOND APPROACH- nested select:
select a.id,
a.order_number,
a.sku_id,
a.fulfillment_status,
a.modified_by,
a.created_at,
a.updated_at
from senddb.order_line_items a
where a.fulfillment_status = '2'
and a.id in (
select b.order_line_item_id from(
select order_line_item_id,
order_number,
order_status,
order_status_description,
action,
modified_by,
created_at,
max(updated_at) as updated_at
from senddb.order_histories
where
order_status in ('x','y','z')
and fulfillment_location = 'abcd'
group by order_line_item_id) as b);
I believe a nested select is a bad approach on large data, but I added it here anyway because it worked on my sample set. In any case, both queries eventually time out after 600 seconds with the message: Error Code: 2013. Lost connection to MySQL server during query.
I would like to know if there are any ways to alter the query to make it run faster. I have already tried reducing the columns in the inner select / inner join, but that should not really be an issue IMO. I also looked up a solution that says "create a clustered index", but I wasn't really able to follow it. Any help is appreciated.
TABLE order_histories :
order_histories CREATE TABLE `order_histories` (
`id` int(4) unsigned NOT NULL AUTO_INCREMENT,
`order_number` varchar(24) DEFAULT NULL,
`order_status_description` varchar(255) DEFAULT NULL,
`datetime_stamp` datetime DEFAULT NULL,
`action` varchar(32) DEFAULT NULL,
`fulfillment_location` int(8) DEFAULT NULL,
`order_status` int(8) DEFAULT NULL,
`user_id` int(8) DEFAULT NULL,
`created_at` datetime DEFAULT NULL,
`updated_at` datetime DEFAULT NULL,
`modified_by` varchar(32) DEFAULT NULL,
`order_line_item_id` int(11) DEFAULT NULL,
`pooled` tinyint(1) DEFAULT '0',
PRIMARY KEY (`id`),
KEY `order_histories_ecash_idx` (`order_number`),
KEY `order_line_item_id` (`order_line_item_id`)
) ENGINE=InnoDB AUTO_INCREMENT=454738178 DEFAULT CHARSET=latin1
TABLE order_line_items :
order_line_items CREATE TABLE `order_line_items` (
`id` int(4) unsigned NOT NULL AUTO_INCREMENT,
`order_number` varchar(24) DEFAULT NULL,
`sku_id` int(8) DEFAULT NULL,
`original_price` float DEFAULT NULL,
`dept_description` varchar(100) DEFAULT NULL,
`description` varchar(100) DEFAULT NULL,
`quantity_ordered` int(8) DEFAULT NULL,
`gift_indicator` char(1) DEFAULT NULL,
`gift_wrap_flag` char(1) DEFAULT NULL,
`shipping_record_flag` char(1) DEFAULT NULL,
`gift_comments` varchar(100) DEFAULT NULL,
`item_status` char(1) DEFAULT NULL,
`tax_amount` float DEFAULT NULL,
`tax_rate` float DEFAULT NULL,
`upc` varchar(20) DEFAULT NULL,
`final_price` float DEFAULT NULL,
`line_number` int(8) DEFAULT NULL,
`master_line_number` int(8) DEFAULT NULL,
`gift_wrap_flag_type` char(1) DEFAULT NULL,
`color_code` varchar(4) DEFAULT NULL,
`size_id` varchar(6) DEFAULT NULL,
`width_id` varchar(6) DEFAULT NULL,
`brand` varchar(15) DEFAULT NULL,
`vpn` varchar(30) DEFAULT NULL,
`dept_number` int(8) DEFAULT NULL,
`class_number` int(8) DEFAULT NULL,
`non_merch_item` char(1) DEFAULT NULL,
`created_at` datetime DEFAULT NULL,
`updated_at` datetime DEFAULT NULL,
`modified_by` varchar(32) DEFAULT NULL,
`chain_id` int(11) DEFAULT NULL,
`fulfillment_location` int(11) DEFAULT NULL,
`fulfillment_date` datetime DEFAULT NULL,
`fulfillment_status` int(11) DEFAULT NULL,
`fulfillment_sales_associate` int(11) DEFAULT NULL,
`gift_wrap_line_number` int(11) DEFAULT NULL,
`shipping_type` int(11) DEFAULT NULL,
`order_track_info_id` int(11) DEFAULT NULL,
`store_tlog_updated` varchar(1) DEFAULT NULL,
`shipping_tlx_code` int(11) DEFAULT NULL,
`store_closed` tinyint(1) DEFAULT NULL,
`flags` int(11) DEFAULT NULL,
`deal_based_index` int(11) DEFAULT NULL,
`tlog_calc_ret_price` float DEFAULT NULL,
`tlog_amount` float DEFAULT NULL,
`tlog_retail_price` float DEFAULT NULL,
`tlog_ext_amount` float DEFAULT NULL,
`tlog_flag_1` int(11) DEFAULT NULL,
`tlog_flag_2` int(11) DEFAULT NULL,
`tlog_flag_3` int(11) DEFAULT NULL,
`time_remaining` int(11) DEFAULT NULL,
PRIMARY KEY (`id`),
KEY `order_line_items_ecash_idx` (`order_number`),
KEY `order_line_item_fulfillment_location_idx` (`fulfillment_location`),
KEY `order_line_item_fulfillment_status_idx` (`fulfillment_status`),
KEY `upc_idx` (`upc`),
KEY `sku_id_idx` (`sku_id`),
KEY `order_line_items_idx001` (`order_number`,`id`,`fulfillment_status`),
KEY `order_track_info_id` (`order_track_info_id`),
KEY `shipping_type_idx` (`shipping_type`,`non_merch_item`) USING BTREE
) ENGINE=InnoDB AUTO_INCREMENT=11367052 DEFAULT CHARSET=latin1
This query can be simplified:
select a.id,
a.order_number,
a.sku_id,
a.fulfillment_status,
a.modified_by,
a.created_at,
a.updated_at
from senddb.order_line_items a
inner join senddb.order_histories b on a.id = b.order_line_item_id
where b.order_status in ('x','y','z')
and b.fulfillment_location = 'abcd'
and a.fulfillment_status = '2';
Since you're only selecting values from table a, you don't need to select specific values from table b and can instead just apply your conditions. Beyond that, you need to ensure that b.order_line_item_id has an index on it. You can find more about creating indexes here. I'm not an expert in MySQL, but something similar to this should work if senddb.order_histories.order_line_item_id isn't already the primary key.
CREATE INDEX IX_order_histories_order_line_item_id ON order_histories (order_line_item_id);
You need to read up the optimization section of the MySQL docs. It contains a lot of information on how you can optimize your queries and data sets. The main idea here is to add indexes to the fields that are being used as the criteria in the WHERE clause of the SQL statements.
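For instance, a sketch of a composite index covering the history-side filter columns and the join column (the index name is arbitrary, and whether it helps depends on how selective order_status and fulfillment_location are):
ALTER TABLE order_histories
  ADD INDEX ix_hist_loc_status_item (fulfillment_location, order_status, order_line_item_id);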
Basically, both of your alternatives are using a "sub-SELECT", not an INNER JOIN.
The syntax of a true JOIN is one of the following:
SELECT ...
FROM X INNER JOIN Y USING (field_list)
... or ...
SELECT ...
FROM X INNER JOIN Y ON (x.field1 = y.field2) ...
But in both cases the objects being joined are tables or views.
I'm going to presume ... admittedly, without checking ... that Nick Larsen's answer #1 adequately re-expresses your original query using JOINs.
(Notice how, in his answer, the shorthand identifiers A and B are introduced as referring to each of the two table-names mentioned in his query.)
Firstly, you need to decide whether a 50-million-row result set is really what you are asking for. Big data tables are not there so that you can select all their rows; they are there so that you can ask them questions using SQL queries. SQL is a query language, not a data loading language.
What's your purpose? If you want to copy the data, you can do that by loading it in chunks, for example 1000 rows per query in a loop (see the sketch below); if you are loading the data for processing, you can do that the same way.
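A minimal sketch of that kind of chunked loading, using keyset pagination on the primary key (the column list and the 1000-row batch size are just illustrative):
SELECT id, order_number, sku_id, fulfillment_status, updated_at
FROM senddb.order_line_items
WHERE id > ?      -- last id seen in the previous batch; start with 0
ORDER BY id
LIMIT 1000;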
If you want to derive statistical information, you can use an outer join and return a small number of rows using aggregate functions. But you shouldn't do that either; what you "should" do is decide what you want from the table and, preferably, run aggregate functions to store the useful information in a different table (mostly SELECT ... INTO queries). You should never need to join a table of 50 million records in the first place.
Telling you how to do something wrong using indexes wouldn't be the right thing here.
I have two tables: a property table and its related photos. One property may have many photos, but I only want any one of its related photos. When I use a LEFT JOIN, the MySQL query becomes too slow.
Here is my query
SELECT `sp_property`.`id` as propertyid, `sp_property`.`property_name`, `sp_property`.`property_price`, `sp_property`.`adv_type`, `sp_property`.`usd`, `images`.`filepath_name`
FROM (`sp_property`)
LEFT JOIN (select id, Max(property_id) as pid,filepath_name
from sp_property_images
group by property_id) `images`
ON `images`.`pid` = `sp_property`.`id`
WHERE `sp_property`.`published` = 'yes'
GROUP BY `propertyid`
ORDER BY `sp_property`.`feature_listing` desc, `submit_date` desc
LIMIT 1,20
CREATE TABLE IF NOT EXISTS `sp_property_images` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`property_id` varchar(100) NOT NULL,
`filepath_name` text,
`label_name` varchar(45) DEFAULT NULL,
`primary` char(10) DEFAULT NULL,
PRIMARY KEY (`id`),
KEY `property_id` (`property_id`),
KEY `id` (`id`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 AUTO_INCREMENT=12941 ;
CREATE TABLE IF NOT EXISTS `sp_property` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`propertytype` varchar(50) NOT NULL,
`adv_type` varchar(45) NOT NULL,
`property_name` text,
`division` varchar(45) NOT NULL,
`township` varchar(45) NOT NULL,
`property_price` decimal(20,2) unsigned DEFAULT NULL,
`price_type` varchar(45) NOT NULL,
`availability` varchar(100) DEFAULT NULL,
`property_address` text,
`p_dimension_length` varchar(45) NOT NULL,
`p_dimension_width` varchar(45) NOT NULL,
`p_dimension_sqft` varchar(45) NOT NULL,
`p_dimension_acre` varchar(45) NOT NULL,
`floor` varchar(45) NOT NULL,
`phone` varchar(100) DEFAULT NULL,
`aircorn` varchar(45) NOT NULL,
`ownership` varchar(45) NOT NULL,
`bedroom` varchar(45) NOT NULL,
`bathroom` varchar(45) NOT NULL,
`special_feature` text,
`amentites` text,
`property_detail` text,
`submit_date` datetime DEFAULT NULL,
`published` varchar(45) NOT NULL,
`published_date` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
`agent_id` varchar(45) NOT NULL,
`source` varchar(45) NOT NULL,
`contact_name` varchar(100) NOT NULL,
`contact_no` varchar(100) NOT NULL,
`contact_address` text NOT NULL,
`contact_email` varchar(100) NOT NULL,
`unit_type` varchar(100) DEFAULT NULL,
`map_lat` varchar(100) DEFAULT NULL,
`map_long` varchar(100) DEFAULT NULL,
`show_map` varchar(3) DEFAULT 'no',
`total_view` bigint(20) NOT NULL DEFAULT '0',
`feature_listing` varchar(10) NOT NULL DEFAULT 'no',
`new_homes_id` int(11) NOT NULL,
`publish_price` int(1) NOT NULL DEFAULT '0',
`usd` decimal(20,2) NOT NULL,
`tag_id` int(11) NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 AUTO_INCREMENT=18524 ;
Have you added indices on your tables? You would need three indices on the following columns:
article_photo.a_id for grouping and joining
article_photo.p_id for sorting
article.a_id for joining (although this is hopefully already the PK of your table)
The result of joins is not guaranteed to be sorted in any order, so you probably want to move your ORDER BY clause from the subquery to the outer query:
SELECT * from `article`
LEFT JOIN (
SELECT * from `article_photo`
GROUP BY `a_id`) as images
ON article.a_id = images.a_id
ORDER BY images.p_id DESC
Also, you have no guarantee about which article_photo row you will get, since you select columns without an aggregate function (and only MySQL will allow you to do that).
Now that the question contains the real tables and all the information essential to answering, here's my take. First, here's your query:
SELECT `sp_property`.`id` as propertyid, `sp_property`.`property_name`, `sp_property`.`property_price`, `sp_property`.`adv_type`, `sp_property`.`usd`, `images`.`filepath_name`
FROM (`sp_property`)
LEFT JOIN (select id, Max(property_id) as pid,filepath_name
from sp_property_images
group by property_id) `images`
ON `images`.`pid` = `sp_property`.`id`
WHERE `sp_property`.`published` = 'yes'
GROUP BY `propertyid`
ORDER BY `sp_property`.`feature_listing` desc, `submit_date` desc
LIMIT 1,20
Let's see: you are joining sp_property_images.property_id with sp_property.id. These columns have different types (int vs. varchar), and I suppose this results in a severe performance penalty (since the values have to be converted to the same type).
Then, you are filtering by sp_property.published, so I suggest adding an index on this column as well. Also, examine if you really need to have this column as varchar. A bool/bit flag probably suffices as well (if it doesn't, an enum might be a better choice still).
Ordering benefits from an index too. Add an index spanning both columns sp_property.feature_listing and sp_property.submit_date.
If all of the above still doesn't help, you might have to remove the sub-select. It might prevent the mysql engine from recognizing (and using!) the index you have defined on the sp_property_images.property_id column.
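For illustration, a sketch of the corresponding DDL; converting property_id assumes it only contains numeric values, so verify that (and back up) before running anything like this:
ALTER TABLE sp_property_images
  MODIFY property_id INT UNSIGNED NOT NULL;     -- same type as sp_property.id
ALTER TABLE sp_property
  ADD INDEX idx_published (published),
  ADD INDEX idx_feature_submit (feature_listing, submit_date);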
This simple query will give you what you asked for, all articles with their photos
SELECT ar.a_id, ar.a_title, ap.p_id, ap.photo_name
FROM article ar
JOIN article_photo ap on ar.a_id = ap.a_id
There is no reason for a LEFT JOIN and grouping there, unless you want some per-article aggregate of the photos?