I have a particular query that takes a long time to execute, while other queries on the same tables run very fast. The query cache is enabled in MySQL, but the query below still takes more than 80 seconds every time, and CPU utilization goes above 100%.
I cannot modify the query because it is generated by Drupal. Is there anything else I can do to improve performance?
The query is:
select count(*)
from (
SELECT slk.key_id AS key_id
FROM slk slk
LEFT JOIN users users ON slk.uid = users.uid
LEFT JOIN node node_users ON users.uid = node_users.uid
AND node_users.type = 'profile'
) count_alias;
Below is the profile information:
+--------------------------------+-----------+
| Status | Duration |
+--------------------------------+-----------+
| starting | 0.000029 |
| checking query cache for query | 0.000093 |
| Opening tables | 0.000210 |
| System lock | 0.000007 |
| Table lock | 0.000075 |
| optimizing | 0.000008 |
| statistics | 0.000113 |
| preparing | 0.000027 |
| executing | 0.000004 |
| Sending data | 66.086903 |
| init | 0.000027 |
| optimizing | 0.000009 |
| executing | 0.000018 |
| end | 0.000003 |
| query end | 0.000004 |
| freeing items | 0.000049 |
| storing result in query cache | 0.000116 |
| removing tmp table | 0.033162 |
| closing tables | 0.000106 |
| logging slow query | 0.000003 |
| logging slow query | 0.000085 |
| cleaning up | 0.000007 |
+--------------------------------+-----------+
EXPLAIN on the query gives:
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
| 1 | PRIMARY | NULL | NULL | NULL | NULL | NULL | NULL | NULL | Select tables optimized away |
| 2 | DERIVED | slk | ALL | NULL | NULL | NULL | NULL | 55862 | |
| 2 | DERIVED | users | eq_ref | PRIMARY | PRIMARY | 4 | gscom.slk.uid | 1 | Using index |
| 2 | DERIVED | node_users | ref | node_type,uid,idx_ctp | uid | 4 | gscom.users.uid | 3 | |
idx_ctp is an index on (uid, type).
The query cache is working and below are the stats.
mysql> show variables like '%query_cache%';
| Variable_name | Value |
| have_query_cache | YES |
| query_cache_limit | 2097152 |
| query_cache_min_res_unit | 4096 |
| query_cache_size | 52428800 |
| query_cache_type | ON |
| query_cache_wlock_invalidate | OFF |
mysql> show status like '%Qcache%';
| Variable_name | Value |
| Qcache_free_blocks | 1255 |
| Qcache_free_memory | 22902848 |
| Qcache_hits | 1484908 |
| Qcache_inserts | 1036344 |
| Qcache_lowmem_prunes | 95086 |
| Qcache_not_cached | 3975 |
| Qcache_queries_in_cache | 14271 |
| Qcache_total_blocks | 30117 |
You need indexes on:
table slk: (uid)
table node_users: (type, uid)
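For example, these could be created as follows (the index names are illustrative; node_users is just an alias for the node table):

ALTER TABLE slk  ADD INDEX idx_slk_uid (uid);
ALTER TABLE node ADD INDEX idx_type_uid (type, uid);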
The query can be rewritten without the subquery as:
SELECT COUNT(*)
FROM slk
LEFT JOIN users
ON slk.uid = users.uid
LEFT JOIN node node_users
ON users.uid = node_users.uid
AND node_users.type = 'profile'
And I'm really not sure why you use LEFT JOIN. You can probably use INNER JOIN and get the same result. Or just use the simple:
SELECT COUNT(*)
FROM slk
This is a poor query. It reads all 55862 rows from the slk table and joins every one of them to two other tables.
JOINs on large result sets are performance killers because MySQL must, at best, perform a seek for each row in the master table to find the corresponding rows in the detail table. If there are too many rows, MySQL will decide it's faster to scan the entire detail table than to perform so many seeks.
Creating a multi-column index on node_users (uid, type), as ypercube suggested, will help the second join (to the node_users table).
Ideally, if this query used INNER JOINs instead of LEFT OUTER JOINs, we could let MySQL traverse it backwards, starting from the node_users.type = 'profile' condition and using the index ypercube suggested, in the order he suggested. However, since these are left joins, MySQL still has to read all rows in the slk table, and will start from there.
The only additional thing you can do to improve the performance of this query without modifying it is to avoid hitting the table data by using covering indexes.
This will use a lot more memory, but hopefully, it will be faster because it can read all the values from the indexes (in memory) rather than hitting the disk. This implies that you have enough RAM to support having all of the indexes in memory and you've configured MySQL to use it.
You already have a covering index on users (see Using index in the EXPLAIN result). You want all three lines of the DERIVED query to say Using index in the Extra column.
Create the following additional covering index:
slk: (key_id, uid)
This one was already mentioned above, but I'm including it here again so you don't forget it:
node_users: (uid, type)
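For example (index names are illustrative; note that, per the question, idx_ctp on (uid, type) may already serve the second one):

ALTER TABLE slk  ADD INDEX idx_slk_keyid_uid (key_id, uid);
ALTER TABLE node ADD INDEX idx_node_uid_type (uid, type);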
You won't get breakthrough performance here because you're still having to do all of the JOINs, but you will get some improvement. Let us know how much faster it is. I'm guessing about twice as fast.
Related
I just updated my MySQL installation to version 5.7. A SELECT query with four INNER JOINs that previously took around 3 seconds to execute is now taking so long that I can't even keep track of it. A bit of profiling shows that the 'Sending data' step is taking too long. Can someone tell me what is going wrong? Here's some data. Note that the query is still running at this point in time:
+----------------------+-----------+
| Status | Duration |
+----------------------+-----------+
| starting | 0.001911 |
| checking permissions | 0.000013 |
| checking permissions | 0.000003 |
| checking permissions | 0.000003 |
| checking permissions | 0.000006 |
| Opening tables | 0.000030 |
| init | 0.000406 |
| System lock | 0.000018 |
| optimizing | 0.000019 |
| statistics | 0.000509 |
| preparing | 0.000052 |
| executing | 0.000004 |
| Sending data | 31.881794 |
| end | 0.000021 |
| query end | 0.003540 |
| closing tables | 0.000032 |
| freeing items | 0.000214 |
| cleaning up | 0.000028 |
+----------------------+-----------+
Here's the output of EXPLAIN:
+----+-------------+--------------------+------------+------+---------------+------------+---------+-------+---------+----------+----------------------------------------------------+
| id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
+----+-------------+--------------------+------------+------+---------------+------------+---------+-------+---------+----------+----------------------------------------------------+
| 1 | SIMPLE | movie_data_primary | NULL | ref | cinestopId | cinestopId | 26 | const | 1 | 100.00 | NULL |
| 1 | SIMPLE | mg | NULL | ALL | NULL | NULL | NULL | NULL | 387498 | 10.00 | Using where; Using join buffer (Block Nested Loop) |
| 1 | SIMPLE | crw | NULL | ALL | NULL | NULL | NULL | NULL | 1383452 | 10.00 | Using where; Using join buffer (Block Nested Loop) |
| 1 | SIMPLE | cst | NULL | ALL | NULL | NULL | NULL | NULL | 2184556 | 10.00 | Using where; Using join buffer (Block Nested Loop) |
+----+-------------+--------------------+------------+------+---------------+------------+---------+-------+---------+----------+----------------------------------------------------+
This looks like an indexing problem caused by upgrading your MySQL version.
The documentation says:
If you perform a binary upgrade without dumping and reloading tables, you cannot upgrade directly from MySQL 4.1 to 5.1 or higher. This occurs due to an incompatible change in the MyISAM table index format in MySQL 5.0. Upgrade from MySQL 4.1 to 5.0 and repair all MyISAM tables. Then upgrade from MySQL 5.0 to 5.1 and check and repair your tables.
Modifications to the handling of character sets or collations might change the character sort order, which causes the ordering of entries in any index that uses an affected character set or collation to be incorrect. Such changes result in several possible problems:
Comparison results that differ from previous results
Inability to find some index values due to misordered index entries
Misordered ORDER BY results
Tables that CHECK TABLE reports as being in need of repair
Check these links:
1) checking-table-incompatibilities
2) check-table
3) rebuilding-tables
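As a concrete first step, you can ask MySQL itself which tables need attention after the upgrade. A minimal sketch, using the table names from your EXPLAIN output:

CHECK TABLE mg, crw, cst FOR UPGRADE;
REPAIR TABLE mg;                 -- for MyISAM tables that the check flags
ALTER TABLE crw ENGINE = InnoDB; -- rebuilds an InnoDB table in place

From the command line, mysqlcheck --check-upgrade --all-databases --auto-repair performs the check and repair in one pass.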
I'm trying to understand why there is such a big performance difference between the two. This is the query that I run on both with no changes...
SELECT fs.person1, fs.person2, ls.artist
FROM friends AS fs
LEFT JOIN likes AS ls
ON fs.person2 = ls.person
WHERE NOT EXISTS
(select * from likes where fs.person1 = person and ls.artist = artist)
Both have the same data. It's one thing if it took 2-3 times as long, but going from 10 seconds to over 30 minutes is perplexing.
Data in each table...
likes = 3 INT columns and 750,000 rows
friends = 2 INT columns and 150,000 rows
I guessed at your table definition and tested with EXPLAIN to see how the optimizer would treat it.
By the way, when asking for query optimization help, always run SHOW CREATE TABLE and include the output, so we can see the table definition, your indexes, data types, and constraints. Also run EXPLAIN for the query and show that, as in the example below.
Here's what I get for the query when I use EXPLAIN:
+----+--------------------+-------+------------+------+---------------+------+---------+------+------+----------+----------------------------------------------------+
| id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
+----+--------------------+-------+------------+------+---------------+------+---------+------+------+----------+----------------------------------------------------+
| 1 | PRIMARY | fs | NULL | ALL | NULL | NULL | NULL | NULL | 1 | 100.00 | NULL |
| 1 | PRIMARY | ls | NULL | ALL | NULL | NULL | NULL | NULL | 1 | 100.00 | Using where; Using join buffer (Block Nested Loop) |
| 2 | DEPENDENT SUBQUERY | likes | NULL | ALL | NULL | NULL | NULL | NULL | 1 | 100.00 | Using where |
+----+--------------------+-------+------------+------+---------------+------+---------+------+------+----------+----------------------------------------------------+
A couple of red flags appear in that EXPLAIN report.
First, the fact that you have no indexes makes all three table references do table-scans (type: ALL). Since MySQL only does nested-loop joins, this means your query would have to do on the order of 150,000 x 750,000 x 750,000 row reads. No wonder it takes 30 minutes.
Second is the note about "Using join buffer (Block Nested Loop)", which says MySQL has to evaluate the join in batches because there is no index with which to do more targeted lookups.
Create an index:
ALTER TABLE likes ADD INDEX (person, artist);
Then the EXPLAIN looks better:
+----+--------------------+-------+------------+------+---------------+--------+---------+--------------------------------+------+----------+--------------------------+
| id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
+----+--------------------+-------+------------+------+---------------+--------+---------+--------------------------------+------+----------+--------------------------+
| 1 | PRIMARY | fs | NULL | ALL | NULL | NULL | NULL | NULL | 1 | 100.00 | NULL |
| 1 | PRIMARY | ls | NULL | ref | person | person | 5 | test.fs.person2 | 1 | 100.00 | Using where; Using index |
| 2 | DEPENDENT SUBQUERY | likes | NULL | ref | person | person | 10 | test.fs.person1,test.ls.artist | 1 | 100.00 | Using index |
+----+--------------------+-------+------------+------+---------------+--------+---------+--------------------------------+------+----------+--------------------------+
This eliminates two of the table-scans and the use of the join buffer.
But it still leaves another red flag: the DEPENDENT SUBQUERY. In general, MySQL runs dependent subqueries inefficiently, executing them once for each row of the outer query. So you're going to be executing the subquery thousands of times, even with the index lookup to help.
I use LEFT OUTER JOIN to implement anti-joins in MySQL. There's a thorough explanation here: https://explainextended.com/2009/09/18/not-in-vs-not-exists-vs-left-join-is-null-mysql/
SELECT fs.person1, fs.person2, ls1.artist
FROM friends AS fs
JOIN likes AS ls1
ON fs.person2 = ls1.person
LEFT OUTER JOIN likes AS ls2
ON fs.person1 = ls2.person AND ls1.artist = ls2.artist
WHERE ls2.person IS NULL;
Here's the EXPLAIN:
+----+-------------+-------+------------+------+---------------+--------+---------+---------------------------------+------+----------+--------------------------+
| id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
+----+-------------+-------+------------+------+---------------+--------+---------+---------------------------------+------+----------+--------------------------+
| 1 | SIMPLE | fs | NULL | ALL | NULL | NULL | NULL | NULL | 1 | 100.00 | Using where |
| 1 | SIMPLE | ls1 | NULL | ref | person | person | 5 | test.fs.person2 | 1 | 100.00 | Using index |
| 1 | SIMPLE | ls2 | NULL | ref | person | person | 10 | test.fs.person1,test.ls1.artist | 1 | 100.00 | Using where; Using index |
+----+-------------+-------+------------+------+---------------+--------+---------+---------------------------------+------+----------+--------------------------+
No more subquery at all, and the anti-join is resolved using a simple join with indexed lookups.
This should run much faster, assuming your indexes fit in the memory allocated for the buffer pool.
And any action takes way too much time - like altering the table's primary key or creating an index.
This makes me think you have not done any configuration of MySQL with respect to memory allocation. You probably have the default buffer pool size (128MB). This is something you should set relative to the available memory on your system. See https://www.percona.com/blog/2015/06/02/80-ram-tune-innodb_buffer_pool_size/
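A rough sketch (the 8G figure is an assumption; size it to your machine per the linked article). You would set this in my.cnf under the [mysqld] section, or at runtime on MySQL 5.7.5+, where the buffer pool became resizable online:

-- in my.cnf: innodb_buffer_pool_size = 8G
SET GLOBAL innodb_buffer_pool_size = 8589934592;  -- 8 GB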
You may also like to read https://www.percona.com/blog/2016/10/12/mysql-5-7-performance-tuning-immediately-after-installation/
From what I've read, Microsoft SQL Server automatically resizes its buffer pool and other memory from time to time, so it's not necessary to tune it manually.
Learning to tune configuration options is necessary on MySQL. They choose default tuning settings to help ensure MySQL can run on modest servers, because it wouldn't be very friendly for it to allocate 100GB of your server RAM by default, if you don't have that much physical memory, because it would cause swapping or crashing.
There has been some talk of making MySQL tune itself dynamically, but it's a very complex task. Maybe you don't want MySQL to use all the memory available on your system, because you run other processes too. It's hard to guess at the right automatic tuning values for everyone's server, and doing so might encourage people to avoid learning how to allocate and monitor their own system resources.
Is there any way to get better performance out of this?
select * from p_all where sec='0P00009S33' order by date desc
Query took 0.1578 sec.
Table structure is shown below. There are more than 100 million records in this table.
+------------------+---------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+------------------+---------------+------+-----+---------+-------+
| sec | varchar(10) | NO | PRI | NULL | |
| date | date | NO | PRI | NULL | |
| open | decimal(13,3) | NO | | NULL | |
| high | decimal(13,3) | NO | | NULL | |
| low | decimal(13,3) | NO | | NULL | |
| close | decimal(13,3) | NO | | NULL | |
| volume | decimal(13,3) | NO | | NULL | |
| unadjusted_close | decimal(13,3) | NO | | NULL | |
+------------------+---------------+------+-----+---------+-------+
EXPLAIN result
+----+-------------+-----------+------+---------------+---------+---------+-------+------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-----------+------+---------------+---------+---------+-------+------+-------------+
| 1 | SIMPLE | price_all | ref | PRIMARY | PRIMARY | 12 | const | 1731 | Using where |
+----+-------------+-----------+------+---------------+---------+---------+-------+------+-------------+
How can I speed up this query?
In your example, you do a SELECT *, but you only have an INDEX that contains the columns sec and date.
As a result, MySQL's execution plan roughly looks like the following:
(1) Find all rows that have sec = '0P00009S33' in the INDEX. This is fast.
(2) Sort all matching rows by date. This is also possibly fast, depending on the size of your sort buffer. There is possibly room for improvement here by optimizing sort_buffer_size.
(3) Fetch all columns (= the full row) for each row returned by the previous INDEX lookup. This is slow!
You can optimize this drastically by reducing the SELECTed fields to the minimum. Example: if you only need the open price, do only a SELECT sec, date, open instead of SELECT *.
Once you have identified the minimum columns you need to query, add a combined INDEX that contains exactly these columns (all columns involved in the WHERE, SELECT, or ORDER BY clauses).
This way you can completely skip the slow part of this query, step (3) in my example above. When the INDEX already contains all necessary columns, MySQL's optimizer can avoid looking up the full rows and serve your query directly from the INDEX.
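For example, if you only need the open price, a sketch would be (the index name is illustrative):

ALTER TABLE p_all ADD INDEX idx_sec_date_open (sec, date, open);

SELECT sec, date, open
FROM p_all
WHERE sec = '0P00009S33'
ORDER BY date DESC;

With that index, every referenced column lives in the index itself, so EXPLAIN should show Using index in the Extra column, and the ORDER BY can be satisfied by reading the index in reverse.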
Disclaimer: I'm unsure in which order MySQL executes the steps; possibly I ordered (2) and (3) the wrong way round. But that is not important for answering this question.
On the web page I'm working on, I need to show some statistics based on various user details that are spread across three tables. So I have the following query, which I then join to more tables:
SELECT *
FROM `user` `u`
LEFT JOIN `subscriptions` `s` ON `u`.`user_id` = `s`.`user_id`
LEFT JOIN `devices` `ud` ON `u`.`user_id` = `ud`.`user_id`
GROUP BY `u`.`user_id`
When I execute the query with LIMIT 1000 it takes about 0.05 seconds, and since I'm using the data from all three tables in a lot of queries, I've decided to put it inside a VIEW:
CREATE VIEW `user_details` AS ( the same query from above )
And now when I run:
SELECT * FROM user_details LIMIT 1000
it takes about 7-10 seconds.
So my question is: can I do something to optimize the view, since the query itself seems to be pretty quick, or should I use the whole query instead of the view?
Edit: this is what EXPLAIN SELECT * FROM user_details returns
+----+-------------+------------+--------+----------------+----------------+---------+------------------------+--------+-------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+------------+--------+----------------+----------------+---------+------------------------+--------+-------+
| 1 | PRIMARY | <derived2> | ALL | NULL | NULL | NULL | NULL | 322666 | |
| 2 | DERIVED | u | index | NULL | PRIMARY | 4 | NULL | 372587 | |
| 2 | DERIVED | s | eq_ref | PRIMARY | PRIMARY | 4 | db_users.u.user_id | 1 | |
| 2 | DERIVED | ud | ref | device_id_name | device_id_name | 4 | db_users.u.user_id | 1 | |
+----+-------------+------------+--------+----------------+----------------+---------+------------------------+--------+-------+
4 rows in set (8.67 sec)
This is what EXPLAIN returns for the query:
+----+-------------+-------+--------+----------------+----------------+---------+------------------------+--------+-------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-------+--------+----------------+----------------+---------+------------------------+--------+-------+
| 1 | SIMPLE | u | index | NULL | PRIMARY | 4 | NULL | 372587 | |
| 1 | SIMPLE | s | eq_ref | PRIMARY | PRIMARY | 4 | db_users.u.user_id | 1 | |
| 1 | SIMPLE | ud | ref | device_id_name | device_id_name | 4 | db_users.u.user_id | 1 | |
+----+-------------+-------+--------+----------------+----------------+---------+------------------------+--------+-------+
3 rows in set (0.00 sec)
Views and joins are extremely bad when it comes to performance. This is more or less true for all relational database management systems. It sounds strange, since that is what those systems are designed for, but it is true nevertheless.
Try to avoid the joins if this query is in heavy use on your page: instead, create a real table (not a view) that is filled from the three tables. You can automate that process using triggers, so that each time an entry is inserted into one of the original tables, the trigger takes care of propagating the data to the physical user_details table.
This strategy certainly means a one-time investment in setup, but you will definitely get much better performance.
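A minimal sketch of that approach, assuming a hypothetical plan_name column on subscriptions (adapt the column lists to your real schema, and add matching UPDATE/DELETE triggers plus triggers on the devices table):

CREATE TABLE user_details_mat (
  user_id   INT PRIMARY KEY,
  plan_name VARCHAR(64) NULL  -- hypothetical column copied from subscriptions
);

DELIMITER //
CREATE TRIGGER subscriptions_after_insert AFTER INSERT ON subscriptions
FOR EACH ROW
BEGIN
  -- NEW.plan_name is a hypothetical column name
  INSERT INTO user_details_mat (user_id, plan_name)
  VALUES (NEW.user_id, NEW.plan_name)
  ON DUPLICATE KEY UPDATE plan_name = NEW.plan_name;
END//
DELIMITER ;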
The query I have is for a table of inventory. What the subquery join does is get the total number of work orders for each inventory asset. If I run the base query with just the main joins for equipment type, vendor, location, and room, it runs just fine: less than a second to return a result. With the subquery join, it takes 15 to 20 seconds to return a result.
Here is the full query:
SELECT `inventory`.inventory_id AS 'inventory_id',
`inventory`.media_tag AS 'media_tag',
`inventory`.asset_tag AS 'asset_tag',
`inventory`.idea_tag AS 'idea_tag',
`equipTypes`.equipment_type AS 'equipment_type',
`inventory`.equip_make AS 'equip_make',
`inventory`.equip_model AS 'equip_model',
`inventory`.equip_serial AS 'equip_serial',
`inventory`.sales_order AS 'sales_order',
`vendors`.vendor_name AS 'vendor_name',
`inventory`.purchase_order AS 'purchase_order',
`status`.status AS 'status',
`locations`.location_name AS 'location_name',
`rooms`.room_number AS 'room_number',
`inventory`.notes AS 'notes',
`inventory`.send_to AS 'send_to',
`inventory`.one_to_one AS 'one_to_one',
`enteredBy`.user_name AS 'user_name',
from_unixtime(`inventory`.enter_date, '%m/%d/%Y') AS 'enter_date',
from_unixtime(`inventory`.modified_date, '%m/%d/%Y') AS 'modified_date',
COALESCE(at.assets,0) AS assets
FROM mod_inventory_data AS `inventory`
LEFT JOIN mod_inventory_equip_types AS `equipTypes`
ON `equipTypes`.equip_type_id = `inventory`.equip_type_id
LEFT JOIN mod_vendors_main AS `vendors`
ON `vendors`.vendor_id = `inventory`.vendor_id
LEFT JOIN mod_inventory_status AS `status`
ON `status`.status_id = `inventory`.status_id
LEFT JOIN mod_locations_data AS `locations`
ON `locations`.location_id = `inventory`.location_id
LEFT JOIN mod_locations_rooms AS `rooms`
ON `rooms`.room_id = `inventory`.room_id
LEFT JOIN mod_users_data AS `enteredBy`
ON `enteredBy`.user_id = `inventory`.entered_by
LEFT JOIN
( SELECT asset_tag, count(*) AS assets
FROM mod_workorder_data
WHERE asset_tag IS NOT NULL
GROUP BY asset_tag ) AS at
ON at.asset_tag = inventory.asset_tag
ORDER BY inventory_id ASC LIMIT 0,20
The MySQL EXPLAIN data for this is here:
+----+-------------+--------------------+--------+---------------+-----------+---------+-------------------------------------+-------+---------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+--------------------+--------+---------------+-----------+---------+-------------------------------------+-------+---------------------------------+
| 1 | PRIMARY | inventory | ALL | NULL | NULL | NULL | NULL | 12612 | Using temporary; Using filesort |
| 1 | PRIMARY | equipTypes | eq_ref | PRIMARY | PRIMARY | 4 | spsd_woidbs.inventory.equip_type_id | 1 | |
| 1 | PRIMARY | vendors | eq_ref | PRIMARY | PRIMARY | 4 | spsd_woidbs.inventory.vendor_id | 1 | |
| 1 | PRIMARY | status | eq_ref | PRIMARY | PRIMARY | 4 | spsd_woidbs.inventory.status_id | 1 | |
| 1 | PRIMARY | locations | eq_ref | PRIMARY | PRIMARY | 4 | spsd_woidbs.inventory.location_id | 1 | |
| 1 | PRIMARY | rooms | eq_ref | PRIMARY | PRIMARY | 4 | spsd_woidbs.inventory.room_id | 1 | |
| 1 | PRIMARY | enteredBy | eq_ref | PRIMARY | PRIMARY | 4 | spsd_woidbs.inventory.entered_by | 1 | |
| 1 | PRIMARY | <derived2> | ALL | NULL | NULL | NULL | NULL | 4480 | |
| 2 | DERIVED | mod_workorder_data | range | asset_tag | asset_tag | 13 | NULL | 15897 | Using where; Using index |
+----+-------------+--------------------+--------+---------------+-----------+---------+-------------------------------------+-------+---------------------------------+
Using MySQL query profiling, I get this:
+--------------------------------+------------+
| Status | Time |
+--------------------------------+------------+
| starting | 0.000020 |
| checking query cache for query | 0.000263 |
| Opening tables | 0.000034 |
| System lock | 0.000013 |
| Table lock | 0.000079 |
| optimizing | 0.000011 |
| statistics | 0.000138 |
| preparing | 0.000019 |
| executing | 0.000010 |
| Sorting result | 0.000004 |
| Sending data | 0.015103 |
| init | 0.000094 |
| optimizing | 0.000009 |
| statistics | 0.000049 |
| preparing | 0.000022 |
| Creating tmp table | 0.000104 |
| executing | 0.000009 |
| Copying to tmp table | 15.410168 |
| Sorting result | 0.009488 |
| Sending data | 0.000215 |
| end | 0.000006 |
| removing tmp table | 0.001997 |
| end | 0.000018 |
| query end | 0.000005 |
| freeing items | 0.000112 |
| storing result in query cache | 0.000011 |
| removing tmp table | 0.000022 |
| closing tables | 0.000036 |
| logging slow query | 0.000005 |
| logging slow query | 0.000005 |
| cleaning up | 0.000013 |
+--------------------------------+------------+
which shows me that the bottleneck is copying to the temp table, but I am unsure how to speed this up. Are there settings on the server end that I can configure to make this faster? Are there changes to the existing query that would yield the same results but run faster?
It seems to me that the LEFT JOIN subquery would give the same resulting data every time, so if MySQL has to run that query for every row in the inventory list, I can see why it would be slow. Or does MySQL cache the subquery when it runs? I thought I read somewhere that MySQL does not cache subqueries; is this true?
Any help is appreciated.
Here is what I did, which seems to be working well. I created a table called mod_workorder_counts. The table has two fields: asset_tag, which is unique, and wo_count, which is an INT(3) field.
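A sketch of the table definition, with assumed column types (the key_len of 13 in the EXPLAIN above suggests a short, nullable VARCHAR for asset_tag in the source table):

CREATE TABLE mod_workorder_counts (
  asset_tag VARCHAR(10) NOT NULL,  -- assumed type; should match mod_workorder_data.asset_tag
  wo_count  INT(3) NOT NULL DEFAULT 0,
  PRIMARY KEY (asset_tag)          -- the unique key that ON DUPLICATE KEY UPDATE relies on
);

I am populating that table with this query: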
INSERT INTO mod_workorder_counts ( asset_tag, wo_count )
select s.asset_tag, ct
FROM
( SELECT t.asset_tag, count(*) as ct
FROM mod_workorder_data t
WHERE t.asset_tag IS NOT NULL
GROUP BY t.asset_tag
) as s
ON DUPLICATE KEY UPDATE mod_workorder_counts.wo_count = ct
which executed in 0.1580 seconds; that may be considered slightly slow, but it's not bad.
Now when I run this modification of my original query:
SELECT `inventory`.inventory_id AS 'inventory_id',
`inventory`.media_tag AS 'media_tag',
`inventory`.asset_tag AS 'asset_tag',
`inventory`.idea_tag AS 'idea_tag',
`equipTypes`.equipment_type AS 'equipment_type',
`inventory`.equip_make AS 'equip_make',
`inventory`.equip_model AS 'equip_model',
`inventory`.equip_serial AS 'equip_serial',
`inventory`.sales_order AS 'sales_order',
`vendors`.vendor_name AS 'vendor_name',
`inventory`.purchase_order AS 'purchase_order',
`status`.status AS 'status',
`locations`.location_name AS 'location_name',
`rooms`.room_number AS 'room_number',
`inventory`.notes AS 'notes',
`inventory`.send_to AS 'send_to',
`inventory`.one_to_one AS 'one_to_one',
`enteredBy`.user_name AS 'user_name',
from_unixtime(`inventory`.enter_date, '%m/%d/%Y') AS 'enter_date',
from_unixtime(`inventory`.modified_date, '%m/%d/%Y') AS 'modified_date',
COALESCE(at.wo_count, 0) AS workorders
FROM mod_inventory_data AS `inventory`
LEFT JOIN mod_inventory_equip_types AS `equipTypes`
ON `equipTypes`.equip_type_id = `inventory`.equip_type_id
LEFT JOIN mod_vendors_main AS `vendors`
ON `vendors`.vendor_id = `inventory`.vendor_id
LEFT JOIN mod_inventory_status AS `status`
ON `status`.status_id = `inventory`.status_id
LEFT JOIN mod_locations_data AS `locations`
ON `locations`.location_id = `inventory`.location_id
LEFT JOIN mod_locations_rooms AS `rooms`
ON `rooms`.room_id = `inventory`.room_id
LEFT JOIN mod_users_data AS `enteredBy`
ON `enteredBy`.user_id = `inventory`.entered_by
LEFT JOIN mod_workorder_counts AS at
ON at.asset_tag = inventory.asset_tag
ORDER BY inventory_id ASC LIMIT 0,20
It executes in 0.0051 seconds. That puts the total for the two queries at 0.1631 seconds, which is about a tenth of a second, versus 15+ seconds with the original subquery.
If I just included the wo_count field without using COALESCE, I got NULL values for any asset tags that were not listed in the mod_workorder_counts table. The COALESCE gives me a 0 for any NULL value, which is what I want.
Now I will set it up so that when a work order is entered for an asset tag, the INSERT/UPDATE query for the counts table runs at that time, so it doesn't run unnecessarily.
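One way to keep the counts current at the database level is a trigger; here is a sketch for the insert case (deletes or changes to asset_tag would need matching triggers):

DELIMITER //
CREATE TRIGGER workorder_counts_ai AFTER INSERT ON mod_workorder_data
FOR EACH ROW
BEGIN
  IF NEW.asset_tag IS NOT NULL THEN
    INSERT INTO mod_workorder_counts (asset_tag, wo_count)
    VALUES (NEW.asset_tag, 1)
    ON DUPLICATE KEY UPDATE wo_count = wo_count + 1;
  END IF;
END//
DELIMITER ;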