How to optimize this simple MySQL query

This query is taking 450 ms:
SELECT `u`.`user_id`, `c`.`company`
FROM `users` AS `u`
LEFT JOIN `companies` AS `c` ON `c`.`user_id` = `u`.`user_id`
WHERE `u`.`user_id` = 'search_term'
OR `u`.`lname` LIKE 'search_term%'
OR `u`.`email` LIKE 'search_term%'
OR `c`.`company` LIKE 'search_termeo%'
tables:
users (260250 rows)
companies (570 rows)
structures:
- users:
CREATE TABLE `users` (
`user_id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`region_id` int(10) unsigned NOT NULL,
`fname` varchar(30) COLLATE utf8mb4_unicode_ci DEFAULT NULL,
`lname` varchar(30) COLLATE utf8mb4_unicode_ci DEFAULT NULL,
`email` varchar(50) COLLATE utf8mb4_unicode_ci NOT NULL,
`password` varchar(100) COLLATE utf8mb4_unicode_ci DEFAULT NULL,
`phone` varchar(20) COLLATE utf8mb4_unicode_ci DEFAULT NULL,
`active` tinyint(1) NOT NULL DEFAULT '0',
PRIMARY KEY (`user_id`),
KEY `idx_lname` (`lname`),
KEY `idx_email` (`email`),
UNIQUE KEY `unq_region_id_email` (`region_id`, `email`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;
- companies:
CREATE TABLE `companies` (
`user_id` int(10) unsigned NOT NULL,
`company` varchar(35) COLLATE utf8mb4_unicode_ci NOT NULL,
`vat_num` varchar(20) COLLATE utf8mb4_unicode_ci DEFAULT NULL,
PRIMARY KEY (`user_id`),
KEY `idx_company` (`company`) USING BTREE,
CONSTRAINT `users_companies_ibfk_1` FOREIGN KEY (`user_id`) REFERENCES `users` (`user_id`) ON DELETE CASCADE ON UPDATE CASCADE
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;
The result of the EXPLAIN was provided as an image (not reproduced here).
I think 450 ms is too much for such a query on such a small amount of data, and I want to know if there is something to optimize.
The query was run in Querious v3 on an iMac 2017 (3.4 GHz, 16 GB RAM).
MySQL 5.7.26 on MAMP Pro v5.7.

OR conditions, when not on the same field, or range-based conditions (such as <, >, LIKE) really decrease MySQL's ability to take advantage of indexes. You can restructure such queries by breaking them down into separate, simpler ones that you then UNION. Separating it out like this allows MySQL to use a different index for each query within the UNION:
SELECT `u`.`user_id`, `c`.`company`
FROM `users` AS `u` LEFT JOIN `companies` AS `c` ON `c`.`user_id` = `u`.`user_id`
WHERE `u`.`user_id` = 'search_term'
UNION DISTINCT
SELECT `u`.`user_id`, `c`.`company`
FROM `users` AS `u` LEFT JOIN `companies` AS `c` ON `c`.`user_id` = `u`.`user_id`
WHERE `u`.`lname` LIKE 'search_term%'
UNION DISTINCT
SELECT `u`.`user_id`, `c`.`company`
FROM `users` AS `u` LEFT JOIN `companies` AS `c` ON `c`.`user_id` = `u`.`user_id`
WHERE `u`.`email` LIKE 'search_term%'
UNION DISTINCT
SELECT `u`.`user_id`, `c`.`company`
FROM `users` AS `u` INNER JOIN `companies` AS `c` ON `c`.`user_id` = `u`.`user_id`
WHERE `c`.`company` LIKE 'search_termeo%'
;
Also, note that I changed the last one's JOIN to an INNER since any condition on the right-hand table of a LEFT JOIN (that isn't "without a match from that table") is basically an INNER JOIN anyway.
UNION DISTINCT is used to prevent records that satisfied multiple conditions from being repeated. However, if companies.company is not unique (e.g. company 1 called "Blah" and company 12 also called "Blah"), those rows will be merged where they would not have been in your original query; if that is a potential issue, it can be remedied by also including the company's key column in each SELECT (in this schema, companies.user_id).
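A minimal sketch of that remedy, shown for the last branch only (the company_key alias is illustrative; the same extra column would go in all four SELECTs so the UNION deduplicates on the key rather than on a possibly repeated name):
SELECT `u`.`user_id`, `c`.`user_id` AS `company_key`, `c`.`company`
FROM `users` AS `u` INNER JOIN `companies` AS `c` ON `c`.`user_id` = `u`.`user_id`
WHERE `c`.`company` LIKE 'search_termeo%'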

Related

Slow MySQL with join query written in Laravel/Eloquent for Yajra DataTables

I am trying to pull the data from two tables by joining them, but the SQL query executed is dead slow. The idea is to pull all users and then combine them with the created_at date from the points table. Pulling all users or all points is rather quick, but I am having problems writing a proper join SQL. I did try to add indexes to appropriate columns (points.created_at for example), but those made the query even slower.
This is the code that generates the query:
return $this->user
->query()
->select(['users.id', 'users.email', 'users.role', 'users.created_at', 'users.updated_at', 'pt.created_at AS last_transaction'])
->leftJoin(DB::raw('(SELECT points.user_id, points.created_at FROM points ORDER BY points.created_at DESC) AS pt'), 'pt.user_id', '=', 'users.id')
->where('users.role', 'consument')
->groupBy('users.id');
Which generates this query:
select `users`.`id`, `users`.`email`, `users`.`role`, `users`.`created_at`, `users`.`updated_at`, `pt`.`created_at` as `last_transaction`
from `users`
left join (SELECT points.user_id, points.created_at FROM points ORDER BY points.created_at DESC) AS pt on `pt`.`user_id` = `users`.`id`
where `users`.`role` = ? and `users`.`deleted_at` is null
group by `users`.`id`
order by `id` asc
Users table:
CREATE TABLE `users` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`email` varchar(255) COLLATE utf8_unicode_ci DEFAULT NULL,
`password` varchar(255) COLLATE utf8_unicode_ci DEFAULT NULL,
`remember_token` varchar(100) COLLATE utf8_unicode_ci DEFAULT NULL,
`role` varchar(15) COLLATE utf8_unicode_ci DEFAULT 'consument',
`created_at` timestamp NOT NULL DEFAULT current_timestamp(),
`updated_at` timestamp NOT NULL DEFAULT current_timestamp(),
`deleted_at` timestamp NULL DEFAULT NULL,
`email_verified_at` varchar(255) COLLATE utf8_unicode_ci DEFAULT NULL,
`email_verify_token` text COLLATE utf8_unicode_ci DEFAULT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `users_email_unique` (`email`)
) ENGINE=InnoDB AUTO_INCREMENT=84345 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci
Points table:
CREATE TABLE `points` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`user_id` int(10) unsigned NOT NULL,
`tablet_id` int(10) unsigned DEFAULT NULL,
`parent_company` int(10) unsigned NOT NULL,
`company_id` int(10) unsigned NOT NULL,
`points` int(10) unsigned NOT NULL,
`mutation_type` tinyint(3) unsigned NOT NULL,
`created_at` timestamp NOT NULL DEFAULT current_timestamp(),
`updated_at` timestamp NOT NULL DEFAULT current_timestamp(),
PRIMARY KEY (`id`),
KEY `points_user_id_foreign` (`user_id`),
KEY `points_company_id_foreign` (`company_id`),
KEY `points_parent_company_index` (`parent_company`),
KEY `points_tablet_id_index` (`tablet_id`),
KEY `points_mutation_type_company_id_created_at_index` (`mutation_type`,`company_id`,`created_at`),
KEY `created_at_user_id` (`created_at`,`user_id`),
CONSTRAINT `points_company_id_foreign` FOREIGN KEY (`company_id`) REFERENCES `companies` (`id`) ON DELETE CASCADE ON UPDATE CASCADE,
CONSTRAINT `points_parent_company_foreign` FOREIGN KEY (`parent_company`) REFERENCES `parent_company` (`id`) ON DELETE CASCADE ON UPDATE CASCADE,
CONSTRAINT `points_tablet_id_foreign` FOREIGN KEY (`tablet_id`) REFERENCES `tablets` (`id`) ON DELETE SET NULL ON UPDATE CASCADE,
CONSTRAINT `points_user_id_foreign` FOREIGN KEY (`user_id`) REFERENCES `users` (`id`) ON DELETE CASCADE ON UPDATE CASCADE
) ENGINE=InnoDB AUTO_INCREMENT=1798627 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci
Users has 84,263 rows and points has 1,636,119 rows. If I execute the query manually through phpMyAdmin, it takes about 150 seconds; if I run it through Laravel, the page times out after 180 seconds.
I can add or remove indexes and change the sql query, but I can't change the database structure, so any help with optimized sql query would be greatly appreciated.
If there is only one row per user in the points table, you can do the following.
If there are users that do not have a row in points, you could use:
select `users`.`id`, `users`.`email`, `users`.`role`, `users`.`created_at`,
`users`.`updated_at`, `pt`.`created_at` as `last_transaction`
from `users`
left join points AS pt on `pt`.`user_id` = `users`.`id`
where `users`.`role` = ? and `users`.`deleted_at` is null
order by `id` ASC
If every user in the users table always has a row in the points table, you could skip the LEFT JOIN and just use:
select `users`.`id`, `users`.`email`, `users`.`role`, `users`.`created_at`,
`users`.`updated_at`, `pt`.`created_at` as `last_transaction`
from `users`
join points AS pt on `pt`.`user_id` = `users`.`id`
where `users`.`role` = ? and `users`.`deleted_at` is null
order by `id` asc
The following will return one row per user with the latest points row, based on the points column created_at:
SELECT `u`.`id`,
`u`.`email`,
`u`.`role`,
`u`.`created_at`,
`u`.`updated_at`,
`pt`.`created_at` as `last_transaction`
from `users` u
LEFT join points AS pt on `pt`.`user_id` = `u`.`id`
LEFT JOIN (
SELECT user_id, MAX(created_at) AS mm FROM points GROUP BY user_id
) AS m ON m.user_id = pt.user_id
where `u`.`role` = ? and `u`.`deleted_at` is NULL AND m.mm = pt.created_at
order by `id` ASC;
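Since the asker can add indexes, a composite index on (user_id, created_at) would support both the join and the GROUP BY user_id / MAX(created_at) derived table; a sketch, with an illustrative index name (the existing created_at_user_id index has the columns in the wrong order for this):
-- lets the MAX(created_at) per user be read straight off the index
CREATE INDEX `points_user_id_created_at` ON `points` (`user_id`, `created_at`);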

How can I optimize a MySQL COUNT() query with a many-to-many relationship (through a 3rd table)

Here is the query that I execute:
SELECT COUNT(*)
FROM (SELECT `p`.*
FROM `shop_products` `p` LEFT JOIN
`shop_tag_assignments`
ON `p`.`id` = `shop_tag_assignments`.`product_id` LEFT JOIN
`shop_tags`
ON `shop_tag_assignments`.`tag_id` = `shop_tags`.`id`
WHERE `p`.`status`=1
GROUP BY `p`.`id`
) `c`
This query takes about 300 milliseconds (I think it's too long):
The EXPLAIN output was provided as an image (not reproduced here).
DB tables dump:
1k records in shop_tags table
CREATE TABLE `shop_tags` (
`id` int(11) NOT NULL,
`name` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
`slug` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
`label` text COLLATE utf8_unicode_ci NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;
ALTER TABLE `shop_tags`
ADD PRIMARY KEY (`id`),
ADD UNIQUE KEY `idx-shop_tags-slug` (`slug`),
ADD KEY `id` (`id`);
ALTER TABLE `shop_tags`
MODIFY `id` int(11) NOT NULL AUTO_INCREMENT, AUTO_INCREMENT=1162;
COMMIT;
Table structure for shop_tag_assignments:
224k records in shop_tag_assignments table
CREATE TABLE `shop_tag_assignments` (
`product_id` int(11) NOT NULL,
`tag_id` int(11) NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
ALTER TABLE `shop_tag_assignments`
ADD PRIMARY KEY (`product_id`,`tag_id`),
ADD KEY `idx-shop_tag_assignments-product_id` (`product_id`),
ADD KEY `idx-shop_tag_assignments-tag_id` (`tag_id`),
ADD KEY `_index_name` (`product_id`,`tag_id`),
ADD KEY `__index_name` (`tag_id`,`product_id`);
ALTER TABLE `shop_tag_assignments`
ADD CONSTRAINT `fk-shop_tag_assignments-product_id` FOREIGN KEY (`product_id`) REFERENCES `shop_products` (`id`) ON DELETE CASCADE,
ADD CONSTRAINT `fk-shop_tag_assignments-tag_id` FOREIGN KEY (`tag_id`) REFERENCES `shop_tags` (`id`) ON DELETE CASCADE;
COMMIT;
Mysql version:
5.7.16-10-log
Get rid of the subquery. And don't use backticks so much:
SELECT COUNT(DISTINCT p.id)
FROM shop_products p LEFT JOIN
shop_tag_assignments sta
ON p.id = sta.product_id LEFT JOIN
shop_tags st
ON sta.tag_id = st.id
WHERE p.status = 1;
For this query, you want indexes on shop_products(status, id).
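A sketch of that index, assuming shop_products has the id and status columns used in the query (the index name is illustrative):
-- status first for the equality filter; id makes it a covering index for the count
ALTER TABLE shop_products ADD INDEX idx_status_id (status, id);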
Next, your query is counting the number of rows returned by the inner subquery after aggregation by p.id. You are using LEFT JOIN, so nothing is being filtered out. That means that the joins are really redundant. I think this does the same thing:
SELECT COUNT(*)
FROM shop_products p
WHERE p.status = 1;
This should be much faster than your version... however, your version is pretty fast so you might not notice a really big difference.

How to order by a calculated column in MySQL

It is very slow to order by a calculated column, like this one:
select
`users`.`id`,
`username`,
`about_me`,
(
select
count(*)
from
`interests`
inner join `users_interests` on `interests`.`id` = `users_interests`.`interest_id`
where
`users`.`id` = `users_interests`.`user_id`
and `interest_id` in (1, 2, 4) -- these are the ids of interests the authenticated user has chosen
) as `interests_count`
from
`users`
order by
`interests_count` desc
So in this query I am ordering users by the interests that best suit the authenticated user. It is very slow on large tables. Any ideas what an alternative, much faster solution would be? Thanks in advance.
EDIT
Users Table
CREATE TABLE `users` (
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
`roles_id` bigint(20) unsigned NOT NULL DEFAULT 2,
`username` varchar(100) COLLATE utf8mb4_unicode_ci NOT NULL,
`about_me` varchar(70) COLLATE utf8mb4_unicode_ci DEFAULT NULL,
`email` varchar(100) COLLATE utf8mb4_unicode_ci DEFAULT NULL,
`status` tinyint(1) NOT NULL DEFAULT 0,
`email_verified_at` timestamp NULL DEFAULT NULL,
`password` varchar(191) COLLATE utf8mb4_unicode_ci NOT NULL,
`created_at` timestamp NULL DEFAULT NULL,
`updated_at` timestamp NULL DEFAULT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `users_username_unique` (`username`),
UNIQUE KEY `users_email_unique` (`email`),
KEY `users_roles_id_foreign` (`roles_id`),
CONSTRAINT `users_roles_id_foreign` FOREIGN KEY (`roles_id`) REFERENCES
`user_roles` (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=50102 DEFAULT CHARSET=utf8mb4
COLLATE=utf8mb4_unicode_ci
Interests Table
CREATE TABLE `interests` (
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
`name` varchar(191) COLLATE utf8mb4_unicode_ci NOT NULL,
`interest_category_id` bigint(20) unsigned NOT NULL,
`created_at` timestamp NULL DEFAULT NULL,
`updated_at` timestamp NULL DEFAULT NULL,
PRIMARY KEY (`id`),
KEY `interests_interest_category_id_foreign` (`interest_category_id`),
CONSTRAINT `interests_interest_category_id_foreign` FOREIGN KEY
(`interest_category_id`) REFERENCES `interests_categories` (`id`) ON DELETE
CASCADE
) ENGINE=InnoDB AUTO_INCREMENT=61 DEFAULT CHARSET=utf8mb4
COLLATE=utf8mb4_unicode_ci
Pivot table between Users and Interests
CREATE TABLE `users_interests` (
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
`user_id` bigint(20) unsigned NOT NULL,
`interest_id` bigint(20) unsigned NOT NULL,
PRIMARY KEY (`id`),
KEY `users_interests_user_id_foreign` (`user_id`),
KEY `users_interests_interest_id_foreign` (`interest_id`),
CONSTRAINT `users_interests_interest_id_foreign` FOREIGN KEY (`interest_id`) REFERENCES `interests` (`id`) ON DELETE CASCADE,
CONSTRAINT `users_interests_user_id_foreign` FOREIGN KEY (`user_id`) REFERENCES `users` (`id`) ON DELETE CASCADE
) ENGINE=InnoDB AUTO_INCREMENT=251552 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci
Without test data it's difficult to know if this would be faster than your current query, but you might find the use of a Common Table Expression (a.k.a. WITH clause, available in MySQL 8.0 and later) to be helpful here:
with cteCount AS (select ui.`user_id`,
count(*) as `interests_count`
from `interests` i
inner join `users_interests` ui
on i.`id` = ui.`interest_id`
where ui.`interest_id` in (1, 2, 4)
group by ui.`user_id`)
select u.`id`,
`username`,
`about_me`,
cc.`interests_count`
from `users` u
inner join cteCount cc
on cc.`user_id` = u.`id`
order by
cc.`interests_count` desc
The performance of this query will be affected by the presence or absence of indexes on the appropriate columns. Just glancing at it I'd say you should have the following indexes available:
- interests: id
- users_interests: interest_id
- users: id
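Those three all exist already in the DDL above (as primary keys or foreign-key indexes). A worthwhile addition is a composite index on users_interests, so the filtered count can be resolved from the index alone; a sketch with an illustrative name:
-- range scan on interest_id, with user_id available for grouping without touching the table
CREATE INDEX `idx_users_interests_interest_user` ON `users_interests` (`interest_id`, `user_id`);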
Try to directly join and aggregate.
SELECT u.id,
u.username,
u.about_me,
count(ui.interest_id) interests_count
FROM users u
LEFT JOIN users_interests ui
ON ui.user_id = u.id
AND ui.interest_id IN (1, 2, 4)
GROUP BY u.id,
u.username,
u.about_me
ORDER BY interests_count DESC;
Additionally create an index supporting the aggregation.
CREATE INDEX users_id_username_about_me
ON users
(id,
username,
about_me);

MySQL Query with Subqueries taking longer than it should

I've been trying to find the cause of the slowdown in this query. The query is originally a DELETE query, but I've been testing with a SELECT * instead.
This is the query in question:
SELECT * FROM table1
where table1.id IN (
# Per a friend's suggestion I wrapped the subquery in another subquery (yo dawg) to "cache" it; it works on other queries, but not this time.
SELECT id FROM (
(
SELECT id FROM (
SELECT table1.id FROM table1
LEFT JOIN table2 ON table2.id = table1.salesperson_id
LEFT JOIN table3 ON table3.id = table2.user_id
LEFT JOIN table4 ON table3.office_id = table4.id
WHERE table1.type = "Snapshot"
AND table4.id = 25 OR table4.parent_id =25
LIMIT 500
) AS ids )
) AS moreIds
)
The table in question is 16 GB.
The server it's being run against is beefy enough not to be a bottleneck.
Fields id, salesperson_id and type are all indexed. Checked it 5 times.
The subquery itself runs extremely fast. Subquery:
SELECT id FROM (
SELECT table1.id FROM table1
LEFT JOIN table2 ON table2.id = table1.salesperson_id
LEFT JOIN table3 ON table3.id = table2.user_id
LEFT JOIN table4 ON table3.office_id = table4.id
WHERE table1.type = "Snapshot"
AND table4.id = 25 OR table4.parent_id =25
LIMIT 500
) AS ids
In the processlist the query is stuck in the "Sending data" state, but Workbench indicates that the query is still running.
Here's an EXPLAIN SELECT of the query
'1', 'PRIMARY', 'table1', 'index', NULL, 'SALES_FK_ON_SALES_STATE', '5', NULL, '36688459', 'Using where; Using index'
'2', 'DEPENDENT SUBQUERY', '<derived3>', 'ALL', NULL, NULL, NULL, NULL, '500', 'Using where'
'3', 'DERIVED', '<derived4>', 'ALL', NULL, NULL, NULL, NULL, '500', ''
'4', 'DERIVED', 'table4', 'index_merge', 'PRIMARY,IDX_9F61CEFC727ACA70', 'PRIMARY,IDX_9F61CEFC727ACA70', '4,5', NULL, '67', 'Using union(PRIMARY,IDX_9F61CEFC727ACA70); Using where; Using index'
'4', 'DERIVED', 'table3', 'ref', 'PRIMARY,IDX_C077730FFFA0C224', 'IDX_C077730FFFA0C224', '5', 'hugeDb.table4.id', '381', 'Using where; Using index'
'4', 'DERIVED', 'table2', 'ref', 'PRIMARY,UNIQ_36E3BDB1A76ED395', 'UNIQ_36E3BDB1A76ED395', '5', 'hugeDb.table3.id', '1', 'Using where; Using index'
'4', 'DERIVED', 'table1', 'ref', 'SALESPERSON,SALES_FK_ON_SALES_STATE', 'SALES_FK_ON_SALES_STATE', '5', 'hugeDb.table2.id', '115', 'Using where'
Here are the SHOW CREATE TABLE statements:
CREATE TABLE `table4` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`logo_file_id` int(11) DEFAULT NULL,
`contact_address_id` int(11) DEFAULT NULL,
`billing_address_id` int(11) DEFAULT NULL,
`parent_id` int(11) DEFAULT NULL,
`name` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
`url` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
`fax` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
`contact_name` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
`active` tinyint(1) NOT NULL,
`date_modified` datetime DEFAULT NULL,
`date_created` datetime NOT NULL,
`license_number` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
`list_name` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
`email` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
`routing_address_id` int(11) DEFAULT NULL,
`billed_separately` tinyint(1) DEFAULT '0',
PRIMARY KEY (`id`),
UNIQUE KEY `UNIQ_9F61CEFCA7E1931C` (`logo_file_id`),
KEY `IDX_9F61CEFC320EF6E2` (`contact_address_id`),
KEY `IDX_9F61CEFC79D0C0E4` (`billing_address_id`),
KEY `IDX_9F61CEFC727ACA70` (`parent_id`),
KEY `IDX_9F61CEFC40F0487C` (`routing_address_id`),
-- CONSTRAINT `FK_9F61CEFC320EF6E2` FOREIGN KEY (`contact_address_id`) REFERENCES `other_irrelevant_table` (`id`),
-- CONSTRAINT `FK_9F61CEFC79D0C0E4` FOREIGN KEY (`billing_address_id`) REFERENCES `other_irrelevant_table` (`id`),
-- CONSTRAINT `FK_9F61CEFCA7E1931C` FOREIGN KEY (`logo_file_id`) REFERENCES `other_irrelevant_table` (`id`),
-- CONSTRAINT `FK_9F61CEFCE346079F` FOREIGN KEY (`routing_address_id`) REFERENCES `other_irrelevant_table` (`id`),
CONSTRAINT `FK_9F61CEFC727ACA70` FOREIGN KEY (`parent_id`) REFERENCES `table4` (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=750 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;
CREATE TABLE `table3` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`office_id` int(11) DEFAULT NULL,
`user_id` int(11) DEFAULT NULL,
`active` tinyint(1) NOT NULL,
`date_modified` datetime DEFAULT NULL,
`date_created` datetime NOT NULL,
`profile_id` int(11) DEFAULT NULL,
`deleted` tinyint(1) NOT NULL,
PRIMARY KEY (`id`),
KEY `IDX_C077730FFFA0C224` (`office_id`),
KEY `IDX_C077730FA76ED395` (`user_id`),
KEY `IDX_C077730FCCFA12B8` (`profile_id`),
-- CONSTRAINT `FK_C077730FA76ED395` FOREIGN KEY (`user_id`) REFERENCES `other_irrelevant_table` (`id`),
-- CONSTRAINT `FK_C077730FCCFA12B8` FOREIGN KEY (`profile_id`) REFERENCES `other_irrelevant_table` (`id`),
CONSTRAINT `FK_C077730FFFA0C224` FOREIGN KEY (`office_id`) REFERENCES `table4` (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=382425 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;
CREATE TABLE `table2` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`user_id` int(11) DEFAULT NULL,
`active` tinyint(1) NOT NULL,
`date_modified` datetime DEFAULT NULL,
`date_created` datetime NOT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `UNIQ_36E3BDB1A76ED395` (`user_id`),
CONSTRAINT `FK_36E3BDB1A76ED395` FOREIGN KEY (`user_id`) REFERENCES `table3` (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=174049 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;
CREATE TABLE `table1` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`salesperson_id` int(11) DEFAULT NULL,
`count_active_contracts` int(11) NOT NULL,
`average_initial_price` decimal(12,2) NOT NULL,
`average_contract_value` decimal(12,2) NOT NULL,
`total_sold` int(11) NOT NULL,
`total_active` int(11) NOT NULL,
`active` tinyint(1) NOT NULL,
`date_modified` datetime DEFAULT NULL,
`date_created` datetime NOT NULL,
`type` varchar(255) CHARACTER SET utf8 COLLATE utf8_unicode_ci NOT NULL,
`services_scheduled_today` int(11) NOT NULL,
`services_scheduled_week` int(11) NOT NULL,
`services_scheduled_month` int(11) NOT NULL,
`services_scheduled_summer` int(11) NOT NULL,
`serviced_today` int(11) NOT NULL,
`serviced_this_week` int(11) NOT NULL,
`serviced_this_month` int(11) NOT NULL,
`serviced_this_summer` int(11) NOT NULL,
`autopay_account_percentage` decimal(3,2) NOT NULL,
`value_per_door` decimal(12,2) NOT NULL,
`total_paid` int(11) NOT NULL,
`sales_status_summary` varchar(255) CHARACTER SET utf8 COLLATE utf8_unicode_ci NOT NULL,
`total_serviced` int(11) NOT NULL,
`services_scheduled_year` int(11) NOT NULL,
`serviced_this_year` int(11) NOT NULL,
`services_scheduled_yesterday` int(11) NOT NULL,
`serviced_yesterday` int(11) NOT NULL,
PRIMARY KEY (`id`),
KEY `SALESPERSON` (`type`),
KEY `SALES_FK_ON_SALES_STATE` (`salesperson_id`),
CONSTRAINT `SALES_FK_ON_SALES_STATE` FOREIGN KEY (`salesperson_id`) REFERENCES `table2` (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=181662521 DEFAULT CHARSET=utf8;
When you see "DEPENDENT SUBQUERY" in the explain, it isn't caching the result of the subquery. It's re-executing the subquery many times (once for each distinct value in the outermost query). I see in the explain that your outermost query is examining 36 million rows. So this is probably running the subquery many, many times.
This is documented here: https://dev.mysql.com/doc/refman/5.7/en/explain-output.html
For DEPENDENT SUBQUERY, the subquery is re-evaluated only once for each set of different values of the variables from its outer context. For UNCACHEABLE SUBQUERY, the subquery is re-evaluated for each row of the outer context.
One way to avoid this is to use a subquery as a derived table instead of as the argument to an IN() predicate. This is a better way to do a semi-join like you're doing.
SELECT ... FROM TableA
WHERE TableA.id IN (SELECT id FROM ...)
Should be equivalent to:
SELECT ... FROM TableA
JOIN (SELECT DISTINCT id FROM ...) AS TableB
ON TableA.id = TableB.id
The use of DISTINCT in the subquery means there's only one row per id returned by the subquery, so the join won't multiply the number of rows from TableA if there are multiple matches. This makes it a semi-join.
The following should do better:
SELECT table1.*
FROM table1
JOIN (
SELECT table1.id FROM table1
LEFT JOIN table2 ON table2.id = table1.salesperson_id
LEFT JOIN table3 ON table3.id = table2.user_id
LEFT JOIN table4 ON table3.office_id = table4.id
WHERE table1.type = 'Snapshot'
AND (table4.id = 25 OR table4.parent_id = 25)
LIMIT 500
) AS ids ON table1.id = ids.id;
You might also try to get rid of the index_merge. You're getting that because you're using OR for two different indexed columns in table4. It uses both indexes, and then unions them. Sometimes† it's better to use a UNION of two subqueries explicitly, instead of relying on the index_merge.
SELECT table1.*
FROM table1
JOIN (
SELECT table1.id FROM table1
JOIN table2 ON table2.id = table1.salesperson_id
JOIN table3 ON table3.id = table2.user_id
JOIN (
SELECT id FROM table4 WHERE id=25
UNION
SELECT id FROM table4 WHERE parent_id=25
) AS t4 ON table3.office_id = t4.id
WHERE table1.type = 'Snapshot'
LIMIT 500
) AS ids ON table1.id = ids.id;
You're also using LEFT JOIN unnecessarily, so I replaced it with JOIN. The MySQL optimizer will silently convert it to an inner join, but I think you should study what LEFT JOIN means, and use it when it's called for.
† I say "sometimes" because which method is best might depend on your data, so you should test it both ways.
Because I needed to limit a DELETE query with joins (which isn't possible in MySQL), there is another option. It is in no way the better one (can't beat Bill's answer).
But it works, and the query is extremely fast, albeit not very flexible, because it has a minimum number of rows it can pull, which for a 38M row table is 575k (no idea why).
But here it is:
SELECT COUNT(*) FROM table1
JOIN table2 ON table2.id = table1.salesperson_id
JOIN table3 ON table3.id = table2.user_id
JOIN table4 ON table3.office_id = table4.id
WHERE table1.type = "Snapshot"
AND table4.id = 113 OR table4.parent_id =113
AND RAND()<=0.001;
But Bill's answer should be more than enough for everyone.
P.S. I'll ask the question about RAND() in a WHERE clause and will post the link here. Maybe it will help some desperate dev in 2025.
You got carried away with nesting, etc.
SELECT table1.*
FROM
(
SELECT table1.id
FROM table1
JOIN table2 ON table2.id = table1.salesperson_id
JOIN table3 ON table3.id = table2.user_id
JOIN table4 ON table3.office_id = table4.id
WHERE table1.type = "Snapshot"
AND (table4.id = 25
OR table4.parent_id = 25)
LIMIT 500
) AS ids
JOIN table1 USING(id)
Some discussion:
It is better to find the 500 ids and throw them into a tmp table than to haul around all the columns of table1.*. Hence the subquery with LIMIT 500.
Bill's UNION seems to be unnecessary since the Optimizer decided to use "index merge union". This may be only the second time I have seen that feature in use!
IN ( SELECT ... ) is probably never faster than an equivalent JOIN or EXISTS, whichever is appropriate. (JOIN is appropriate for your case.)
For table4, you have a perfectly good natural PK in logo_file_id; why not get rid of id and promote that to PK? (Similarly in table2.)
Aarrgghh... By doing my previous suggestion, you can bypass table2!
table1 has 181M rows? INT is always 4 bytes. You have a lot of columns that sound like small counters; consider using TINYINT UNSIGNED (1 byte; range: 0..255) or SMALLINT UNSIGNED, as sketched below. That should shrink the size of the table significantly, thereby speeding up cacheability and use of the table somewhat.
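A sketch of that change for two of the counter columns (hypothetical: verify each column's actual value range first, and note that a type change rebuilds the table, which will take a while at 181M rows):
ALTER TABLE table1
MODIFY services_scheduled_today SMALLINT UNSIGNED NOT NULL, -- 2 bytes, range 0..65535
MODIFY serviced_today SMALLINT UNSIGNED NOT NULL;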

Trouble finding a child field from a primary field in MySQL

I have two tables like below:
CREATE TABLE IF NOT EXISTS `countries` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`name` varchar(255) DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=196 ;
And another one:
CREATE TABLE IF NOT EXISTS `students` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`admission_no` varchar(255) DEFAULT NULL,
`nationality_id` int(11) DEFAULT NULL,
`country_id` int(11) DEFAULT NULL,
`is_active` tinyint(1) DEFAULT '1',
`is_deleted` tinyint(1) DEFAULT '0',
`created_at` datetime DEFAULT NULL,
`updated_at` datetime DEFAULT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `admission_no` (`admission_no`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=2 ;
So the problem is: I want to fetch the names for both nationality_id and country_id from the countries table. For this I have to use a LEFT JOIN, but I am getting the same name for both even when nationality_id and country_id are different, as I can only join the table once. Could someone please help me solve this?
If I understand you correctly, you can achieve this by LEFT JOINING the same table twice, using aliases.
Something like
SELECT *
FROM students s LEFT JOIN
countries c ON s.country_id = c.id LEFT JOIN
countries n ON s.nationality_id = n.id
@astander there is a little bug in your query (the second alias for countries, n, is not used in the ON statement). Here is a corrected statement:
select s.Id, cNationality.Name, cCountry.Name
from Students as s
left outer join Countries as cNationality on cNationality.Id = s.Nationality_id
left outer join Countries as cCountry on cCountry.Id = s.Country_id
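Since both selected columns are called Name, the result set would otherwise contain two identically named columns; a minimal variant with illustrative column aliases:
select s.Id,
cNationality.Name as nationality_name,
cCountry.Name as country_name
from Students as s
left outer join Countries as cNationality on cNationality.Id = s.Nationality_id
left outer join Countries as cCountry on cCountry.Id = s.Country_id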