MySQL Correlated Subquery vs Join performance

I've always thought that joins were faster than subqueries. However, for a very simple query on a small dataset the join returns in 1.0 s whereas the correlated subquery returns in 0.001 s. Something seems wrong. I note that both queries use the correct (appallingly named) indexes. Over 1 second seems excessive for the join. Any ideas?
Please compare these two queries with their Explain plans:
a) Using a Join
select user.id, user.username,
count(distinct bet_placed.id) as bets_placed,
count(distinct bet_won.id) as bets_won,
count(distinct bets_involved.id) as bets_involved
from user
left join bet as bet_placed on bet_placed.user_placed = user.id
left join bet as bet_won on bet_won.user_won = user.id
left join bet_accepters as bets_involved on bets_involved.user = user.id
group by user.id
Explain plan:
id select_type table type possible_keys key key_len ref rows filtered Extra
1 SIMPLE user index PRIMARY PRIMARY 4 NULL 86 100.00 NULL
1 SIMPLE bet_placed ref fk_bet_user1_idx fk_bet_user1_idx 4 xxx.user.id 6 100.00 "Using index"
1 SIMPLE bet_won ref user_won_idx user_won_idx 5 xxx.user.id 8 100.00 "Using index"
1 SIMPLE bets_involved ref FK_user_idx FK_user_idx 4 xxx.user.id 8 100.00 "Using index"
Average response time: 1.0 secs
b) Using a Correlated-Subquery
select user.id, user.username,
(select COALESCE(count(bet.id), 0) from bet where bet.user_placed = user.id) as bets_placed,
(select COALESCE(count(bet.id), 0) from bet where bet.user_won = user.id) as bets_won,
(select COALESCE(count(bet_accepters.id), 0) from bet_accepters where bet_accepters.user = user.id) as bets_involved
from user;
Explain plan:
id select_type table type possible_keys key key_len ref rows filtered Extra
1 PRIMARY user ALL NULL NULL NULL NULL 86 100.00 NULL
4 "DEPENDENT SUBQUERY" bet_accepters ref FK_user_idx FK_user_idx 4 xxx.user.id 8 100.00 "Using index"
3 "DEPENDENT SUBQUERY" bet ref user_won_idx user_won_idx 5 xxx.user.id 8 100.00 "Using index"
2 "DEPENDENT SUBQUERY" bet ref fk_bet_user1_idx fk_bet_user1_idx 4 xxx.user.id 6 100.00 "Using index"
Average response time: 0.001 secs
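For what it's worth, a common way to keep a join-based version fast (a sketch only, assuming the same schema as above) is to pre-aggregate each child table in a derived table. The joins then never multiply rows per user, so no count(distinct) is needed:

```sql
select user.id, user.username,
       coalesce(p.n, 0) as bets_placed,
       coalesce(w.n, 0) as bets_won,
       coalesce(i.n, 0) as bets_involved
from user
left join (select user_placed as uid, count(*) as n
             from bet group by user_placed) as p on p.uid = user.id
left join (select user_won as uid, count(*) as n
             from bet group by user_won) as w on w.uid = user.id
left join (select `user` as uid, count(*) as n
             from bet_accepters group by `user`) as i on i.uid = user.id;
```

Each derived table is scanned once and aggregated before joining, which is essentially what the correlated subqueries achieve per row, but in bulk.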

Please see the source quoted below, which shows a comparison in speed/rows for different types of query.
There may be little or no difference either way on 'smaller' datasets (and results may vary with how the database has been set up, as well as the DBMS used). In relation to other 'query types', however, both of these are much faster than other operations:
Subquery vs. Join
Both the subquery and join solutions perform reasonably well when very
small partitions are involved (up to several hundred rows per
partition). As partition size grows, the performance of these
solutions degrades in a quadratic (N²) manner, becoming quite poor.
But as long as the partitions are small, performance degradation
caused by an increase in number of partitions is linear. One factor
that might affect your choice between using a subquery-based or
join-based solution is the number of aggregates requested. As I
discussed, the subquery-based approach requires a separate scan of the
data for each aggregate, whereas the join-based approach doesn’t—so
you’ll most likely want to use the join approach when you need to
calculate multiple aggregates.
~SOURCE

Related

Why is this Mysql statement slow AND using wrong indices

The SQL at the bottom is super slow (~12-15 seconds) and I don't understand why. Before you read the whole thing, just look at the first COALESCE. If I replace it with "0", the query is super fast (0.0051 s). If I run the contained subquery on its own with a hard-coded client_id, it is super fast, too.
The table rest_io_log, which is used in that COALESCE, contains a lot of entries (more than 5 million) and therefore has lots of indexes to check the contents fast.
The two most important indices for this topic are these:
timestamp - contains only this column
account_id, client_id, timestamp - contains these 3 columns in this order
When I prepend this statement with an "EXPLAIN" it says:
id select_type table partitions type possible_keys key key_len ref rows filtered Extra
1 PRIMARY cl NULL range "PRIMARY, index_user_id" index_user_id 485 NULL 2 100.00 "Using index condition"
1 PRIMARY rates NULL eq_ref PRIMARY PRIMARY 4 oauth2.cl.rate 1 100.00 NULL
4 "DEPENDENT SUBQUERY" traffic NULL ref "unique, unique_account_id_client_id_date, index_date, index_account_id_warning_100_client_id_date" unique 162 "const, const, oauth2.cl.client_id" 1 100.00 "Using index condition"
3 "DEPENDENT SUBQUERY" traffic NULL ref "unique, unique_account_id_client_id_date, index_account_id_warning_100_client_id_date" unique_account_id_client_id_date 158 "const, oauth2.cl.client_id" 56 100.00 "Using where; Using index; Using filesort"
2 "DEPENDENT SUBQUERY" rest_io_log NULL index "index_client_id, index_account_id_client_id_timestamp, index_account_id_timestamp, index_account_id_duration_timestamp, index_account_id_statuscode, index_account_id_client_id_statuscode, index_account_id_rest_path, index_account_id_client_id_rest_path" timestamp 5 NULL 2 5.00 "Using where"
On the bottom line we can see there are tons of indexes available, and it chooses timestamp, which is actually not the best choice because account_id and client_id are available, too.
If I enforce the right index by adding "USE INDEX (index_account_id_client_id_timestamp)" to the subquery the execution time is reduced to 8 seconds and the EXPLAIN looks like this:
id select_type table partitions type possible_keys key key_len ref rows filtered Extra
1 PRIMARY cl NULL range "PRIMARY, index_user_id" index_user_id 485 NULL 2 100.00 "Using index condition"
1 PRIMARY rates NULL eq_ref PRIMARY PRIMARY 4 oauth2.cl.rate 1 100.00 NULL
4 "DEPENDENT SUBQUERY" traffic NULL ref "unique, unique_account_id_client_id_date, index_date..." unique 162 "const, const, oauth2.cl.client_id" 1 100.00 "Using index condition"
3 "DEPENDENT SUBQUERY" traffic NULL ref "unique, unique_account_id_client_id_date, index_acco..." unique_account_id_client_id_date 158 "const, oauth2.cl.client_id" 56 100.00 "Using where; Using index; Using filesort"
2 "DEPENDENT SUBQUERY" rest_io_log NULL ref index_account_id_client_id_timestamp index_account_id_client_id_timestamp 157 "const, oauth2.cl.client_id" 1972 100.00 "Using where; Using index; Using filesort"
SELECT
cl.timestamp AS active_since,
GREATEST
(
COALESCE
(
(
SELECT
timestamp AS last_request
FROM
rest_io_log USE INDEX (index_account_id_client_id_timestamp)
WHERE
account_id = 12345 AND
client_id = cl.client_id
ORDER BY
timestamp DESC
LIMIT
1
),
"0000-00-00 00:00:00"
),
COALESCE
(
(
SELECT
CONCAT(date, " 00:00:00") AS last_request
FROM
traffic
WHERE
account_id = 12345 AND
client_id = cl.client_id
ORDER BY
date DESC
LIMIT
1
),
"0000-00-00 00:00:00"
)
) AS last_request,
(
SELECT
requests
FROM
traffic
WHERE
account_id = 12345 AND
client_id = cl.client_id AND
date=NOW()
) AS traffic_today,
cl.client_id AS user_account_name,
t.rate_name,
t.rate_traffic,
t.rate_price
FROM
clients AS cl
LEFT JOIN
(
SELECT
id AS rate_id,
name AS rate_name,
daily_max_traffic AS rate_traffic,
price AS rate_price
FROM
rates
) AS t
ON cl.rate=t.rate_id
WHERE
cl.user_id LIKE "12345|%"
AND
cl.client_id LIKE "api_%"
AND
cl.client_id LIKE "%_12345"
;
the response of the total query looks like this:
active_since last_request traffic_today user_account_name rate_name rate_traffic rate_price
"2019-01-16 15:40:34" "2019-04-23 00:00:00" NULL api_some_account_12345 "Some rate name" 1000 0.00
"2019-01-16 15:40:34" "2022-10-27 00:00:00" NULL api_some_other_account_12345 "Some rate name" 1000 0.00
Can you help?
Why is this Mysql statement slow
Fetching the same row multiple times. Use a JOIN instead of repeated subqueries.
Use MAX instead of ORDER BY and LIMIT 1:
SELECT MAX(timestamp)
FROM ...
WHERE a=12345 AND c=...
Don't use USE INDEX -- what helps today may hurt tomorrow.
Do you really need to fetch both date and timestamp? Don't they mean the same thing? Or does the data entry need to simplify those down to a single column?
CONCAT(date, " 00:00:00") is effectively identical to date. Making that change lets you combine those first two subqueries.
cl.client_id LIKE "api_%" AND cl.client_id LIKE "%_12345" ==> cl.client_id LIKE 'api%12345'.
Don't use LEFT JOIN ( SELECT ... ) ON .... Instead, simply do LEFT JOIN rates ON ....
Suggested indexes:
rest_io_log: INDEX(account_id, client_id, timestamp)
clients: INDEX(user_id, client_id, rate, timestamp)
rates: INDEX(rate_id, rate_name, rate_traffic, rate_price) -- assuming the above change
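Putting the MAX and COALESCE advice together, the first subquery could be rewritten along these lines (a sketch only; identifiers taken from the question):

```sql
SELECT COALESCE(
         (SELECT MAX(timestamp)          -- replaces ORDER BY timestamp DESC LIMIT 1
            FROM rest_io_log
           WHERE account_id = 12345
             AND client_id  = cl.client_id),
         '0000-00-00 00:00:00') AS last_request
```

With INDEX(account_id, client_id, timestamp) in place, MAX(timestamp) becomes a single probe at the end of the index range rather than a scan plus sort.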

How to avoid Using temporary; Using filesort on MySql Query

Currently I am facing a rather slow query on a website, which also slows down the server on more traffic. How can I rewrite the query or what index can I write to avoid "Using temporary; Using filesort"? Without "order by" everything works fast, but without the wanted result/order.
SELECT cams.name, models.gender, TIMESTAMPDIFF(YEAR, models.birthdate, CURRENT_DATE) AS age, lcs.viewers
FROM cams
LEFT JOIN cam_tags ON cams.id = cam_tags.cam_id
INNER JOIN tags ON cam_tags.tag_id = tags.id
LEFT JOIN model_cams ON cams.id = model_cams.cam_id
LEFT JOIN models ON model_cams.model_id = models.id
LEFT JOIN latest_cam_stats lcs ON cams.id = lcs.cam_id
WHERE tags.name = '?'
ORDER BY lcs.time_stamp_id DESC, lcs.viewers DESC
LIMIT 24 OFFSET 96;
id select_type table partitions type possible_keys key key_len ref rows filtered Extra
1 SIMPLE tags NULL const "PRIMARY,tags_name_uindex" tags_name_uindex 766 const 1 100 "Using temporary; Using filesort"
1 SIMPLE cam_tags NULL ref "PRIMARY,cam_tags_cams_id_fk,cam_tags_tags_id_fk" cam_tags_tags_id_fk 4 const 75565047 100 "Using where"
1 SIMPLE cams NULL eq_ref PRIMARY PRIMARY 4 cam_tags.cam_id 1 100 NULL
1 SIMPLE model_cams NULL eq_ref model_platforms_platforms_id_fk model_platforms_platforms_id_fk 4 cam_tags.cam_id 1 100 NULL
1 SIMPLE models NULL eq_ref PRIMARY PRIMARY 4 model_cams.model_id 1 100 NULL
1 SIMPLE lcs NULL eq_ref "PRIMARY,latest_cam_stats_cam_id_time_stamp_id_viewers_index" PRIMARY 4 cam_tags.cam_id 1 100 NULL
There are many cases where it is effectively impossible to avoid "using temporary, using filesort".
"Filesort" does not necessarily involve a "file"; it is often done in RAM. Hence performance may not be noticeably hurt.
That said, I will assume your real question is "How can this query be sped up?".
Most of the tables are accessed via PRIMARY or "eq_ref" -- all good. But the second table involves touching an estimated 75M rows! Often that happens as the first table, not second. Hmmmm.
Sounds like cam_tags is a many-to-many mapping table? And it does not have any index starting with name? See this for proper indexes for such a table: http://mysql.rjweb.org/doc.php/index_cookbook_mysql#many_to_many_mapping_table
Since the WHERE and ORDER BY reference more than one table, it is essentially impossible to avoid "using temporary, using filesort".
Worse than that, it needs to find all the ones with "name='?'", sort the list, skip 96 rows, and only finally deliver 24.
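Following that cookbook advice, a typical index pair for such a mapping table looks like this (hypothetical DDL; it assumes cam_tags consists of just the columns cam_id and tag_id with no surrogate id):

```sql
ALTER TABLE cam_tags
  ADD PRIMARY KEY (tag_id, cam_id),       -- drives lookups that start from a tag
  ADD INDEX ix_cam_tag (cam_id, tag_id);  -- covers the reverse direction
```

Both indexes are covering, so the 75M-row mapping table is only touched through an index, never via a secondary row lookup.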

Radically slower subquery without autokey in MySQL 5.7 vs 5.6 - any way to force index?

I have a datehelper table with every YYYY-MM-DD as DATE between the years 2000 and 2100. To this I'm joining a subquery for all unit transactions. unit.end is a DATETIME so my subquery simplifies it to DATE and uses that to join to the datehelper table.
In 5.6 this query takes a couple seconds to run a massive amount of transactions, and it derives a table that is auto keyed based on the DATE(unit.end) in the subquery and uses that to join everything else fairly quickly.
In 5.7, it takes 600+ seconds and I can't get it to derive a table or follow the much better execution plan that 5.6 used. Is there a flag I need to set or some way to prefer the old execution plan?
Here's the query:
EXPLAIN SELECT datehelper.id AS date, MONTH(datehelper.id)-1 AS month, DATE_FORMAT(datehelper.id,'%d')-1 AS day,
IFNULL(SUM(a.total),0) AS total, IFNULL(SUM(a.tax),0) AS tax, IFNULL(SUM(a.notax),0) AS notax
FROM datehelper
LEFT JOIN
(SELECT
DATE(unit.end) AS endDate,
getFinalPrice(unit.id) AS total, tax, getFinalPrice(unit.id)-tax AS notax
FROM unit
INNER JOIN products ON products.id=unit.productID
INNER JOIN prodtypes FORCE INDEX(primary) ON prodtypes.id=products.prodtypeID
WHERE franchiseID='1' AND void=0 AND checkout=1
AND end BETWEEN '2020-01-01' AND DATE_ADD('2020-01-01', INTERVAL 1 YEAR)
AND products.prodtypeID NOT IN (1,10)
) AS a ON a.endDate=datehelper.id
WHERE datehelper.id BETWEEN '2020-01-01' AND '2020-12-31'
GROUP BY datehelper.id ORDER BY datehelper.id;
5.6 result (much faster):
id select_type table type possible_keys key key_len ref rows Extra
1 PRIMARY datehelper range PRIMARY PRIMARY 3 NULL 365 Using where; Using index
1 PRIMARY <derived2> ref <auto_key0> <auto_key0> 4 datehelper.id 10 NULL
2 DERIVED prodtypes index PRIMARY PRIMARY 4 NULL 10 Using where; Using index
2 DERIVED products ref PRIMARY,prodtypeID prodtypeID 4 prodtypes.id 9 Using index
2 DERIVED unit ref productID,end,void,franchiseID productID 9 products.id 2622 Using where
5.7 result (much slower, no auto key found):
id select_type table partitions type possible_keys key key_len ref rows filtered Extra
1 SIMPLE datehelper NULL range PRIMARY PRIMARY 3 NULL 366 100.00 Using where; Using index
1 SIMPLE unit NULL ref productID,end,void,franchiseID franchiseID 4 const 181727 100.00 Using where
1 SIMPLE products NULL eq_ref PRIMARY,prodtypeID PRIMARY 8 barkops3.unit.productID 1 100.00 Using where
1 SIMPLE prodtypes NULL eq_ref PRIMARY PRIMARY 4 barkops3.products.prodtypeID 1 100.00 Using index
I found the problem. It was the optimizer_switch 'derived_merge' flag which is new to 5.7.
https://dev.mysql.com/doc/refman/5.7/en/derived-table-optimization.html
This flag overrides materialization of derived tables if the optimizer thinks the outer WHERE can be pushed down into a subquery. In this case, that optimization was enormously more costly than joining a materialized table on an auto_key.
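For reference, the flag can be disabled per session (documented MySQL 5.7 syntax), which restores the materialize-and-auto-key behavior of 5.6:

```sql
SET SESSION optimizer_switch = 'derived_merge=off';
-- Re-run the query: the derived table should be materialized with an
-- <auto_key0> again, as in the 5.6 plan.
```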

MySql complicated join performance issue

Please consider the following query
SELECT * FROM PC_SMS_OUTBOUND_MESSAGE AS OM
JOIN MM_TEXTOUT_SERVICE AS TOS ON TOS.TEXTOUT_SERVICE_ID = OM.SERVICE_ID
JOIN PC_SERVICE_NUMBER AS SN ON OM.TO_SERVICE_NUMBER_ID = SN.SERVICE_NUMBER_ID
JOIN PC_SUBSCRIBER AS SUB ON SUB.SERVICE_NUMBER_ID = SN.SERVICE_NUMBER_ID
JOIN MM_CONTACT CON ON CON.SUBSCRIBER_ID = SUB.SUBSCRIBER_ID
-- AND CON.MM_CLIENT_ID = 1
AND OM.CLIENT_ID= 1
AND OM.CREATED>='2013-05-08 11:47:53' AND OM.CREATED<='2014-05-08 11:47:53'
ORDER BY OM.SMS_OUTBOUND_MESSAGE_ID DESC LIMIT 50
To get the dataset I require I need to filter on the (commented out) CONTACTS client_id as well as the OUTBOUND_MESSAGES client_id but this is what changes the performance from milliseconds to tens of minutes.
Execution plan without "AND CON.MM_CLIENT_ID = 1":
id select_type table type possible_keys key key_len ref rows filtered Extra
1 SIMPLE OM index FK4E518EAA19F2EA2B,SERVICEID_IDX,CREATED_IDX,CLIENTID_IDX,CL_CR_ST_IDX,CL_CR_STYPE_ST_IDX,SID_TOSN_CL_CREATED_IDX PRIMARY 8 NULL 6741 3732.00 Using where
1 SIMPLE SUB ref PRIMARY,FKA1845E3459A7AEF FKA1845E3459A7AEF 9 mmlive.OM.TO_SERVICE_NUMBER_ID 1 100.00 Using where
1 SIMPLE SN eq_ref PRIMARY PRIMARY 8 mmlive.OM.TO_SERVICE_NUMBER_ID 1 100.00 Using where
1 SIMPLE CON ref FK2BEC061CA525D30,SUB_CL_IDX FK2BEC061CA525D30 8 mmlive.SUB.SUBSCRIBER_ID 1 100.00
1 SIMPLE TOS eq_ref PRIMARY,FKDB3DF298AB3EF4E2 PRIMARY 8 mmlive.OM.SERVICE_ID 1 100.00
Execution plan with "AND CON.MM_CLIENT_ID = 1":
id select_type table type possible_keys key key_len ref rows filtered Extra
1 SIMPLE CON ref FK2BEC061CA525D30,FK2BEC06134399E2A,SUB_CL_IDX FK2BEC06134399E2A 8 const 18306 100.00 Using temporary; Using filesort
1 SIMPLE SUB eq_ref PRIMARY,FKA1845E3459A7AEF PRIMARY 8 mmlive.CON.SUBSCRIBER_ID 1 100.00
1 SIMPLE OM ref FK4E518EAA19F2EA2B,SERVICEID_IDX,CREATED_IDX,CLIENTID_IDX,CL_CR_ST_IDX,CL_CR_STYPE_ST_IDX,SID_TOSN_CL_CREATED_IDX FK4E518EAA19F2EA2B 9 mmlive.SUB.SERVICE_NUMBER_ID 3 100.00 Using where
1 SIMPLE SN eq_ref PRIMARY PRIMARY 8 mmlive.SUB.SERVICE_NUMBER_ID 1 100.00 Using where
1 SIMPLE TOS eq_ref PRIMARY,FKDB3DF298AB3EF4E2 PRIMARY 8 mmlive.OM.SERVICE_ID 1 100.00
Any suggestions on how to format the above to make it a little easier on the eye would be good.
ID fields are primary keys.
There are indexes on all joining columns.
You may be able to fix this problem by using a subquery:
JOIN (SELECT C.* FROM MM_CONTACT C WHERE C.MM_CLIENT_ID = 1) C ON C.SUBSCRIBER_ID = SUB.SUBSCRIBER_ID
This will materialize the matching rows, which could have downstream effects on the query plan.
If this doesn't work, then edit your query and add:
The explain plans for both queries.
The indexes available on the table.
EDIT:
Can you try creating a composite index:
PC_SMS_OUTBOUND_MESSAGE(CLIENT_ID, CREATED, SERVICE_ID, TO_SERVICE_NUMBER_ID, SMS_OUTBOUND_MESSAGE_ID);
This may change both query plans to start on the OM table with the appropriate filtering, hopefully making the results stable and good.
I've solved the riddle! For my case anyway so I'll share.
This all came down to the join order changing once I added that extra clause, which you can clearly see in the execution plan. When the query is fast, the Outbound Messages are at the top of the plan but when slow (after adding the clause), the Contacts table is at the top.
I think this means that the Outbound Messages index can no longer be utilised for the sorting, which causes the dreaded "Using temporary; Using filesort".
By simply adding the STRAIGHT_JOIN keyword directly after the SELECT, I could force the execution plan to join in the order written in the query.
Happy for anyone with a more intimate knowledge of this field to contradict any of the above in terms of what is actually happening but it definitely worked.
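Concretely, the change amounts to this (a sketch of the fix described above; the rest of the query is unchanged):

```sql
SELECT STRAIGHT_JOIN *
FROM PC_SMS_OUTBOUND_MESSAGE AS OM
JOIN MM_TEXTOUT_SERVICE AS TOS ON TOS.TEXTOUT_SERVICE_ID = OM.SERVICE_ID
-- ... remaining joins and WHERE clause as in the original query;
-- tables are now joined in the order they are written, with OM first,
-- so its index can drive the ORDER BY ... LIMIT.
```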

Mysql Query taking more time to execute?

SELECT BB.NAME BranchName,VI.NAME Village,COUNT(BAC.CBSACCOUNTNUMBER) "No.Of Accounts",
SUM(BAC.CURRENTBALANCE) SumOfAmount,
SUM(CASE WHEN transactiontype = 'C' THEN amount ELSE 0 END) AS CreditTotal,
SUM(CASE WHEN transactiontype = 'D' THEN amount ELSE 0 END) AS DebitTotal,
SUM(CASE WHEN transactiontype = 'C' THEN amount WHEN transactiontype = 'D' THEN -1 * amount ELSE 0 END) AS CurrentBalance
FROM CUSTOMER CU,APPLICANT AP,ADDRESS AD,VILLAGE VI,BANKBRANCH BB,BANKACCOUNT BAC
LEFT OUTER JOIN accounttransaction ACT ON ACT.BANKACCOUNT_CBSACCOUNTNUMBER=BAC.CBSACCOUNTNUMBER
AND DATE_FORMAT(ACT.TRANDATE,'%Y-%m-%d')<='2013-05-09'
AND DATE_FORMAT(BAC.ACCOUNTOPENINGDATE,'%Y-%m-%d') <'2013-05-09'
AND ACT.BANKACCOUNT_CBSACCOUNTNUMBER IS NOT NULL
WHERE CU.CODE=AP.CUSTOMER_CODE AND BAC.ENTITY='CUSTOMER' AND BAC.ENTITYCODE=CU.CODE
AND AD.ENTITY='APPLICANT' AND AD.ENTITYCODE=AP.CODE
AND AD.VILLAGE_CODE=VI.CODE AND VI.STATE_CODE=AD.STATE_CODE AND VI.DISTRICT_CODE=AD.DISTRICT_CODE
AND VI.BLOCK_CODE=AD.BLOCK_CODE AND VI.PANCHAYAT_CODE=AD.PANCHAYAT_CODE
AND CU.BANKBRANCH_CODE=BB.CODE AND BAC.CBSACCOUNTNUMBER IS NOT NULL AND ACT.TRANSACTIONTYPE IS NOT NULL
GROUP BY BB.NAME,VI.NAME LIMIT 10;
Below is my explain plan:
id select_type table type possible_keys key key_len ref rows Extra
1 SIMPLE AD index ADDRESS_ENTITYCODE ADDRESS_ENTITYCODE 598 NULL 47234 Using where; Using index; Using temporary; Using filesort
1 SIMPLE VI ref PRIMARY PRIMARY 62 fiserveraupgb.AD.VILLAGE_CODE 1 Using where
1 SIMPLE AP eq_ref PRIMARY,AppCodeIndex PRIMARY 62 fiserveraupgb.AD.ENTITYCODE 1
1 SIMPLE BAC ref BANKACCOUNT_ENTITYCODE BANKACCOUNT_ENTITYCODE 63 fiserveraupgb.AP.CUSTOMER_CODE 1 Using where; Using index
1 SIMPLE CU eq_ref PRIMARY,CustCodeIndex PRIMARY 62 fiserveraupgb.AP.CUSTOMER_CODE 1
1 SIMPLE BB ref PRIMARY,Bankbranch_CodeName PRIMARY 62 fiserveraupgb.CU.BANKBRANCH_CODE 1
1 SIMPLE ACT index NULL accounttransaction_sysidindes 280 NULL 22981 Using where; Using index; Using join buffer
MySQL server version is 5.5 and I am using MySQL Workbench. The query above takes 13 minutes to execute. Please suggest the best approach; I have created indexes on all the columns involved.
You mainly need indexes on columns that are used in joins and in your where clause. Other indexes don't add value for your select statements and slow down your inserts and updates.
In this case, you're using the column values in functions. Due to this the indexes cannot be used efficiently.
An expression like this is very inefficient:
DATE_FORMAT(ACT.TRANDATE,'%Y-%m-%d')<='2013-05-09'
It causes a lot of string conversions, because every TRANDATE value is converted to a string representation. These values need to be stored temporarily and are not indexed, so apart from the conversion cost, any index on ACT.TRANDATE is no longer used. That is probably causing the rather expensive 'Using join buffer' at the end of your explain plan.
Rather, convert the string '2013-05-09' to a date value and use it as a constant in, or a parameter for, your query.
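In other words, compare the raw column against a constant so the index on TRANDATE stays usable (sketch; the two predicates are equivalent for DATETIME values):

```sql
-- Not sargable: every TRANDATE is run through DATE_FORMAT before comparing
WHERE DATE_FORMAT(ACT.TRANDATE, '%Y-%m-%d') <= '2013-05-09'

-- Sargable: the bare column can use an index range scan
WHERE ACT.TRANDATE < '2013-05-10'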
Another thing to do is to create not separate indexes for separate columns, but one index for a group of columns that is used together in a where and/or join. For instance, this part:
AD.ENTITY = 'APPLICANT' AND
AD.ENTITYCODE = AP.CODE AND
AD.VILLAGE_CODE = VI.CODE
Having one index on the columns ENTITY, ENTITYCODE, and VILLAGE_CODE together would be more efficient than having a separate index for each of them. And it may help to include the other columns as well.
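As a sketch (hypothetical index name; columns taken from the predicate above):

```sql
CREATE INDEX ix_address_lookup
    ON ADDRESS (ENTITY, ENTITYCODE, VILLAGE_CODE);
```

The leading ENTITY column matches the constant filter, and the remaining columns match the join conditions, so one index serves the whole predicate.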
And last: if a column or combination of columns is guaranteed to be unique, add a unique index. It is slightly faster for selects.
A general piece of advice: don't mix the old comma join syntax with ANSI joins. It makes your query hard to read.
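For example, the start of the FROM clause could be rewritten with explicit joins (a partial sketch using the question's tables):

```sql
FROM CUSTOMER CU
JOIN APPLICANT   AP  ON AP.CUSTOMER_CODE = CU.CODE
JOIN BANKACCOUNT BAC ON BAC.ENTITY = 'CUSTOMER' AND BAC.ENTITYCODE = CU.CODE
JOIN BANKBRANCH  BB  ON BB.CODE = CU.BANKBRANCH_CODE
-- ... the ADDRESS, VILLAGE and accounttransaction joins follow the same
-- pattern, each with its conditions moved from WHERE into ON.
```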
These hints (apart from the last one) should speed up your query, but it can still be slow, depending on the amount of data, the hardware and the load.