I have the following indexed view:
ALTER View [CIC].[vwCoaterC473Heat] WITH SCHEMABINDING
AS
Select
id as ID,
DATEADD(ms, -DATEPART(ms, read_time), read_time) as ReadTime,
equipment_id as EquipmentID,
...
...
From dbo.td_coater_c473_heat
Where read_time >= Convert(dateTime,'1/1/2012',120)
GO
CREATE UNIQUE CLUSTERED INDEX [IX_vwCoaterC473Heat_ReadTime_EquipmentID_ID]
ON [CIC].[vwCoaterC473Heat]
(
[ReadTime] ASC,
[EquipmentID] ASC,
[ID] ASC
)
GO
And I have the following query that references the indexed view:
Select
r.Coater,
r.ReadTime,
C473_left_A_actual_Temp,
C473_right_A_actual_Temp,
C473_left_B_actual_Temp,
C473_right_B_actual_Temp,
HD02A_Actual_Voltage,
HD02A_Actual_Current,
HD02B_Actual_Voltage,
HD02B_Actual_Current
From Cic.RandyTemp r
Inner Join Cic.vwCoaterC473Heat a
On a.EquipmentId = r.Coater And a.ReadTime = r.ReadTime
The query plan generated from this looks as follows:
I'm curious why SQL Server is bypassing the persisted data from the indexed view, and querying the underlying table the view is based on.
Are you using SQL Server Standard Edition? If so, you need the WITH (NOEXPAND) hint: only Enterprise Edition's optimizer matches indexed views automatically. Please try this version and see whether the indexed view is used. It might also just be that the optimizer has decided that accessing the indexes on the base table is more efficient.
SELECT
r.Coater,
r.ReadTime,
C473_left_A_actual_Temp, -- why no alias prefixes from here down?
C473_right_A_actual_Temp,
C473_left_B_actual_Temp,
C473_right_B_actual_Temp,
HD02A_Actual_Voltage,
HD02A_Actual_Current,
HD02B_Actual_Voltage,
HD02B_Actual_Current
FROM Cic.RandyTemp AS r
INNER JOIN Cic.vwCoaterC473Heat AS a WITH (NOEXPAND)
ON a.EquipmentId = r.Coater
AND a.ReadTime = r.ReadTime;
Related
We have two tables: one is properties and the other is property meta. When we get data from the "properties" table alone, the query takes less than one second to execute, but when we use a join to get the data from both tables with the query below, it takes more than 5 seconds to fetch the data, even though we have only 12,000 records in the tables. I think there is an issue in the SQL query; any help or suggestion will be appreciated.
SELECT
u.id,
u.property_title,
u.description,
u.city,
u.area,
u.address,
u.slug,
u.latitude,
u.longitude,
u.sync_time,
u.add_date,
u.is_featured,
u.pre_construction,
u.move_in_date,
u.property_status,
u.sale_price,
u.mls_number,
u.bedrooms,
u.bathrooms,
u.kitchens,
u.sub_area,
u.property_type,
u.main_image,
u.area_size as land_area,
pm7.meta_value as company_name,
pm8.meta_value as virtual_tour,
u.year_built,
u.garages
FROM
tbl_properties u
LEFT JOIN tbl_property_meta pm7
ON u.id = pm7.property_id
LEFT JOIN tbl_property_meta pm8
ON u.id = pm8.property_id
WHERE
u.status = 1
AND (pm7.meta_key = 'company_name')
AND (pm8.meta_key = 'virtual_tour')
AND (
(
( u.city = 'Delta'
OR u.post_code LIKE '%Delta%'
OR u.sub_area LIKE '%Delta%'
OR u.state LIKE '%Delta%')
AND country = 'Canada'
)
OR (
( u.city = 'Metro Vancouver Regional District'
OR u.post_code LIKE '%Metro Vancouver Regional District%'
OR u.sub_area LIKE '%Metro Vancouver Regional District%'
OR u.state LIKE '%Metro Vancouver Regional District%' )
AND country = 'Canada'
)
)
AND
u.pre_construction ='0'
GROUP BY
u.id
ORDER BY
u.is_featured DESC,
u.add_date DESC
Try adding this compound index:
ALTER TABLE tbl_property_meta ADD INDEX id_key (property_id, meta_key);
If it doesn't help make things faster, try this one.
ALTER TABLE tbl_property_meta ADD INDEX key_id (meta_key, property_id);
And, you should know that column LIKE '%somevalue' (with a leading %) is a notorious performance antipattern, resistant to optimization via indexes. (There's a way to create indexes for that shape of filter in PostgreSQL, but not in MariaDB / MySQL.)
Add another column for the searchable meta stuff; throw city, post_code, sub_area, and state (and probably some other things) into it. Then build a FULLTEXT index on that column, and use MATCH(..) AGAINST('Delta Metro Vancouver Regional District') in the WHERE clause instead of the LEFT JOINs (which are actually INNER JOINs, since the WHERE clause filters on their columns) and the really messy part of the WHERE clause.
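A sketch of that approach; the search_text column name and the backfill UPDATE are assumptions, not part of the original schema:

```sql
-- Hypothetical search column that concatenates the location fields.
ALTER TABLE tbl_properties ADD COLUMN search_text TEXT;

UPDATE tbl_properties
   SET search_text = CONCAT_WS(' ', city, post_code, sub_area, state);

ALTER TABLE tbl_properties ADD FULLTEXT INDEX ft_search (search_text);

-- The OR/LIKE block then collapses to a single indexed predicate:
SELECT id, property_title
  FROM tbl_properties
 WHERE country = 'Canada'
   AND MATCH(search_text)
       AGAINST('Delta "Metro Vancouver Regional District"' IN BOOLEAN MODE);
```

Note that FULLTEXT on InnoDB requires MySQL 5.6+; on older versions the table must be MyISAM.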
Also, the GROUP BY is probably unnecessary; removing it eliminates an extra sort on the intermediate set of rows.
I'm thinking about creating a hash index for (1), because it uses equalities, and a bitmap index for (2), because the state can only be 'accepted' or 'not accepted'. What else can I use? My other problem is that I can only try B-tree indexes on Oracle MySQL.
(1)select R.user_id from rent as R
inner join supervise S on
R.adress = S.adress
and R.space_id = S.space_id
group by R.user_id
having count(distinct S.supervisor_id) = 1
(2) select distinct P.adress, P.code from space as P where (P.adress, P.code) not in (
select P.adress, P.code from space as P
natural join rent as R
natural join state as E where E.state = 'accepted')
Since there are no directly limiting criteria in query #1, it will likely be done using a merge join, and no index will improve that.
For query #2, how selective is the criteria E.state = 'accepted'? If very selective (< 5-15% of query result), then index on E.state, indexes for the joins from E to R and from R to P, and index on P.adress, P.code.
Composite index on each table:
INDEX(space_id, adress)
Don't use WHERE (a,b) IN ( ... ) -- it performs very poorly.
Don't use IN ( SELECT ... ) -- it often performs poorly.
Instead, use a JOIN.
For state, have
INDEX(state)
(or is it already the PRIMARY KEY?)
If you need more help after all that, provide SHOW CREATE TABLE and EXPLAIN SELECT ....
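The IN-to-JOIN advice applied to query #2 might look like this sketch; it keeps the original NATURAL JOINs intact and turns NOT IN into an anti-join (LEFT JOIN ... IS NULL):

```sql
-- Spaces with no accepted rent: anti-join instead of NOT IN (SELECT ...).
SELECT DISTINCT P.adress, P.code
  FROM space AS P
  LEFT JOIN (
        SELECT P2.adress, P2.code
          FROM space AS P2
          NATURAL JOIN rent AS R
          NATURAL JOIN state AS E
         WHERE E.state = 'accepted'
       ) AS accepted
    ON accepted.adress = P.adress
   AND accepted.code = P.code
 WHERE accepted.adress IS NULL;
```

Unlike NOT IN, the anti-join also behaves sanely when the subquery can return NULLs, which is part of why it tends to optimize better.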
I have the following MYSQL query.
SELECT
COUNT(analyzer_host.server) AS count,
analyzer_host.server AS server
FROM
analyzer_host,
analyzer_url,
analyzer_code
WHERE
analyzer_host.server IS NOT NULL
AND analyzer_host.server != ''
AND analyzer_code.account_id = 33
AND analyzer_code.id = analyzer_url.url_id
AND analyzer_url.id = analyzer_host.url_id
GROUP BY analyzer_host.server;
I did some profiling on this query, and it is stuck in "Copying to tmp table". Is there a way I can avoid that? Also, any pointers on what is causing the query to create tmp tables?
First
SELECT COUNT(host.server) AS count, host.server AS server
FROM host
JOIN url ON url.id = host.url_id
JOIN code ON code.id = url.url_id
WHERE host.server IS NOT NULL
AND host.server != ''
AND code.account_id = 33
GROUP BY host.server;
That gets rid of the analyzer_ clutter and uses JOIN...ON syntax.
Second, it seems that the JOINs are not quite right -- is there both an id and a url_id in url? Is the url_id different between host and url?
Does code have PRIMARY KEY(account_id)? That is where the optimizer would like to start.
Please provide EXPLAIN SELECT ... so we can see if it is doing any table scans. If it is, then that is the problem, not the "tmp table".
Please provide SHOW CREATE TABLE for all three tables if you need further discussion.
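If the joins are as written, indexes along these lines might help; the column choices are assumptions derived from the query, not from the actual schema:

```sql
-- Start at the WHERE filter, then cover each join and the grouped column.
CREATE INDEX ix_code_account ON analyzer_code (account_id, id);
CREATE INDEX ix_url_urlid    ON analyzer_url  (url_id, id);
CREATE INDEX ix_host_urlid   ON analyzer_host (url_id, server);
```

Each index puts the lookup column first and covers the column the next step needs, so the whole chain can be walked without touching the base rows.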
I need some SQL help. I have a SELECT statement that references several tables and is hanging in the MySQL database. I would like to know if there is a better way to write this statement so that it runs efficiently and does not hang the DB. Any help/direction would be appreciated. Thanks.
Here is the code:
Select Max(b.BurID) As BurID
From My.AppTable a,
My.AddressTable c,
My.BurTable b
Where a.AppID = c.AppID
And c.AppID = b.AppID
And (a.Forename = 'Bugs'
And a.Surname = 'Bunny'
And a.DOB = '1936-01-16'
And c.PostcodeAnywhereBuildingNumber = '999'
And c.PostcodeAnywherePostcode = 'SK99 9Q9'
And c.isPrimary = 1
And b.ErrorInd <> 1
And DateDiff(CurDate(), a.ApplicationDate) <= 30)
There is NO mysql error in the log. Sorry.
Pro tip: use explicit JOINs rather than a comma-separated list of tables. It's easier to see the logic you're using to JOIN that way. Rewriting your query to do that gives us this.
select Max(b.BurID) As BurID
From My.AppTable AS a
JOIN My.AddressTable AS c ON a.AppID = c.AppID
JOIN My.BurTable AS b ON c.AppID = b.AppID
WHERE (a.Forename = 'Bugs'
And a.Surname = 'Bunny'
And a.DOB = '1936-01-16'
And c.PostcodeAnywhereBuildingNumber = '999'
And c.PostcodeAnywherePostcode = 'SK99 9Q9'
And c.isPrimary = 1
And b.ErrorInd <> 1
And DateDiff(CurDate(), a.ApplicationDate) <= 30)
Next pro tip: don't wrap columns in functions (like DateDiff()) in WHERE clauses, because that defeats using indexes to search. That means you should change the last line of your query to
AND a.ApplicationDate >= CurDate() - INTERVAL 30 DAY
This has the same logic as in your query, but it leaves a naked (and therefore index-searchable) column name in the search expression.
Next, we need to look at your columns to see how you are searching, and cook up appropriate indexes.
Let's start with AppTable. You're screening by specific values of Forename, Surname, and DOB. You're screening by a range of ApplicationDate values. Finally you need AppID to manage your join. So, this compound index should help. Its columns are in the correct order to use a range scan to satisfy your query, and contains the needed results.
CREATE INDEX search1 USING BTREE
ON AppTable
(Forename, Surname, DOB, ApplicationDate, AppID)
Next, we can look at your AddressTable. Similar logic applies. You'll enter this table via the JOINed AppID, and then screen by specific values of three columns. So, try this index
CREATE INDEX search2 USING BTREE
ON AddressTable
(AppID, PostcodeAnywherePostcode, PostcodeAnywhereBuildingNumber, isPrimary)
Finally, we're on to your BurTable. Use similar logic as the other two, and try this index.
CREATE INDEX search3 USING BTREE
ON BurTable
(AppID, ErrorInd, BurID)
This kind of index is called a compound covering index, and can vastly speed up the sort of summary query you have asked about.
I'm in over my head with a big mysql query (mysql 5.0), and i'm hoping somebody here can help.
Earlier I asked how to get distinct values from a joined query
mysql count only for distinct values in joined query
The response I got worked (using a subquery with join as)
select *
from media m
inner join
( select uid
from users_tbl
limit 0,30) map
on map.uid = m.uid
inner join users_tbl u
on u.uid = m.uid
unfortunately, my query has grown more unruly, and though I have it running, joining into a derived table is taking too long because there are no indexes available to the derived query.
my query now looks like this
SELECT mdate.bid, mdate.fid, mdate.date, mdate.time, mdate.title, mdate.name,
mdate.address, mdate.rank, mdate.city, mdate.state, mdate.lat, mdate.`long`,
ext.link,
ext.source, ext.pre, meta, mdate.img
FROM ext
RIGHT OUTER JOIN (
SELECT media.bid,
media.date, media.time, media.title, users.name, users.img, users.rank, media.address,
media.city, media.state, media.lat, media.`long`,
GROUP_CONCAT(tags.tagname SEPARATOR ' | ') AS meta
FROM media
JOIN users ON media.bid = users.bid
LEFT JOIN tags ON users.bid=tags.bid
WHERE `long` BETWEEN -122.52224684058 AND -121.79760915942
AND lat BETWEEN 37.07500915942 AND 37.79964684058
AND date = '2009-02-23'
GROUP BY media.bid, media.date
ORDER BY media.date, users.rank DESC
LIMIT 0, 30
) mdate ON (mdate.bid = ext.bid AND mdate.date = ext.date)
phew!
SO, as you can see, if I understand my problem correctly, I have two derived tables without indexes (and I don't deny that I may have screwed up the JOIN statements somehow; I kept messing with different types, and this ended up giving me the result I wanted).
What's the best way to create a query similar to this which will allow me to take advantage of the indexes?
Dare I say, I actually have one more table to add into the mix at a later date.
Currently, my query is taking .8 seconds to complete, but I'm sure if I could take advantage of the indexes, this could be significantly faster.
First, check for indices on ext(bid, date), users(bid) and tags(bid), you should really have them.
It seems, though, that it's LONG and LAT that cause you the most problems. You should try keeping your LONG and LAT together as a single coordinate POINT column, create a SPATIAL INDEX on that column, and query like this:
WHERE MBRContains(#MySquare, coordinate)
If you can't change your schema for some reason, you can try creating additional indices that include date as a first field:
CREATE INDEX ix_date_long ON media (date, `long`)
CREATE INDEX ix_date_lat ON media (date, lat)
These indices will be more efficient for your query, as you use an exact search on date combined with a range search on the axes.
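A sketch of the spatial approach; the coordinate column name and the @MySquare bounding box are assumptions, and note that in MySQL before 5.7 a SPATIAL INDEX requires a MyISAM table:

```sql
-- Add and populate a POINT column, then index it spatially.
ALTER TABLE media ADD COLUMN coordinate POINT;
UPDATE media SET coordinate = POINT(`long`, lat);
ALTER TABLE media MODIFY coordinate POINT NOT NULL,
                  ADD SPATIAL INDEX sp_coord (coordinate);

-- Build the bounding box once, then filter with MBRContains:
SET @MySquare = GeomFromText('POLYGON((
    -122.52224684058 37.07500915942,
    -121.79760915942 37.07500915942,
    -121.79760915942 37.79964684058,
    -122.52224684058 37.79964684058,
    -122.52224684058 37.07500915942))');

SELECT bid, title
  FROM media
 WHERE MBRContains(@MySquare, coordinate)
   AND date = '2009-02-23';
```

MBRContains does a fast minimum-bounding-rectangle check against the spatial index, which is exactly the box-shaped filter the BETWEEN predicates express.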
Starting fresh:
Question - why are you grouping by both media.bid and media.date? Can a bid have records for more than one date?
Here's a simpler version to try:
SELECT
media.bid,
media.date,
media.time,
media.title,
users.name,
users.img,
users.rank,
media.address,
media.city,
media.state,
media.lat,
media.`long`,
ext.link,
ext.source,
ext.pre,
( SELECT GROUP_CONCAT(tags.tagname SEPARATOR ' | ')
FROM tags
WHERE tags.bid = ext.bid
) AS meta
FROM
ext
LEFT JOIN
media ON ext.bid = media.bid AND ext.date = media.date
JOIN
users ON ext.bid = users.bid
WHERE
`long` BETWEEN -122.52224684058 AND -121.79760915942
AND lat BETWEEN 37.07500915942 AND 37.79964684058
AND ext.date = '2009-02-23'
AND users.userid IN
(
SELECT userid FROM users ORDER BY rank DESC LIMIT 30
)
ORDER BY
media.date,
users.rank DESC
LIMIT 0, 30
You might want to compare your performance against using a temporary table for each selection, and joining those tables together.
CREATE TEMPORARY TABLE whatever1 AS SELECT ...;
CREATE TEMPORARY TABLE whatever2 AS SELECT ...;
SELECT ... FROM whatever1 JOIN whatever2 ON ...;
DROP TEMPORARY TABLE whatever1, whatever2;
If your system has enough memory to hold the full tables, this might work out much faster. It depends on how big your database is.