I have a performance problem. My manager asked me to tune a select statement.
We have a table queried as follows:
SELECT [AcctDetailReportId]
,[WorkOrderEneteredDate]
,[LocationName]
,[LocationNumber]
,[District]
,[CostCenter]
,[GLCode]
,[WorkType]
,[RequestType]
,[RequestCode]
,[ServiceLocation]
,[Cause]
,[Remedy]
,[RequestDescription]
,[CreatedBy]
,[Priority]
,[WorkOrderNumber]
,[Status]
,[DNE]
,[InvoiceNumber]
,[VendorCode]
,[VendorName]
,[Quote1]
,[Quote2]
,[Invoiceid]
,[InvoiceSubmittedDate]
,[WorkComplete]
,[TotalLaborCost]
,[TotalMaterialCost]
,[SalesTax]
,[InvoiceTotal]
,[WarrantyExpirationDate]
,[UnderWarranty]
,[MallName]
--,[AddressID]
--,[CommunicationID]
--,[ContactID]
--,[StateID]
--,[CountryID]
--,[LanguageID]
--,[AddressTypeID]
,[Line1]
,[Line2]
,[City]
,[Province]
,[Region]
,[ZipPostalCode]
--,[DeactivateDateTime]
--,[DeactivateUser]
,[CreateDateTime]
,[CreateUser]
--,[PreviousRecordID]
,[LocationState]
,[CheckNumber]
,[CheckDate]
FROM [Darden].[dbo].[RPT_AccountDetailReport]
GO"
The table contains about 29,000 records, and it takes about 2 minutes to retrieve the data using a Clustered Index Scan.
The table has only one clustered index.
The requirement is to return all records and all columns, but in less time.
Can anyone help me with that?
Thanks,
Karthik
Have you reorganized/rebuilt your index(es)? The script below generates statements to reorganize or rebuild all the indexes for a table where i.name is like whatever you put in the filter. There are a bunch of lines below that are commented out which I have used in the past.
SELECT
--stats.object_id AS objectid,
--QUOTENAME(s.name) AS schemaname,
--stats.index_id AS indexid,
i.name AS index_name,
--stats.partition_number AS partitionnum,
stats.avg_fragmentation_in_percent AS frag,
stats.page_count,
QUOTENAME(o.name) AS objectname,
CASE
when stats.avg_fragmentation_in_percent <= 30 then 'Reorganize'
when stats.avg_fragmentation_in_percent > 30 then 'Rebuild'
END AS 'action_to_take',
CASE
when stats.avg_fragmentation_in_percent <= 30 then 'ALTER INDEX '+i.name+ ' ON ' +DB_NAME()+'.'+QUOTENAME(s.name)+'.'+QUOTENAME(o.name)+' REORGANIZE;'
when stats.avg_fragmentation_in_percent > 30 then 'ALTER INDEX '+i.name+ ' ON ' +DB_NAME()+'.'+QUOTENAME(s.name)+'.'+QUOTENAME(o.name)+' REBUILD;'
END AS 'Statement'
FROM
sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL , NULL, NULL) as stats,
sys.objects AS o,
sys.schemas AS s,
sys.indexes AS i
WHERE o.object_id = stats.object_id
AND s.schema_id = o.schema_id
AND i.object_id = stats.object_id
AND i.index_id = stats.index_id
AND i.name is not null and i.name not like '%missing index%'
AND stats.avg_fragmentation_in_percent >= 10.0
--AND stats.page_count >= 5000
--AND stats.index_id > 0
--and i.name like '%880%'
ORDER BY action_to_take,stats.avg_fragmentation_in_percent desc,stats.page_count desc
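For example, a row of the Statement column for the table in the question might look like the line below, ready to paste into a query window (the index name here is hypothetical, just for illustration):
ALTER INDEX PK_RPT_AccountDetailReport ON Darden.[dbo].[RPT_AccountDetailReport] REBUILD;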
Or you can click on Display Estimated Execution Plan and SSMS will suggest any indexes it thinks are missing.
HTH,
LarryR....
We have a scenario where users answer some questions related to a parent entity that we'll call a widget. Each question has both a numeric and word answer. Multiple users answer each question for a given widget.
We then display a row for each widget with the average numeric answer for each question. We do that using a MySQL pseudo-pivot with dynamic columns, as detailed here. So we end up with something like:
SELECT widget_id, ...
ROUND(IFNULL(AVG(CASE
WHEN LOWER(REPLACE(RQ.question, ' ', '_')) = 'overall_size' THEN
if(RA.num = '', 0, RA.num) END),0) + .0001, 2) AS `raw_avg_overall_size`,
...
... where overall_size would be one of the question types related to the widget, and for a given widget_id that question might have "answers" from 5 users like 1, 2, 2, 3, 1, based on the answer options below:
Answers
answer_id  answer_type   num  word
111        overall_size  1    x-large
112        overall_size  2    large
113        overall_size  3    medium
114        overall_size  4    small
115        overall_size  5    x-small
So we would end up with a row that had something like this:
widget_id  average_overall_size
115        1.80
What we can't figure out: if we round 1.80 to zero precision we get 2, which in this example maps to the word value 'large' in our data above. We would like to include that word in the query output too, so that we end up with:
widget_id  raw_average_overall_size  average_overall_size
115        1.80                      large
The issue is that we do not know the average for a row until the query runs. So how can we reference the word value for that average answer in the same row when executing the query?
As mentioned, we are pivoting into a variable and then running another query for the full execution. So if we join in the pivot section, that subquery looks something like this:
SET @phase_id = 1;
SET SESSION group_concat_max_len = 100000;
SET @SQL = NULL;
SET @NSQL = NULL;
SELECT GROUP_CONCAT(DISTINCT
CONCAT(
'ROUND(IFNULL(AVG(CASE
WHEN LOWER(REPLACE(RQ.short_question, '' '', ''_'')) = ''',
nsq,
''' THEN
if(RA.answer = '''', 0, RA.answer) END),0) + .0001, 2) AS `',
CONCAT('avg_raw_',nsq), '`,
REF.value, -- <- ******* THIS FAILS **** --
ROUND(IFNULL(STDDEV(CASE
WHEN LOWER(REPLACE(RQ.short_question, '' '', ''_'')) = ''',
nsq,
''' THEN RA.answer END), 0) + .0001, 3) AS `',
CONCAT('std_dev_', nsq), '`
'
)
ORDER BY display_order
) INTO @NSQL
FROM (
SELECT FD.ref_value, FD.element_name, RQ.display_order, LOWER(REPLACE(RQ.short_question, ' ', '_')) as nsq
FROM review_questions RQ
LEFT JOIN form_data FD ON FD.id = RQ.form_data_id
LEFT JOIN ref_values RV on FD.ref_value = RV.type
WHERE RQ.phase_id = @phase_id
AND FD.element_type = 'select'
AND RQ.is_active > 0
GROUP BY FD.element_name
HAVING MAX(RV.key_name) REGEXP '^[0-9]+$'
) nq
/****** suggested in 1st answer ******/
LEFT JOIN ref_values REF ON REF.`type` = nq.ref_value
AND REF.key_name = ROUND(CONCAT('avg_raw_',nsq), 0);
So we need the word answer (from the REF join's REF.value field in the above code) in the pivot output, but it fails with 'Unknown column REF.value'. If we put REF.value in its parent query's field list, that also fails with the same error.
You'll need to join the table/view/query again to get the 'large' value.
For example:
select a.*, b.word
from (
-- your query here
) a
join my_table b on b.answer_id = a.answer_id
and b.num = round(a.num);
An index on my_table (answer_id, num) will speed up the extra search.
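Spelled out, that index would be something like the statement below (my_table is just the placeholder name from the example above):
CREATE INDEX ix_my_table_answer_num ON my_table (answer_id, num);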
This fails, leading to the default of "2":
LOWER(REPLACE(RQ.question, ' ', '_')) = 'overall_size'
That is because the question seems to be "average_overall_size", not "overall_size".
String parsing and manipulation is the pits in SQL; I suggest handling it in the application instead.
Also, be aware that you may need a separate subquery to compute aggregate (eg AVG()), else it might not be computed over the set of values you think.
Query into temp table, then join
First query should produce table as follows:
CREATE temp table temp_average_size:
widget_id  average_overall_size  rounded_average_size
115        1.80                  2
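A minimal sketch of that first query, assuming the per-user numeric answers live in a table like user_answers (a hypothetical name; substitute the real pivot/average query from the question):
CREATE TEMPORARY TABLE temp_average_size AS
SELECT widget_id,
       ROUND(AVG(num), 2) AS average_overall_size,
       ROUND(AVG(num), 0) AS rounded_average_size
FROM user_answers                  -- hypothetical: one row per user's numeric answer per widget
WHERE answer_type = 'overall_size'
GROUP BY widget_id;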
LEFT JOIN
select s.*, a.word
from temp_average_size s LEFT JOIN answers a
ON (s.rounded_average_size = a.num AND a.answer_type = 'overall_size')
I have a table Transactions that looks similar to this:
id Type Field ObjectId NewValue
1 AddLink HasMember 4567 someDomain/someDirectory/1231
2 AddLink HasMember 4567 someDomain/someDirectory/1232
3 AddLink HasMember 4567 someDomain/someDirectory/1233
4 DeleteLink HasMember 4567 someDomain/someDirectory/1231
The numeric end of "NewValue" is what I am interested in.
In detail, I need the records of type "AddLink" for which no newer record of type "DeleteLink" exists, i.e. the records with id = 2 or 3 (since 4 deletes 1).
The "ObjectId" as well as the numeric part of "NewValue" are both IDs of entries in the "tickets" table, and I need the relevant tickets.
I tried this:
SELECT `Tickets`.* FROM `Transactions` AS `addedLinks`
LEFT JOIN `Tickets` ON RIGHT (`addedLinks`.`NewValue`, 4) = `Tickets`.`id`
WHERE `addedLinks`.`Type` = 'AddLink'
AND `addedLinks`.`Field` = 'Hasmember'
AND `addedLinks`.`ObjectId` = '4567'
AND NOT RIGHT (`addedLinks`.`NewValue`, 4) in (
SELECT `Tickets`.* FROM `Transactions` AS `deletedLinks`
LEFT JOIN `Tickets` ON RIGHT (`deletedLinks`.`NewValue`, 4) = `Tickets`.`id`
WHERE `deletedLinks`.`Type` = 'DeleteLink'
AND `addedLinks`.`id` < `deletedLinks`.`id`
AND `deletedLinks`.`Field` = 'Hasmember'
AND `deletedLinks`.`ObjectId` = '4567' )
This gives me:
SQL Error (1241): Operand should contain 1 column(s)
Unless I got something wrong, the problem is
RIGHT (`addedLinks`.`NewValue`, 4)
in the "AND NOT ... in()" statement.
Could anyone point me in the right direction here?
[EDIT]
Thanks to David K-J, the following works:
SELECT `Tickets`.* FROM `Transactions` AS `addedLinks`
LEFT JOIN `Tickets` ON RIGHT (`addedLinks`.`NewValue`, 4) = `Tickets`.`id`
WHERE `addedLinks`.`Type` = 'AddLink'
AND `addedLinks`.`Field` = 'Hasmember'
AND `addedLinks`.`ObjectId` = '5376'
AND NOT (RIGHT (`addedLinks`.`NewValue`, 4)) in (
SELECT `id` FROM `Transactions` AS `deletedLinks`
WHERE `deletedLinks`.`Type` = 'DeleteLink'
AND `addedLinks`.`id` < `deletedLinks`.`id`
AND `deletedLinks`.`Field` = 'Hasmember'
AND `deletedLinks`.`ObjectId` = '5376' )
but I don't understand why?
The problem here is your sub-select: as you are using it to provide the values for an IN clause, your sub-select should only select the id field, i.e. Transactions.* -> Transactions.id
So you end up with:
...
AND NOT (RIGHT (`addedLinks`.`NewValue`, 4)) IN
SELECT id FROM Transactions AS deletedLinks WHERE
...
The reason for this is that IN requires a list to compare with, e.g. foo IN (1, 2, 3, 4, 5). If your subquery selects multiple fields, the resulting list is conceptually a list of lists, like [1, 'a'], [2, 'b'], [3, 'c'], and it's going to complain at you =)
Ah, that's complicated, and with a subquery... make it simpler, and it will be much faster:
CREATE TEMPORARY TABLE `__del_max`
SELECT `NewValue`, MAX(`id`) AS id FROM `Transactions`
WHERE `Type` = 'DeleteLink'
GROUP BY `NewValue`;
CREATE INDEX _nv ON `__del_max` (`NewValue`);
SELECT * FROM `Transactions`
LEFT OUTER JOIN `__del_max` ON Transactions.NewValue = __del_max.NewValue AND __del_max.id > Transactions.id
WHERE Transactions.`Type` = 'AddLink' AND __del_max.id IS NULL;
You could have it as a single big join, but it's beneficial to use a temporary table so you can add an index ;)
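For completeness, a hedged sketch of that single big join (same anti-join idea, no temporary table, written against the Transactions table from the question; the Field/ObjectId filters can be added the same way as in the original query):
SELECT add_t.*
FROM `Transactions` AS add_t
LEFT OUTER JOIN `Transactions` AS del_t
    ON  del_t.`Type` = 'DeleteLink'
    AND del_t.`NewValue` = add_t.`NewValue`
    AND del_t.`id` > add_t.`id`
WHERE add_t.`Type` = 'AddLink'
  AND del_t.`id` IS NULL;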
We are facing a big problem with string length control in SQL Server 2008.
A brief recap of our system:
import data into a persistent staging area (PSA) from a *.txt file (semicolon as separator), using BULK INSERT in SQL Server;
in the PSA table all columns are varchar(MAX);
cleaning operations using an insert statement based on a select with multiple where conditions.
The problem concerns a single column's type and length: at the data warehouse level it has to be numeric and its length must not exceed 13 digits.
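As a side note, a digits-only check is stricter than ISNUMERIC (which also accepts signs, currency symbols and exponents); a hedged sketch of that 13-digit numeric constraint on the column discussed below might look like:
-- keep only values that are 1-13 characters long and contain nothing but digits
SELECT *
FROM psa_stock
WHERE LEN(LTRIM(RTRIM(codice_ean_prodotto))) BETWEEN 1 AND 13
  AND LTRIM(RTRIM(codice_ean_prodotto)) NOT LIKE '%[^0-9]%'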
The select is the following:
select cast(LTRIM(RTRIM(data_giacenza)) as numeric),
LTRIM(RTRIM(codice_socio)),
LTRIM(RTRIM(codice_gln)),
LTRIM(RTRIM(tipo_gln)),
LTRIM(RTRIM(codice_articolo_socio)),
LTRIM(RTRIM(codice_ean_prodotto)),
LTRIM(RTRIM(codice_ecat_prodotto)),
LTRIM(RTRIM(famiglia)),
LTRIM(RTRIM(marca)),
LTRIM(RTRIM(classificazione_liv_1)),
LTRIM(RTRIM(classificazione_liv_2)),
LTRIM(RTRIM(classificazione_liv_3)),
LTRIM(RTRIM(classificazione_liv_4)),
LTRIM(RTRIM(modello)),
LTRIM(RTRIM(descrizione_articolo)),
cast(LTRIM(RTRIM(giacenza)) as numeric),
cast(LTRIM(RTRIM(acquistato)) as numeric), 'X' FROM psa_stock a
where EXISTS
(
SELECT 0
FROM(
SELECT
data_giacenza
,codice_socio
,codice_gln
,codice_articolo_socio
FROM psa_stock
where
LEN(LTRIM(RTRIM(data_giacenza))) = 8 and LEN(LTRIM(RTRIM(codice_socio))) = 3
and LEN(LTRIM(RTRIM(codice_gln))) = 13 and LEN(LTRIM(RTRIM(tipo_gln))) = 3
and LEN(LTRIM(RTRIM(codice_articolo_socio))) <= 15
and (LEN(LTRIM(RTRIM(codice_ean_prodotto))) <= 13 or LEN(ISNULL(codice_ean_prodotto, '')) = 0)
and (LEN(LTRIM(RTRIM(codice_ecat_prodotto))) = 9 or LEN(ISNULL(codice_ecat_prodotto, '')) = 0)
and LEN(LTRIM(RTRIM(famiglia))) = 2
and (LEN(LTRIM(RTRIM(marca))) <= 20 or LEN(ISNULL(marca, '')) = 0)
and (LEN(LTRIM(RTRIM(modello))) <= 30 or LEN(ISNULL(modello, '')) = 0)
and (LEN(LTRIM(RTRIM(descrizione_articolo))) <= 50 or LEN(ISNULL(descrizione_articolo, '')) = 0)
and LEN(LTRIM(RTRIM(giacenza))) <= 5
and LEN(LTRIM(RTRIM(acquistato))) <= 5
and (LEN(LTRIM(RTRIM(classificazione_liv_1))) <= 15 or LEN(ISNULL(classificazione_liv_1, '')) = 0)
and (LEN(LTRIM(RTRIM(classificazione_liv_2))) <= 15 or LEN(ISNULL(classificazione_liv_2, '')) = 0)
and (LEN(LTRIM(RTRIM(classificazione_liv_3))) <= 15 or LEN(ISNULL(classificazione_liv_3, '')) = 0)
and (LEN(LTRIM(RTRIM(classificazione_liv_4))) <= 15 or LEN(ISNULL(classificazione_liv_4, '')) = 0)
and ISNUMERIC(ltrim(rtrim(REPLACE(data_giacenza, ' ', '')))) = 1
and ISNUMERIC(ltrim(rtrim(REPLACE(codice_gln, ' ', '')))) = 1
and ISNUMERIC(LTRIM(RTRIM(REPLACE(giacenza, ' ', '')))) = 1 and charindex(',', giacenza) = 0
and ISNUMERIC(LTRIM(RTRIM(REPLACE(acquistato, ' ', '')))) = 1
and ISNUMERIC(ltrim(rtrim(REPLACE(codice_ean_prodotto, ' ', '')))) = 1
and ISNUMERIC(ltrim(rtrim(REPLACE(codice_ecat_prodotto, ' ', '')))) = 1
and codice_socio in (select codice_socio from ana_socio)
and tipo_gln in (select tipo from ana_gln)
and codice_gln in (select codice_gln from dw_key_gln)
group by
data_giacenza
,codice_socio
,codice_gln
,codice_articolo_socio
having COUNT (*) = 1
) b
where
a.data_giacenza = b.data_giacenza and
a.codice_articolo_socio = b.codice_articolo_socio and
a.codice_socio = b.codice_socio and
a.codice_gln = b.codice_gln)
The critical field is codice_ean_prodotto.
In fact, the select also lets through values such as SEAGAT7636490026751, NE20000003039, NE20000002168, which are not numeric and, in the first case, exceed the maximum length.
As a result, the insert statement returns a
String or binary data would be truncated
error and the insertion fails.
Thanks in advance! I look forward to your help!!!
Enrico
Have you tried executing that query after adding codice_ean_prodotto = 'NE20000003039' to the where clause? Make sure these are actually the values giving you the problem. If the select returns a row with that where clause, then something's wrong with the logic.
I'm leaning towards your having COUNT(*) = 1 clause in the EXISTS subquery - is it possible to have more than one record for these specific keys? As long as your PK is made up of those 4 fields (data_giacenza, codice_articolo_socio, codice_socio, codice_gln), you shouldn't need the GROUP BY and HAVING clauses at all. If you're not joining on your primary key, that could be the culprit.
It's hard to tell without seeing your data model, however.
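Before re-running the full select with that extra filter, it can also help to look at just the suspect values in isolation; a sketch reusing the question's own trimming and ISNUMERIC conditions:
SELECT codice_ean_prodotto,
       LEN(LTRIM(RTRIM(codice_ean_prodotto))) AS ean_len,
       ISNUMERIC(LTRIM(RTRIM(REPLACE(codice_ean_prodotto, ' ', '')))) AS ean_looks_numeric
FROM psa_stock
WHERE codice_ean_prodotto IN ('SEAGAT7636490026751', 'NE20000003039', 'NE20000002168');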
I figured out what was wrong.
In the inner select we were excluding all records that failed the format constraints or were duplicated (the meaning of count(*) = 1), extracting only the PK of the destination table.
But the outer query matches on that PK only, so it also pulled back rows with the same key that had been excluded by the format constraints, and those rows caused the insert to fail with the truncation error.
Now I divided the steps:
Duplicates lookup and deletion
Selection with format constraints
It works!
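A minimal sketch of the first step, assuming the four key columns above identify a row (the format conditions for step 2 stay exactly as in the original select):
-- Step 1: remove every staging row whose key appears more than once
DELETE a
FROM psa_stock a
JOIN (
    SELECT data_giacenza, codice_socio, codice_gln, codice_articolo_socio
    FROM psa_stock
    GROUP BY data_giacenza, codice_socio, codice_gln, codice_articolo_socio
    HAVING COUNT(*) > 1
) dup
  ON  dup.data_giacenza = a.data_giacenza
  AND dup.codice_socio = a.codice_socio
  AND dup.codice_gln = a.codice_gln
  AND dup.codice_articolo_socio = a.codice_articolo_socio;
-- Step 2: run the selection with the format constraints (the LEN/ISNUMERIC checks)
-- against the de-duplicated table and insert the result.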
I've created this temp table in my stored procedure; as you can see, I have more than one record for the same ID:
@tmpTableResults
TmpInstallerID TmpConfirmDate TmpConfirmLocalTime
============== ============== ===================
173 2011-11-08 11:45:50
278 2011-11-04 09:06:26
321 2011-11-08 13:21:35
321 2011-11-08 11:44:54
483 2011-11-08 11:32:00
483 2011-11-08 11:31:59
645 2011-11-04 10:03:15
645 2011-11-04 07:03:15
That is the result of the query to create #tmpTableResults
DECLARE @tmpTableResults TABLE
(
TmpInstallerID int,
TmpConfirmDate date,
TmpConfirmLocalTime time
)
DECLARE @tmpTableQuery VarChar(800)
SET @tmpTableQuery = 'select FxWorkorder.INSTALLERSYSID, FxWorkorder.CONFIRMDATE, FxWorkorder.CONFIRMLOCALTIME from FxWorkorder
join install on FxWorkorder.INSTALLERSYSID = install.sysid
join RouteGroupWorkarea on FxWorkorder.WORKAREAGROUPSYSID = RouteGroupWorkarea.IWORKAREA_ID
join RoutingGroup on RouteGroupWorkarea.IRG_ID = RoutingGroup.IRG_IDENTITY
where FxWorkorder.SCHEDULEDDATE >= @StartDate and FxWorkorder.SCHEDULEDDATE <= @EndDate
and FxWorkorder.Jobstatus <> "Unassign"
and FxWorkorder.Jobstatus <> "Route"
and install.FOXTELCODE <> ""
and FxWorkorder.CONFIRMLOCALTIME is not null
and FxWorkorder.CONFIRMDATE <> ""
group by FxWorkorder.INSTALLERSYSID, FxWorkorder.CONFIRMDATE, FxWorkorder.CONFIRMLOCALTIME
order by FxWorkorder.INSTALLERSYSID, FxWorkorder.CONFIRMDATE, FxWorkorder.CONFIRMLOCALTIME desc '
INSERT INTO @tmpTableResults EXEC(@tmpTableQuery)
I'm creating another query to get data from another table plus only the first record from the temp table for each INSTALLERSYSID:
SELECT RoutingGroup.SDESCRIPTION, FxWorkorder.INSTALLERSYSID, FxWorkOrder.JOBSTATUS, Install.FOXTELCODE,
install.NAME, FxWorkOrder.ScheduledDate,
count(*) as TotalJobs, COUNT(CONFIRMDATE) as ConfirmedJobs,
(select TmpInstallerID, TmpConfirmDate, TmpConfirmLocalTime from @tmpTableResults where TmpInstallerID = FxWorkorder.INSTALLERSYSID)
from FxWorkorder
join install on fxworkorder.INSTALLERSYSID = install.sysid
join RouteGroupWorkarea on FxWorkOrder.WORKAREAGROUPSYSID = RouteGroupWorkarea.IWORKAREA_ID
join RoutingGroup on RouteGroupWorkarea.IRG_ID = RoutingGroup.IRG_IDENTITY
where FxWorkorder.SCHEDULEDDATE >= @StartDate and FxWorkorder.SCHEDULEDDATE <= @EndDate
and FxWorkOrder.Jobstatus <> 'Unassign'
and FxWorkOrder.Jobstatus <> 'Route'
and Install.FOXTELCODE <> ''
group by RoutingGroup.SDESCRIPTION,FxWorkOrder.INSTALLERSYSID, FxWorkOrder.JOBSTATUS, Install.FOXTELCODE,install.NAME, FxWorkOrder.ScheduledDate,FxWorkOrder.WORKAREAGROUPSYSID
When I tried to save the sp I got the error
"Only one expression can be specified in the select list when the subquery is not introduced with EXISTS."
I can't see why I got this error. But if I run the query in sql that works. Can someone see the error?
I don't know how your second query works for you 'in sql' (where is that supposed to be? Do you mean SSMS = SQL Server Management Studio?), but I'm sure it cannot possibly work in any version of SQL Server that exists at the moment. It's because of this subquery in the SELECT list:
(select TmpInstallerID, TmpConfirmDate, TmpConfirmLocalTime from @tmpTableResults where TmpInstallerID = FxWorkorder.INSTALLERSYSID)
The thing is, every expression in the SELECT clause should be scalar, but this subquery returns a row of more than one value. Even if it's only one row, it is illegal there, because it returns several columns. The subquery in that context should return no more than one value, i.e. it should be one column and the result produced should contain either no rows or just one.
You could try this query instead (although I'm not entirely sure without knowing more details about your schema):
SELECT
RoutingGroup.SDESCRIPTION,
FxWorkorder.INSTALLERSYSID,
FxWorkOrder.JOBSTATUS,
Install.FOXTELCODE,
install.NAME, FxWorkOrder.ScheduledDate,
count(*) as TotalJobs, COUNT(CONFIRMDATE) as ConfirmedJobs,
tmp.TmpInstallerID,
tmp.TmpConfirmDate,
tmp.TmpConfirmLocalTime
from FxWorkorder
join install on fxworkorder.INSTALLERSYSID = install.sysid
join RouteGroupWorkarea on FxWorkOrder.WORKAREAGROUPSYSID = RouteGroupWorkarea.IWORKAREA_ID
join RoutingGroup on RouteGroupWorkarea.IRG_ID = RoutingGroup.IRG_IDENTITY
join @tmpTableResults tmp ON tmp.TmpInstallerID = FxWorkorder.INSTALLERSYSID
where FxWorkorder.SCHEDULEDDATE >= @StartDate
and FxWorkorder.SCHEDULEDDATE <= @EndDate
and FxWorkOrder.Jobstatus <> 'Unassign'
and FxWorkOrder.Jobstatus <> 'Route'
and Install.FOXTELCODE <> ''
group by
RoutingGroup.SDESCRIPTION,
FxWorkOrder.INSTALLERSYSID,
FxWorkOrder.JOBSTATUS,
Install.FOXTELCODE,install.NAME,
FxWorkOrder.ScheduledDate,
FxWorkOrder.WORKAREAGROUPSYSID,
tmp.TmpInstallerID,
tmp.TmpConfirmDate,
tmp.TmpConfirmLocalTime
That is, I added one more join, the one to #tmpTableResults, as well as added the columns you were trying to pull to the SELECT clause and to the GROUP BY clause.
Also, if I were you I would consider using short aliases for tables, like this:
SELECT
…
wo.INSTALLERSYSID,
wo.JOBSTATUS,
…
from FxWorkorder wo
join …
That might make your queries more readable.
Given the following tables:
Orders (OrderID, OrderStatus, OrderNumber)
OrderItems(OrderItemID, OrderID, ItemID, OrderItemStatus)
Orders: 2537 records
Order Items: 1319 records
I have created indexes on
Orders(OrderStatus)
OrderItems(OrderID)
OrderItems(OrderItemStatus)
I have the following SQL statement (generated by LinqToSql) which, when executed, has:
- duration = 8789
- reads = 7809.
exec sp_executesql N'SELECT COUNT(*) AS [value]
FROM [dbo].[Orders] AS [t0]
WHERE ([t0].[OrderStatus] = @p0) OR (EXISTS(
SELECT NULL AS [EMPTY]
FROM [dbo].[OrderItems] AS [t1]
WHERE ([t1].[OrderID] = [t0].[OrderID]) AND ([t1].[OrderItemStatus] = @p1)
))',N'@p0 nvarchar(2),@p1 nvarchar(2)',@p0=N'KE',@p1=N'KE'
Is there anything else which I can do to make it faster?
Make all those nvarchar parameters varchars if the columns in the table are varchars:
))',N'@p0 varchar(2),@p1 varchar(2)',@p0=N'KE',@p1=N'KE'
See also here: sp_executesql causing my query to be very slow
Count on a single index rather than *
This might generate some better SQL.
IQueryable<int> query1 =
from oi in db.OrderItems
where oi.OrderItemStatus == theItemStatus
select oi.OrderID;
IQueryable<int> query2 =
from o in db.Orders
where o.OrderStatus == theOrderStatus
select o.OrderID;
IQueryable<int> query3 = query1.Concat(query2).Distinct();
int result = query3.Count();
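For reference, the shape of SQL this LINQ roughly corresponds to (a hedged sketch, not the exact LINQ to SQL output): a UNION of the two OrderID sets already removes duplicates, so the count can be done without the correlated EXISTS, using the same two parameters as above:
SELECT COUNT(*) AS [value]
FROM (
    SELECT [OrderID] FROM [dbo].[OrderItems] WHERE [OrderItemStatus] = @p1
    UNION   -- UNION (not UNION ALL) plays the role of Distinct()
    SELECT [OrderID] FROM [dbo].[Orders] WHERE [OrderStatus] = @p0
) AS ids;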