Functions within Stored Procedure - SQL 2008 - sql-server-2008

I have a SQL query which returns 30,000+ records with 15 columns. I am passing an NVARCHAR(50) parameter to the stored procedure.
At the moment I am using a stored procedure to get the data from the database.
As there are 30,000+ records to be fetched and it's taking time, what would you suggest?
Would I get any performance benefit if I used functions within the stored procedure (to get individual columns based on the parameter I am passing)?
Please let me know if you need more info on the same.
Thank you

I wouldn't use functions unless there is no other way to get your data.
Since SQL Server 2005 you have extra functionality in stored procedures, such as the WITH (common table expression) and CROSS APPLY clauses, which ease restrictions from earlier versions of SQL Server that previously had to be worked around with UDFs.
In terms of performance, the stored procedure will generally be quicker, but it depends on how optimized your query is and/or how the tables have been designed. Maybe you could give us an example of what you are trying to achieve.
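For illustration, here is a minimal sketch of how CROSS APPLY can replace a per-row scalar UDF (the dbo.Orders and dbo.OrderStatusHistory tables and the fn_LatestStatus UDF are hypothetical):

-- instead of calling a scalar UDF per row:
-- SELECT o.OrderID, dbo.fn_LatestStatus(o.OrderID) FROM dbo.Orders AS o;
-- inline the lookup with CROSS APPLY so the optimizer can plan it as a join
SELECT o.OrderID, s.StatusCode
FROM dbo.Orders AS o
CROSS APPLY (
    SELECT TOP 1 h.StatusCode
    FROM dbo.OrderStatusHistory AS h
    WHERE h.OrderID = o.OrderID
    ORDER BY h.ChangedAt DESC
) AS s;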

Functions would probably not be the way to go. 30,000 rows isn't that many, depending on how complex the query is. You would be better off focusing on optimising the SQL in the proc, or on checking that your indexing is set up correctly.
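If you go the indexing route, the missing-index DMVs available since SQL Server 2005 can surface candidate indexes; a quick sketch (treat the output as hints, not a prescription):

SELECT d.statement AS table_name,
       d.equality_columns,
       d.inequality_columns,
       d.included_columns,
       s.avg_user_impact,
       s.user_seeks
FROM sys.dm_db_missing_index_details AS d
JOIN sys.dm_db_missing_index_groups AS g
    ON g.index_handle = d.index_handle
JOIN sys.dm_db_missing_index_group_stats AS s
    ON s.group_handle = g.index_group_handle
ORDER BY s.avg_user_impact DESC;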

Related

improving performance of mysql stored function

I have a stored function I ported from SQL Server that is super slow for MySQL. Since my application needs to support both SQL Server and MySQL (via ODBC drivers), I sort of require this function. The stored function is normally used in the "where" clause, but it is slow even when it is only in the "select" clause.
Are there any tricks for improving the performance of functions? Maybe any helpful tools that point out potential stored function problems or slow points?
They are complex functions. So while one statement in the function is relatively quick, everything put together is slow.
Also, views are used inside these functions. Don't know how much that impacts things. The views themselves seem to run in a reasonable time.
I know I am not giving much specifics, but I am more looking for a performance tool or some high level stored function performance tips.
I did see various posts online advising people not to use functions in "where" clauses, but I am sort of stuck since I want to go back and forth between SQL Server and MySQL with the same executable, and the functions are too complex to embed directly into the SQL in my application.
Edit, based on Gruber's answer, to clarify things with an example:
Here is an example query:
SELECT count(*) FROM inv USE INDEX(inv_invsku) WHERE invsku >= 'PBM116-01' AND WHMSLayer_IS_LOC_ACCESSIBLE( '', 'PFCSYS ', invloc ) > 0 ORDER BY invsku;
If I get rid of the call to IS_LOC_ACCESSIBLE, it is considerably faster. IS_LOC_ACCESSIBLE is just one of 3 such functions. It has a bunch of IF statements, queries of other tables and views, etc. That is why I call it "complex", because of all that extra logic and conditional paths.
You can try using a query profiler, for instance the one included in dbForge Studio, and try to figure out exactly what the bottlenecks are.
Maybe your tables aren't properly indexed?
You can sometimes achieve good improvements in MySQL (and SQL Server) by moving things around in your queries, creating identical output but changing the internal execution path. For instance, try removing the complex WHERE clause and instead producing the complex code as output from a wrapped SELECT statement, which you can apply a WHERE to subsequently.
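Applied to the example query above, that rewrite might look like this (a sketch; whether it helps depends on how MySQL materializes the derived table):

SELECT COUNT(*)
FROM (
    -- compute the function's output in the wrapped SELECT
    SELECT WHMSLayer_IS_LOC_ACCESSIBLE('', 'PFCSYS ', invloc) AS accessible
    FROM inv
    WHERE invsku >= 'PBM116-01'
) AS sub
WHERE sub.accessible > 0;  -- filter on it subsequently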

How to optimise paging in SQL Server when you order by a non indexed field

I have read and followed the instructions here:
What is an efficient method of paging through very large result sets in SQL Server 2005? What becomes clear is that I'm ordering by a non-indexed field; this is because it's a field generated from calculations, and it does not exist in the database.
I'm using the row_number() technique and it works pretty well. My problem is that my stored procedure does some pretty big joins on a fair bit of data, and I'm ordering by the results of these joins. I realise that each time I page it has to run the entire query again (to ensure correct ordering).
What I would like (without pulling the entire result set into the client code and paging there) is that once SQL Server has the whole result set, it could then page through that.
Is there any built-in way to achieve that? I thought that views might do this but I can't find info on it.
EDIT: Indexed views will not work for me as I need to pass in parameters. Anyone got any more ideas? I think either I have to use memcached or have a service that builds indexes in the background. I just wish there was a way for SQL Server to get that table and hold onto it whilst it is paged...
I am not very familiar with paging, and without knowing the logic behind your procedure, I can only guess you'd benefit from indexed views or #temporary tables with indexes.
You mentioned you were ordering by a non-indexed field that is generated. That, combined with the fact that your procedure runs the entire query every time, leads me to believe you could make that query an indexed view. You'd get better performance from accessing it multiple times, and it would also enable you to add an index on the field you're ordering by.
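A minimal sketch of that idea (dbo.Orders and the computed column are hypothetical, and indexed views carry further restrictions, e.g. SCHEMABINDING and deterministic expressions):

CREATE VIEW dbo.vOrdersComputed
WITH SCHEMABINDING
AS
SELECT o.OrderID,
       o.Qty * o.UnitPrice AS ComputedTotal  -- the generated field you order by
FROM dbo.Orders AS o;
GO
-- an indexed view needs a unique clustered index first...
CREATE UNIQUE CLUSTERED INDEX IX_vOrdersComputed ON dbo.vOrdersComputed (OrderID);
-- ...after which the computed field you order by can be indexed too
CREATE INDEX IX_vOrdersComputed_Total ON dbo.vOrdersComputed (ComputedTotal);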
You could also use a #temporary table if it somehow stays alive during your paging requests... Insert the dataset you are working with into a #temporary table; you can then create an index on the generated column with T-SQL.
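A sketch along those lines (table/column names and the paging variables are placeholders):

-- materialize the expensive joins once
SELECT o.OrderID,
       o.CustomerID,
       o.Qty * o.UnitPrice AS ComputedTotal  -- the generated field
INTO #PagedResults
FROM dbo.Orders AS o;  -- your big joins would go here

-- index the computed column you order by
CREATE INDEX IX_PagedResults_Total ON #PagedResults (ComputedTotal);

-- subsequent pages now read from the indexed temp table
DECLARE @StartRow INT, @EndRow INT;
SET @StartRow = 1; SET @EndRow = 50;
SELECT *
FROM (
    SELECT ROW_NUMBER() OVER (ORDER BY ComputedTotal) AS rn, *
    FROM #PagedResults
) AS t
WHERE t.rn BETWEEN @StartRow AND @EndRow;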
Indexed Views for SQL Server 2005: http://technet.microsoft.com/en-us/library/cc917715.aspx

Mysql query vs stored procedure performance

I tried a query on MySQL; the query called other functions.
Then I added the very same query to a stored procedure and executed the procedure on MySQL.
The execution time of the normal query was about 1 second less than that of the procedure.
Wasn't it supposed to be the opposite, because procedures get cached?
Please explain to me if I'm missing something here. I appreciate your knowledge sharing a lot.
Regards
A stored procedure is parsed and compiled only once, when it's first created in the database, while a text query needs to be parsed and compiled every time it's executed. That is the difference, and it's tiny for a limited number of calls.
If you are comparing just a single query, the plain query is the better option, but for large queries you should use stored procedures.
I don't know about MySQL, but for other database engines like Oracle, queries may be cached and linked to the connection once compiled. Even the data may be cached, in fact.
Did you try launching the query and the stored procedure several times each? That is essential for a reliable estimate of the performance.
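For instance, in MySQL of that era you could use the (now deprecated) profiling facility to time repeated runs of each form; the orders table and count_open_orders procedure here are hypothetical:

SET profiling = 1;
-- run each form several times so caches and the optimizer warm up
SELECT COUNT(*) FROM orders WHERE status = 'OPEN';
SELECT COUNT(*) FROM orders WHERE status = 'OPEN';
CALL count_open_orders();
CALL count_open_orders();
SHOW PROFILES;  -- lists the duration of each statement above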

de-normalization, weighted aggregates for updated tables in MySQL

This time I have a more general question: should I use multiple views rather than stored procedures for weighted aggregation of data, if the original data is updated periodically?
Basically, I have a local MySQL database that is updated periodically by importing the same kind of data (tables) from a bigger transaction database.
The local database is used for statistical analysis. Thus I de-normalize (basically, aggregate) the data locally for use with statistical software packages. So far I have used stored procedures because I felt they were easier to handle (and arranged more clearly) when weighting schemes (basically, other tables containing weights that are multiplied with variables) came into play.
The disadvantage of stored procedures, though, is that I have to run all of them again when the tables are populated with new data. Obviously I am not a DBA... so don't shy away from stating the obvious :) What's the best approach to handle this kind of scenario? SPs or views? Or something completely different?
Thanks for any suggestions in advance!
It depends (that's the generic answer to any "general" questions, isn't it? :) ). You need to evaluate the tradeoffs to see what the best solution is for your needs.
Views are basically just query rewriting (in MySQL), so using a view will perform the aggregation/denormalization every time the query is run. That may make your queries slower than you would like. Also, if your procedures are really complicated, it may not be practical to try to put that logic into a view.
Stored procedures do the work once, so queries will be faster, but then your updates won't show up automatically. So I think the answer depends on how often the data changes, how often queries are run, and how important the performance of the queries is.
As for alternative suggestions, you could also run your stored procedures using events, if your data updates are regular and you are just trying to save yourself the manual task of running the procedures.
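For example, a sketch (the procedure name is hypothetical, and the event scheduler must be enabled):

SET GLOBAL event_scheduler = ON;

CREATE EVENT refresh_aggregates
ON SCHEDULE EVERY 1 DAY
DO CALL rebuild_weighted_aggregates();  -- your denormalization procedure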
Another option is to have denormalization/aggregation tables that are updated by triggers. As you update the data in the source table, the triggers automatically keep the aggregate tables current.
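A minimal sketch, assuming a hypothetical sales source table with amount and weight columns:

CREATE TABLE sales_agg (
    product_id INT PRIMARY KEY,
    weighted_total DECIMAL(14,2) NOT NULL DEFAULT 0
);

DELIMITER //
CREATE TRIGGER sales_ai AFTER INSERT ON sales
FOR EACH ROW
BEGIN
    -- fold the new row's weighted value into the running aggregate
    INSERT INTO sales_agg (product_id, weighted_total)
    VALUES (NEW.product_id, NEW.amount * NEW.weight)
    ON DUPLICATE KEY UPDATE
        weighted_total = weighted_total + NEW.amount * NEW.weight;
END//
DELIMITER ;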
Here is a link to documentation for stored procedures, views, triggers, and events.

Stored Procedure slower than LINQ query?

I was doing some testing, and straight LINQ-to-SQL queries run at least 80% faster than calling stored procedures via a LINQ query.
In SQL Server Profiler, a generic LINQ query
var results = from m in _dataContext.Members
              select m;
took only 19 milliseconds as opposed to a stored procedure
var results = from m in _dataContext.GetMember(userName)
              select m;
(GetMember being the stored procedure) doing the same query, which took 100 milliseconds.
Why is this?
Edit:
The straight LINQ looks like this in Profiler
SELECT
[t1].[MemberID], [t1].[Aspnetusername], [t1].[Aspnetpassword],
[t1].[EmailAddr], [t1].[DateCreated],
[t1].[Location], [t1].[DaimokuGoal], [t1].[PreviewImageID],
[t1].[value] AS [LastDaimoku],
[t1].[value2] AS [LastNotefied],
[t1].[value3] AS [LastActivityDate], [t1].[IsActivated]
FROM
(SELECT
[t0].[MemberID], [t0].[Aspnetusername], [t0].[Aspnetpassword],
[t0].[EmailAddr], [t0].[DateCreated], [t0].[Location],
[t0].[DaimokuGoal], [t0].[PreviewImageID],
[t0].[LastDaimoku] AS [value], [t0].[LastNotefied] AS [value2],
[t0].[LastActivityDate] AS [value3], [t0].[IsActivated]
FROM
[dbo].[Members] AS [t0]) AS [t1]
WHERE
[t1].[EmailAddr] = @p0
The stored procedure is this
SELECT Members.*
FROM Members
WHERE dbo.Members.EmailAddr = @Username
So you see the stored procedure's query is much simpler, and yet it's slower... makes no sense to me.
1) Compare like with like. Perform exactly the same operation in both cases, rather than fetching all values in one case and doing a query in another.
2) Don't just execute the code once - do it lots of times, so the optimiser has a chance to work and to avoid one-time performance hits.
3) Use a profiler (well, one on the .NET side and one on the SQL side) to find out where the performance is actually differing.
One thing that might make it slower is the SELECT *. Usually a query is faster if columns are specified, and in particular, if the LINQ query is not using all the possible columns in the query, it will be faster than SELECT *.
I forgot: the proc could also have parameter-sniffing issues.
As noted in the comments, some of this is because you are not comparing apples to apples. You are comparing two different queries and thus getting different results.
If you want to gauge performance, you would want to compare the SAME queries, with the same values, etc.
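A sketch of such a like-for-like comparison in T-SQL (the email value is a placeholder and the parameter type is assumed):

-- what LINQ-to-SQL effectively sends: parameterized text
EXEC sp_executesql
    N'SELECT * FROM dbo.Members WHERE EmailAddr = @p0',
    N'@p0 nvarchar(50)',
    @p0 = N'someone@example.com';

-- the stored-procedure form with the same value
EXEC dbo.GetMember @Username = N'someone@example.com';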
Also, you might try using LINQPad to see the generated SQL and potentially identify areas that are causing the slow response.
The * will extend the time it takes to run the query by quite a bit. Also, the straight SQL from LINQ you see in Profiler brackets ([]) all of the object names; this trims a little more time off the query execution for the LINQ query.
May I add to Jon Skeet's answer that when running the code several times, please remember to clear any query cache.
I can suggest using EXPLAIN with both queries: it seems that MySQL creates the execution plan for a plain query and for an SP differently. For an SP it compiles before substituting parameters with their values, and therefore it does not use the indexes that would be used in the case of a hard-coded or substituted parameter. Here is another question from SO about different run times for an SP and a straight query, with query plan data given for both cases.
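A sketch of that comparison (MySQL, per this answer; the value is a placeholder):

-- literal value: the optimizer can pick an index on EmailAddr
EXPLAIN SELECT * FROM Members WHERE EmailAddr = 'someone@example.com';

-- user variable, approximating the late parameter substitution inside a procedure;
-- the plan may differ and skip the index
SET @username = 'someone@example.com';
EXPLAIN SELECT * FROM Members WHERE EmailAddr = @username;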