How to do the optimization by using udf in select query - sql-server-2008

We are running a job every day that takes around 2 hours, and we are trying to optimize its performance.
In the view we wrote 5 UDFs that look up data and return values based on a condition in the select query.
```sql
SELECT [ECX_Version_UID], [Object Type], [AMT_ECXHeader], [Description], LMUser, [Created Date],
    --UDF functions
    ISNULL(eng_kpi.udf_ProjectX_Get_LastModifiedDate(EC_Number), '') AS LastModifiedDate,
    ISNULL(eng_kpi.udf_ProjectX_Get_CEImpact(EC_Number), '') AS [Change category],
    ISNULL(eng_kpi.udf_ProjectX_Get_SPS_References(EC_Number), '') AS [SPS References],
    ISNULL(eng_kpi.udf_ProjectX_Get_ESW_References(EC_Number), '') AS [ESW References],
    ISNULL(eng_kpi.udf_ProjectX_Get_Prerequisite_ECRs(EC_Number), '') AS [Prerequisite ECRs]
FROM #Loctable
```
If I run the view without the UDF calls, it takes 20 seconds.
Does calling UDFs in a select query have a performance impact?

Yes, scalar-valued functions impact query performance negatively: SQL Server calls them once per row and cannot inline them into the execution plan. From SQL Server 2005 onwards you can use APPLY (CROSS or OUTER) and CTEs (in case you have to use loops) to replace your UDFs. This will definitely improve your query's performance.
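As an illustration, a per-row scalar UDF can often be rewritten as an OUTER APPLY over the lookup it performs. The table and column names inside the APPLY below are assumptions, since the UDF bodies are not shown:

```sql
SELECT l.[ECX_Version_UID],
       ISNULL(lm.LastModifiedDate, '') AS LastModifiedDate
FROM #Loctable AS l
OUTER APPLY (
    -- hypothetical inlined body of udf_ProjectX_Get_LastModifiedDate
    SELECT MAX(h.ModifiedDate) AS LastModifiedDate
    FROM eng_kpi.ChangeHistory AS h
    WHERE h.EC_Number = l.EC_Number
) AS lm;
```

Because the set-based form is visible to the optimizer, it can use indexes on the joined table and be parallelised, neither of which is possible with an opaque scalar UDF on SQL Server 2008.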

Related

EclipseLink: ConnectionPools and native queries

We are using Spring (EclipseLink) on MariaDB. Our SQL via the ORM results in a long-running DB query, so I need to rewrite it as a native query, which is no big deal by itself. However, the result set is limited by a LIMIT, and I need a total count of all matching records. For MariaDB I found the following solution for querying the total count.
My question:
Is it safe to run the two SQL commands separately, or should I send them combined with a UNION?
The question arises because, between my query and the SELECT FOUND_ROWS(), another query (from a request to the same microservice) might interfere and dilute the result.
If both queries are done in the same transaction, InnoDB's MVCC should guarantee that the results are not influenced by other transactions.
see: https://dev.mysql.com/doc/refman/8.0/en/innodb-multi-versioning.html
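For reference, the FOUND_ROWS() pattern in question looks like the sketch below (the table and columns are illustrative). Note that FOUND_ROWS() is scoped to the connection, so only a later statement on the same connection can overwrite it; queries from other clients do not interfere:

```sql
-- fetch one page, asking the server to also compute the unlimited total
SELECT SQL_CALC_FOUND_ROWS id, name
FROM customer
WHERE status = 'ACTIVE'
ORDER BY name
LIMIT 20 OFFSET 0;

-- total row count the previous SELECT would have produced without LIMIT
SELECT FOUND_ROWS();
```

With EclipseLink this means both native queries must run on the same JDBC connection, e.g. within one transaction, which also gives you the InnoDB consistent-read guarantee.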

improving performance of mysql stored function

I have a stored function I ported from SQL Server that is super slow for MySql. Since my application needs to support both SQL Server and MySql ( via ODBC drivers ), I sort of require this function. The stored function is normally used in the "where" clause, but it is slow even when it is only in the "select" clause.
Are there any tricks for improving the performance of functions? Maybe any helpful tools that point out potential stored function problems or slow points?
They are complex functions. So while one statement in the function is relatively quick, everything put together is slow.
Also, views are used inside these functions. Don't know how much that impacts things. The views themselves seem to run in a reasonable time.
I know I am not giving much specifics, but I am more looking for a performance tool or some high level stored function performance tips.
I did see various posts online advising people not to use functions in "where" clauses, but I am sort of stuck, since I want to go back and forth between SQL Server and MySql with the same executable, and the functions are too complex to embed directly into the SQL in my application.
Edit, based on Gruber's answer, to clarify things with an example:
Here is an example query:

```sql
SELECT COUNT(*) FROM inv USE INDEX (inv_invsku)
WHERE invsku >= 'PBM116-01'
  AND WHMSLayer_IS_LOC_ACCESSIBLE('', 'PFCSYS ', invloc) > 0
ORDER BY invsku;
```
If I get rid of the call to IS_LOC_ACCESSIBLE, it is considerably faster. IS_LOC_ACCESSIBLE is just one of 3 such functions. It has a bunch of IF statements, queries of other tables and views, etc. That is why I call it "complex", because of all that extra logic and conditional paths.
You can try using a query profiler, for instance the one included in dbForge Studio, and try to figure out exactly what the bottlenecks are.
Maybe your tables aren't properly indexed?
You can sometimes achieve good improvements in MySQL (and SQL Server) by moving things around in your queries, producing identical output but changing the internal execution path. For instance, try removing the function call from the complex WHERE clause: wrap the query in a SELECT that first filters on indexed columns, and apply the function in an outer WHERE afterwards.
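A sketch of that restructuring for the example query above; whether it actually changes the execution path depends on the optimizer version, so compare both forms with EXPLAIN:

```sql
SELECT COUNT(*)
FROM (
    -- cheap, indexed range filter first
    SELECT invloc
    FROM inv
    WHERE invsku >= 'PBM116-01'
) AS candidates
-- expensive stored function applied only to the surviving rows
WHERE WHMSLayer_IS_LOC_ACCESSIBLE('', 'PFCSYS ', invloc) > 0;
```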

SQL Query taking more time to query the SQL SERVER 2008 r2

The SQL query below takes ~47 seconds to query the SQL Server 2008 R2 database the first time,
but the second time it runs within 2-3 seconds. I don't know what the issue would be; I am new to SQL Server. Please help me. Thanks in advance.
```sql
SELECT DISTINCT(SectionName), Name, DispalyName, TypeVal
FROM ReportData
WHERE FileVersion = 1
ORDER BY SectionName;
```
In the above query, the ReportData table contains 2,154,514 rows, but the query returns only 241 rows, presumably because of the DISTINCT keyword.
The query execution plan gets cached when you run it the first time. When you run the same query a second time, therefore, it uses that plan, which removes the overhead in creating a new plan. This reduces the time to execute the query and return results.
You can read up on this on MSDN.
Secondly, SQL Server provides a buffer cache to reduce the impact of having to read from disk every time. When a query is made, the data read is stored in the buffer cache. If you run the same query again, it will read from the buffer cache rather than going to disk, which reduces the execution time considerably.
On a side note, I think 47 seconds may be a reasonable time given that you have more than 2 million records in the table, and your query does filtering and ordering as well. You might want to index the columns that you use for filtering or grouping.
e.g. you can create an index on SectionName, which you specify in your DISTINCT clause, like so:

```sql
CREATE INDEX idx_SectionName ON ReportData(SectionName);
```
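Going one step further, because the query also filters on FileVersion and returns three more columns, a covering index (a sketch mirroring the query above) would let SQL Server answer the whole query from the index without touching the base table:

```sql
-- key on the filter and sort columns; remaining output columns included
CREATE INDEX idx_ReportData_FileVersion_SectionName
    ON ReportData (FileVersion, SectionName)
    INCLUDE (Name, DispalyName, TypeVal);
```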

Any command in mysql equivalent to Oracle's autotrace for performance turning

In Oracle SQL*Plus, while doing performance testing, I would do

```sql
SET AUTOTRACE TRACEONLY
```

which displays the query plan and statistics without printing the actual results. Is there anything equivalent in MySQL?
No, there's no equivalent available in MySQL, at least not in the community edition.
MySQL does not implement the kind of "instrumentation" that Oracle has in its code; so there's no equivalent to an event 10046 trace.
You can preface your SELECT statement with the EXPLAIN keyword, and that will produce output with information about the execution plan that MySQL would use to run the statement, but that's just an estimate, and not a monitoring of the actual execution.
You can also enable the slow query log on the server, to capture SQL statements that take longer than long_query_time seconds to execute, but that really only identifies the long running queries. That would give you the SQL text, along with elapsed time and a count of rows examined.
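For completeness, the slow query log can be enabled at runtime; the threshold and file path below are illustrative values:

```sql
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 2;  -- log statements slower than 2 seconds
SET GLOBAL slow_query_log_file = '/var/log/mysql/slow.log';
```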
To get the query plan, just add EXPLAIN to the beginning of a SELECT query:

```sql
EXPLAIN SELECT * FROM table;
```

It also estimates the number of rows to be read, if that is the statistic you are after.

Stored Procedure slower than LINQ query?

I was doing some testing, and straight LINQ-to-SQL queries run at least 80% faster than calling stored procedures via a LINQ query.
In SQL Server Profiler, a generic LINQ query

```csharp
var results = from m in _dataContext.Members
              select m;
```

took only 19 milliseconds, as opposed to a stored procedure

```csharp
var results = from m in _dataContext.GetMember(userName)
              select m;
```

(GetMember being the stored procedure) doing the same query, which took 100 milliseconds.
Why is this?
Edit:
The straight LINQ looks like this in Profiler:

```sql
SELECT
    [t1].[MemberID], [t1].[Aspnetusername], [t1].[Aspnetpassword],
    [t1].[EmailAddr], [t1].[DateCreated],
    [t1].[Location], [t1].[DaimokuGoal], [t1].[PreviewImageID],
    [t1].[value] AS [LastDaimoku],
    [t1].[value2] AS [LastNotefied],
    [t1].[value3] AS [LastActivityDate], [t1].[IsActivated]
FROM
    (SELECT
        [t0].[MemberID], [t0].[Aspnetusername], [t0].[Aspnetpassword],
        [t0].[EmailAddr], [t0].[DateCreated], [t0].[Location],
        [t0].[DaimokuGoal], [t0].[PreviewImageID],
        [t0].[LastDaimoku] AS [value], [t0].[LastNotefied] AS [value2],
        [t0].[LastActivityDate] AS [value3], [t0].[IsActivated]
    FROM
        [dbo].[Members] AS [t0]) AS [t1]
WHERE
    [t1].[EmailAddr] = @p0
```
The stored procedure is this:

```sql
SELECT Members.*
FROM Members
WHERE dbo.Members.EmailAddr = @Username
```
So you see the stored procedure's query is much simpler, yet it's slower. It makes no sense to me.
1) Compare like with like. Perform exactly the same operation in both cases, rather than fetching all values in one case and doing a query in another.
2) Don't just execute the code once - do it lots of times, so the optimiser has a chance to work and to avoid one-time performance hits.
3) Use a profiler (well, one on the .NET side and one on the SQL side) to find out where the performance is actually differing.
One thing that might make it slower is the SELECT *. Usually a query is faster if columns are specified, and in particular, if the LINQ query is not using all the possible columns in the query, it will be faster than SELECT *.
I forgot: the proc could also have parameter-sniffing issues.
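If parameter sniffing is the culprit, one common workaround on SQL Server is to force a fresh plan per execution. The body below is a sketch; the real GetMember procedure is not shown, so its parameter type and column list are assumed from the profiler output above:

```sql
ALTER PROCEDURE dbo.GetMember
    @Username nvarchar(256)  -- assumed parameter type
AS
BEGIN
    SELECT MemberID, EmailAddr, DateCreated  -- assumed column list
    FROM dbo.Members
    WHERE EmailAddr = @Username
    OPTION (RECOMPILE);  -- recompile with the actual parameter value on each call
END
```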
As noted in the comments, some of this is that you are not comparing apples to apples. You are comparing two different queries, and thus getting different results.
If you want to determine performance, you should compare the SAME queries, with the same values etc.
Also, you might try using LINQPad to see the generated SQL and potentially identify what is causing the slow response.
The * will extend the time it takes to run the query by quite a bit. Also, the straight SQL from LINQ you see in profiler is bracketing ([]) all of the object names - this will trim more time off the query execution time for the LINQ query.
May I add to Jon Skeet's answer that when running code several times, please remember to clean up any query cache.
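On SQL Server that cleanup can be done with the following commands; run them only on a test server, since they flush caches for the whole instance:

```sql
CHECKPOINT;              -- flush dirty pages so the next command can evict them
DBCC DROPCLEANBUFFERS;   -- discard clean pages from the buffer pool (cold I/O again)
DBCC FREEPROCCACHE;      -- discard all cached execution plans
```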
I can suggest using EXPLAIN with both queries: it seems that MySQL creates the execution plan for a plain query and for an SP differently. For an SP it compiles the plan before substituting the parameters with their values, and therefore it does not use indexes that would be used with a hard-coded or substituted parameter. There is another question on SO about different run times for an SP and a straight query, with query plan data given for both cases.