Stored Procedure slow from Entity Framework because of date parameter - sql-server-2008

I have discovered an interesting issue where a Stored Procedure executes very slowly from Entity Framework. I have already solved the problem, but I would like to hear if anyone can tell me why the solution works.
Issue
I have a Stored Procedure GetLoginCount that receives a @date parameter of type DATETIME. When I execute this Stored Procedure directly on the database, it executes within a second. When executing through my application via Entity Framework, it takes around 45 seconds.
I have tried using WITH RECOMPILE on the Stored Procedure and cleared execution plans on the server, to ensure it hadn't cached some slow version of the execution plan that didn't use the correct index.
Fast forward after 2 days of experiments: I found that if I simply put the following at the beginning of my Stored Procedure: DECLARE @date1 DATETIME = @date and use @date1 instead, the Stored Procedure executes in 1 second, also from Entity Framework.
WHY?
I have solved my problem and that's all good and fine, but I need to know why this specific solution works.

Martin Smith gave the correct answer in a comment, but as he hasn't posted it as an answer, I'm inserting it here so I can correctly mark the question as answered:
"Assigning to a variable and using the variable disables parameter sniffing. i.e. SQL Server no longer has a specific date it can look up in the statistics to get selectivity estimates and just guesses as per OPTIMIZE FOR UNKNOWN"
Using OPTIMIZE FOR UNKNOWN indeed solves the problem.
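For reference, both workarounds can be sketched in T-SQL. The table and column names below are illustrative, not from the original procedure:

```sql
-- Variant 1: copy the parameter into a local variable; the optimizer
-- can no longer sniff a specific value and falls back to an average
-- selectivity estimate.
CREATE PROCEDURE GetLoginCount @date DATETIME
AS
BEGIN
    DECLARE @date1 DATETIME = @date;
    SELECT COUNT(*) FROM dbo.LoginLog   -- hypothetical table
    WHERE LoginDate >= @date1;
END;
GO

-- Variant 2: keep the parameter, but tell the optimizer explicitly
-- not to sniff its value.
CREATE PROCEDURE GetLoginCount2 @date DATETIME
AS
BEGIN
    SELECT COUNT(*) FROM dbo.LoginLog
    WHERE LoginDate >= @date
    OPTION (OPTIMIZE FOR UNKNOWN);
END;
GO
```

Both variants produce the same "guess, don't sniff" behavior; the hint makes the intent explicit, while the local-variable trick works on older SQL Server versions too.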

Related

How can I define separate temporary table source name in a procedure?

I'm declaring a cursor in a stored procedure with the following:
declare cur1 cursor for select * from tmp_01;
Here, my temporary table source is tmp_01.
The source table name is dynamically generated.
I'm wondering if there is a way I could define the same cursor with a different source each time the stored procedure is called.
For example,
on first run,
declare cur1 cursor for select * from tmp_01;
on second run,
declare cur1 cursor for select * from tmp_02;
The main problem I'm having is that I'm experiencing some strange behavior with the cursor when it is called with multiple queries using mysqli_multi_query, and the cause is not clear to me. When I run each query separately, everything works fine. I'm not sure whether it's caused by something like parallel query processing.
All I'm trying to achieve is, declaring a unique source name for the cursor, on each procedure call.
Can anyone please point me in the right direction to achieve this?
No, the DECLARE CURSOR statement must take a fixed SQL query as its argument, and therefore the table name must be fixed. If your table name is variable, you cannot use a cursor in a stored routine.
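That said, a common workaround (a sketch only, with illustrative names; it assumes you can afford one extra copy of the data) is to copy the dynamically named table into a temporary table with a fixed name via a prepared statement, then declare the cursor over the fixed name:

```sql
-- Sketch: cursor over a fixed-name temp table that is refilled
-- from the dynamic source on each call.
CREATE PROCEDURE process_rows(IN src_name VARCHAR(64))
BEGIN
    DECLARE done INT DEFAULT 0;
    -- Legal: the cursor's query references a fixed table name.
    DECLARE cur1 CURSOR FOR SELECT * FROM tmp_cursor_src;
    DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;

    DROP TEMPORARY TABLE IF EXISTS tmp_cursor_src;
    SET @sql = CONCAT('CREATE TEMPORARY TABLE tmp_cursor_src AS SELECT * FROM ', src_name);
    PREPARE stmt FROM @sql;
    EXECUTE stmt;
    DEALLOCATE PREPARE stmt;

    OPEN cur1;
    -- ... FETCH loop here ...
    CLOSE cur1;
END;
```

The cursor's table reference is only resolved at OPEN time, so the temporary table just has to exist by then. Note that src_name is interpolated into SQL, so it must come from a trusted source or be validated against the catalog.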
It's not clear from your question what purpose you have for using multiquery, or what is the "strange behavior" you have seen. I can guess that it has to do with the fact that each call to a stored procedure returns multiple result sets, so it gets confusing if you try to call multiple procedures in a multiquery. If you are looping over multiple result sets, it becomes unclear when one procedure is done with its result sets and the next procedure starts returning its result sets.
Regardless, I don't recommend using multiquery. There is hardly ever a good reason to use it, and there's no performance or functionality advantage to it. I recommend you just run each call individually.
For that matter, I also avoid using MySQL stored procedures. They have poor performance and scalability, the code is harder to write than in most other programming languages, there is no compiler, no debugger, no support for packages, no standard library of utility procedures, the documentation is thin, etc. I understand that in the Oracle or Microsoft SQL Server communities it is customary to write lots of stored procedures, but in MySQL I write my application logic in a client programming language such as Java, Go, or Python.

Is there a type definition similar to the Oracle %ROWTYPE in MySQL

I am developing a procedure where I need to insert all the columns of one table to another table including other calculation.
I have to fetch record by record, manipulate it and transfer it to another table.
Is there a type definition similar to the Oracle %ROWTYPE in MySQL?
Any help like example or any link will be very helpful to me.
Thanks in advance...
No, there is nothing similar. You have to declare a single variable for each column you fetch
(http://www.mssqlforums.com/fetching-entire-row-cursor-t93078.html).
See this post for an example of how to use cursors in SQL-Server: Get Multiple Values in SQL Server Cursor
EDIT: sorry Miljen Mikic, my bad - but the same applies to MySQL:
see this post: Conversion of Oracle %ROWTYPE to MySQL
MySQL has no equivalent of Oracle's %ROWTYPE. In MySQL it is necessary to declare a variable for every column.
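To illustrate the pattern (a sketch with made-up table and column names), the usual MySQL cursor loop declares one variable per fetched column:

```sql
-- Sketch: fetch record by record, manipulate, insert into the target.
CREATE PROCEDURE copy_with_calc()
BEGIN
    DECLARE done INT DEFAULT 0;
    -- One variable per column of the source query:
    DECLARE v_id INT;
    DECLARE v_amount DECIMAL(10,2);
    DECLARE cur CURSOR FOR SELECT id, amount FROM source_table;
    DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;

    OPEN cur;
    read_loop: LOOP
        FETCH cur INTO v_id, v_amount;
        IF done THEN
            LEAVE read_loop;
        END IF;
        -- Manipulate the row here, then transfer it:
        INSERT INTO target_table (id, amount) VALUES (v_id, v_amount * 1.1);
    END LOOP;
    CLOSE cur;
END;
```

Every column added to the SELECT needs a matching DECLARE and a matching slot in the FETCH ... INTO list, which is exactly the boilerplate %ROWTYPE would avoid.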
Unfortunately, there is no %ROWTYPE or similar behaviour in MySQL. Perhaps in the future, but as of 2022 this feature remains undeveloped.
So we must continue declaring tons of variables in order to manage cursor data... :`(
See the related link on the MySQL dev forum, under the High Level Architecture tab.

SQL Server sp_help : how to limit the number of output windows

Normally successful execution of
sp_help [object_name]
in SQL Server returns a total of 7 output windows (result sets) with various contents, of which I am normally interested in only 2: the one with all the column information and the one with the constraints.
Is there a way I can tell SQL Server to display only these when formulating the command?
Short answer: no, you can't do this directly, because the procedure is written to return that data, and T-SQL has no mechanism for accessing specific result sets.
Long answer: but you can easily get the same information from other procedures or directly from the system catalog:
sp_columns, sp_helpconstraint (this is actually called by sp_help) etc.
sys.columns, sys.objects etc.
There's also the option of copying the source code from sp_help and using it as the basis of a new procedure that you create yourself, although personally I would just write it myself from scratch. If you do decide to write your own stored proc, you might find this question relevant too.
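For example, assuming a hypothetical table dbo.MyTable, the two result sets of interest can be produced directly:

```sql
-- Columns (same information as sp_help's column result set):
EXEC sp_columns 'MyTable';

-- Constraints (sp_help calls this internally):
EXEC sp_helpconstraint 'dbo.MyTable';

-- Or straight from the catalog views, if you want full control
-- over the output shape:
SELECT c.name, t.name AS type_name, c.max_length, c.is_nullable
FROM sys.columns AS c
JOIN sys.types   AS t ON t.user_type_id = c.user_type_id
WHERE c.object_id = OBJECT_ID('dbo.MyTable');
```

Each statement returns a single result set, so the client only ever sees the windows it asked for.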

Calling T-SQL stored procedure from CLR stored procedure

Very brief background:
We are making use of CLR stored procedures to apply access control, using Active Directory, on query results to restrict what the end user can see accordingly. In a nutshell, this is done by removing rows from a datatable where the user does not satisfy the criteria for access to the result (document in this case).
This filtering was previously done on the client before displaying the results. SQL 2008 and a much more powerful server is the motivation for moving this access filtering off the client.
What I am wondering is: is there any performance benefit to be had from calling the original regular T-SQL stored procedure from the CLR stored procedure equivalent, instead of passing 'inline' T-SQL into the command object (which in this case is just the original T-SQL that was made into a stored procedure)? I cannot find anywhere where someone has mentioned this (in part probably because it would be very confusing as an example of CLR SPs, I guess :-) ).
It seems to me that you might, as the T-SQL stored proc has already been optimised and compiled ?
Is anyone able to confirm this for me ?
Hope I've been clear enough. Thanks very much,
Colm.
If your SQL CLR stored procedure does a specific query properly (nicely parametrized) and executes it fairly frequently, then that T-SQL query will be just run once through the whole "determine the optimal execution plan" sequence and then stored in the plan cache of your SQL Server (and not evicted from it any faster than a similar T-SQL stored procedure).
As such, it will be just as "pre-compiled" as your original T-SQL stored procedure. From that point of view, I don't see any benefit.
If you could tweak your SQL statement from within your SQL CLR procedure in such a way that it would actually not even include those rows into the result set that you'll toss out in the end anyway, then your SQL-CLR stored procedure executing a properly parametrized T-SQL query might even be a bit faster than having a standard T-SQL stored procedure return too much data from which you need to exclude some rows again.
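As a sketch of that last point (every name here is hypothetical, since the original schema isn't shown), the idea is to join against the set of permitted documents instead of filtering the DataTable afterwards:

```sql
-- Hypothetical: AllowedDocuments holds the documents the current
-- user may see, e.g. materialized from the Active Directory check.
-- Excluded rows are never read into the CLR code at all.
SELECT d.DocumentId, d.Title
FROM dbo.Documents AS d
JOIN dbo.AllowedDocuments AS a
  ON a.DocumentId = d.DocumentId
WHERE a.UserSid = @userSid;   -- parametrized, so the plan is cached
```

The saving is not in compilation but in data volume: fewer rows cross the boundary between the engine and the CLR procedure.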

MySQL queries are fast when run directly but really slow when run as stored proc

I've been trying to figure out what's wrong with a set of queries I've got and I'm just confused at this point.
It's supposed to be in a stored procedure which gets called by a GUI application.
There's only one "tiny" problem: it's first a simple UPDATE, then an INSERT using a SELECT with a subselect, and finally another UPDATE. Running these queries together by hand, I get a total execution time of 0.057s. Not too shabby.
Now, I tried creating a stored procedure with these queries in it and five input variables. On the first attempt the procedure took 47.096s to run, with subsequent calls showing similar execution times (35 to 50s). Running the individual queries from MySQL Workbench still shows execution times of less than 0.1s.
There really isn't anything fancy about these queries, so why is the stored procedure taking an eternity to execute while the queries by themselves only take a fraction of a second? Is there some kind of MySQL peculiarity that I'm missing here?
Additional testing results:
It seems that if I run the queries in MySQL Workbench but use variables instead of putting the variables' values directly in the queries, they run just as slowly as the stored procedure. So I tried changing the stored procedure to use static values instead of variables, and suddenly it ran blazingly fast. Apparently, for some reason, using a variable makes a query run extremely slowly (for example, the first UPDATE query goes from approximately 0.98s with three variables to 0.04-0.05s when I use the variables' values directly in the query, regardless of whether it's in the stored procedure or run directly).
So, the problem isn't the stored procedure, it's something related to my use of variables (which is unavoidable).
I had the same problem. After researching for a while, I found out the problem was the collation issue while MySQL was comparing text.
TL;DR: the table was created in one collation while MySQL "thought" the variable was in another collation. Therefore, MySQL cannot use the index intended for the query.
In my case, the table was created with (latin1, latin1_swedish_ci) collation. To make MySQL to use the index, I had to change the where clause in the stored procedure from
UPDATE ... WHERE mycolumn = myvariable
to
UPDATE ... WHERE mycolumn = CONVERT(myvariable USING latin1) COLLATE latin1_swedish_ci
After the change, the stored procedure looked something like this:
CREATE PROCEDURE foo.`bar`()
BEGIN
    UPDATE mytable SET mycolumn1 = variable1
    WHERE mycolumn2 =
        CONVERT(variable2 USING latin1) COLLATE latin1_swedish_ci;
END;
where (latin1, latin1_swedish_ci) is the same collation that my table was created with.
To check whether MySQL uses the index or not, you can change the stored procedure to run an EXPLAIN statement as follows:
CREATE PROCEDURE foo.`bar`()
BEGIN
    EXPLAIN SELECT * FROM mytable WHERE mycolumn2 = variable2;
END;
In my case, the explain result showed that no index was used during the execution of the query.
Note that MySQL may use the index when you run the query alone, but still won't use it for the same query inside a stored procedure, which may be because MySQL somehow sees the variable as having another collation.
More information on the collation issue can be found here:
http://lowleveldesign.wordpress.com/2013/07/19/diagnosing-collation-issue-mysql-stored-procedure/
Back up link:
http://www.codeproject.com/Articles/623272/Diagnosing-a-collation-issue-in-a-MySQL-stored-pro
I had a similar problem: running a MySQL routine was horribly slow, but a colleague helped me.
The problem was that AUTOCOMMIT was true, so every INSERT INTO and SELECT was creating a complete transaction.
Then I ran my routine with
SET autocommit=0;
at the beginning and
SET autocommit=1;
at the end. The performance went from nearly 500s to 4s.
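The pattern described above, sketched (the routine name is illustrative):

```sql
SET autocommit = 0;      -- stop committing after every statement
CALL my_slow_routine();  -- hypothetical routine doing many INSERTs/SELECTs
COMMIT;                  -- flush all the work as one transaction
SET autocommit = 1;      -- restore the default behavior
```

The explicit COMMIT is worth having: without it, re-enabling autocommit is what implicitly commits the pending transaction, which is easy to overlook.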
Since I didn't want to waste too much time trying to figure out why using variables in my stored procedures made them extremely slow, I decided to employ a fix some people would consider quite ugly: I simply executed each query directly from the data access layer of my application. Not the prettiest way to do it (since a lot of other things in this app use stored procedures), but it works, and now the user won't have to wait 40+ seconds for certain actions; they happen almost instantly.
So, not really a solution or explanation of what was happening, but at least it works.
Upvote for a very interesting and important question. I found this discussion of some of the reasons that a stored procedure might be slow. I'd be interested to see readers' reactions to it.
The main recommendation that I took from the interchange: it helps to add more indexes.
Something we ran across today that makes procedures slow, even when they run very fast as direct queries, is having parameter (or, presumably, variable) names that are the same as column names. The short version: don't use a parameter name that is the same as one of the columns in the query in which it will be used. For example, if you have a field called account_id and a parameter named the same, change the parameter to something like in_account_id and your run time can go from multiple seconds to hundredths of a second.
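A minimal sketch of the pitfall (table and column names are made up):

```sql
-- Ambiguous: the parameter shares the column's name, so the WHERE
-- clause does not compare against the column value the way you
-- intend, and the index on account_id goes unused.
CREATE PROCEDURE get_account(IN account_id INT)
BEGIN
    SELECT * FROM accounts WHERE account_id = account_id;
END;

-- Unambiguous: prefix the parameter name, as the answer suggests.
CREATE PROCEDURE get_account2(IN in_account_id INT)
BEGIN
    SELECT * FROM accounts WHERE account_id = in_account_id;
END;
```

A naming convention like in_/out_ prefixes for all routine parameters avoids the collision entirely.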