Find declare statement SQL Profiler - sql-server-2008

I have multiple EXEC statements being run, but the DECLARE is nowhere to be found. Below is what I'm seeing, and I'm seeing thousands of these.
RPC:Completed | exec sp_execute 14,69 | TDS0X00000000030000001400730070005F0065007800650063007500740065001400000003000600380069006E0074000E0000001400000003000600380069006E
I do see DECLARE statements for other stored procedures, just not for a few, and those are the ones I really need to analyze.
Thank you.

In addition to RPC:Completed (or maybe instead of it), you should probably be capturing:
SP:Starting
SP:Completed
And if you want to see individual statements within those procedures, or the surrounding batches:
SP:StmtCompleted
SQL:StmtCompleted
SQL:BatchCompleted
That said, depending on how the statement is getting called and passed in, you may not always be able to catch every individual statement.
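As an aside, an RPC like exec sp_execute 14,69 usually means the client prepared the statement earlier and is now executing it by handle, so the statement text only appears with the preparing call. A hypothetical sketch of the pattern a driver generates (the handle, parameter type, and query text here are made up for illustration):

declare @handle int;
-- The statement text is only visible here, in the preparing call:
exec sp_prepare @handle output, N'@P1 int', N'select SomeColumn from SomeTable where Id = @P1';
-- Later executions reference only the handle, which is all the trace shows:
exec sp_execute @handle, 69;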

How can I define separate temporary table source name in a procedure?

I'm declaring a cursor in a stored procedure with the following:
declare cur1 cursor for select * from tmp_01;
Here, my temporary table source is tmp_01.
The source table name is dynamically generated.
I'm wondering if there is a way I could define the same cursor with a different source each time the stored procedure is called.
For example,
on first run,
declare cur1 cursor for select * from tmp_01;
on second run,
declare cur1 cursor for select * from tmp_02;
The main problem I'm having is some strange behavior with the cursor, which is not clear to me, when the procedure is called among multiple queries using mysqli_multi_query; when I run each query separately, everything works fine. I'm not sure whether it's caused by something like parallel query processing.
All I'm trying to achieve is to declare a unique source name for the cursor on each procedure call.
Can anyone please point me in the right direction?
No, the DECLARE CURSOR statement must take a fixed SQL query as its argument, and therefore the table name must be fixed. If your table name is variable, you cannot use a cursor in a stored routine.
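For contrast, ordinary statements inside a MySQL routine can use a dynamic table name via PREPARE/EXECUTE, but DECLARE CURSOR is not among the statements that can be prepared, so the pattern below (a minimal sketch, assuming tbl_name is a procedure parameter) does not extend to cursors:

SET @sql = CONCAT('SELECT * FROM ', tbl_name);
PREPARE stmt FROM @sql;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;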
It's not clear from your question what purpose you have for using multiquery, or what is the "strange behavior" you have seen. I can guess that it has to do with the fact that each call to a stored procedure returns multiple result sets, so it gets confusing if you try to call multiple procedures in a multiquery. If you are looping over multiple result sets, it becomes unclear when one procedure is done with its result sets and the next procedure starts returning its result sets.
Regardless, I don't recommend using multiquery in any case. There is hardly ever a good reason to use it, and there's no performance or functionality advantage to it. I recommend you just run each call individually and not use multiquery.
For that matter, I also avoid using MySQL stored procedures. They have poor performance and scalability, the code is harder to write than in other programming languages, there is no compiler, no debugger, no support for packages, no standard library of utility procedures, the documentation is thin, etc. I understand that in the Oracle or Microsoft SQL Server communities it is customary to write lots of stored procedures, but in MySQL I write my application logic in a client programming language such as Java, Go, or Python.

mySQL: Stored procedures are more secure than queries?

I have a website using a MySQL database and I want to do common tasks like adding users, modifying their info, etc. I can do it perfectly with regular queries. I'm using prepared statements to improve security.
Should I use stored procedures to improve security further, or will the results be the same? I thought that by using stored procedures I could restrict the direct interaction a possible attacker could have with the real query. Am I wrong?
I guess it would depend on what language you're using. Using a prepared statement with a SQL string that contains all of the SQL to be executed, or using a prepared statement with a SQL string that executes a stored procedure, is going to be about equivalent in most languages. The language should take care of the security around the prepared statement. C#, for example, will validate the input, so SQL injection vulnerabilities are greatly reduced unless your prepared statement is written so poorly that feeding it bad (but expected, i.e., 1 vs 0) values will dramatically change the result set. Other languages may not provide the same level of validation, though, so there may be an advantage depending on exactly what your stored proc looks like.
Using a stored procedure is better for maintainability, but there are not many scenarios where it's going to provide any sort of change in security level, assuming the program is properly designed to begin with. The only example I can think of off the top of my head would be a stored procedure that takes raw SQL strings from user input and then executes that SQL against the db. This is actually less secure than using a prepared statement unless you went to great lengths to validate the acceptable input, in which case you'd better have a really good reason for using such a stored proc in the first place.
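To make that concrete, here is a hypothetical MySQL sketch of the anti-pattern just described (the procedure name and shape are invented for illustration); a procedure like this reintroduces exactly the injection risk prepared statements are meant to remove:

-- Illustrative only: executes whatever SQL the caller passes in.
CREATE PROCEDURE run_raw(IN p_sql TEXT)
BEGIN
  SET @q = p_sql;          -- user-supplied text becomes executable SQL
  PREPARE stmt FROM @q;
  EXECUTE stmt;
  DEALLOCATE PREPARE stmt;
END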
Basically, what I'm saying boils down to the fact that you're going to need to read your language's documentation about prepared statements, determine what vulnerabilities, if any, they may have, and whether or not those can be eliminated in your specific scenario by switching to a prepared statement that calls a stored procedure instead of executing a SQL query directly.
The results would be the same (assuming that you set your stored procedure up right).
There appears to be a pretty good write-up on it here, though I would never suggest you try to escape user input yourself. (They mention this as option 3.)

Alternative to use cursors in SQL Server stored procedure

It's not that I'm having trouble executing the cursors enclosed in my stored procedure; I want to find a more efficient way to achieve the same thing.
Here it goes.
Stored procedure: RawFeed.sql (runs every 5 minutes)
Set @GetATM = Cursor For
    Select DeviceCode, ReceivedOn
    From RawStatusFeed
    Where CRWR=2 AND Processed=0
    Order By ReceivedOn Desc

Open @GetATM
Fetch Next From @GetATM Into @ATM, @ReceivedOn
While @@FETCH_STATUS = 0
Begin
    Set @RawFeed=@ATM+' '+Convert(VarChar,@ReceivedOn,121)+' '+'002307'+' '+@ATM+' : Card Reader/Writer - FAULTY '
    Exec usp_pushRawDataAndProcess 1,@RawFeed
    Fetch Next From @GetATM Into @ATM, @ReceivedOn
End

Set @GetATM = Cursor For
    Select DeviceCode, ReceivedOn
    From RawStatusFeed
    Where CRWR=0 AND Processed=0
    Order By ReceivedOn Desc

Open @GetATM
Fetch Next From @GetATM Into @ATM, @ReceivedOn
While @@FETCH_STATUS = 0
Begin
    Set @RawFeed=@ATM+' '+Convert(VarChar,@ReceivedOn,121)+' '+'002222'+' '+@ATM+' : Card Reader/Writer - OK '
    Exec usp_pushRawDataAndProcess 1,@RawFeed
    Fetch Next From @GetATM Into @ATM, @ReceivedOn
End
Likewise I have 10 more SET statements, which differ in the WHERE condition parameter and the string assigned to the @RawFeed variable.
For each row fetched, I execute another stored procedure on that row.
My question is
Is there any better way to achieve the same without using cursors?
The variable @RawFeed contains a string like the ones below, which is the input to the usp_pushRawDataAndProcess stored procedure; that procedure splits the string and performs operations like INSERT, UPDATE, and DELETE on some tables.
We just cannot process more than one string per call to usp_pushRawDataAndProcess.
NMAAO226 2012-09-22 16:10:06.123 002073 NMAAO226 : Journal Printer - OK
WMUAO485 2012-09-22 16:10:06.123 002222 WMUAO485 : Card Reader/Writer - OK
SQL Server, like other relational databases, is designed to work on sets of data, and is pretty good at it.
Databases are not good at procedural code, where all the opportunities for optimization are obscured from the query-processing engine.
Using RawStatusFeed to store a proprietary request string and then processing those strings one by one is going to be inefficient as database code. It might make the inserts very fast for the client, and that might be very important, but it comes at a cost.
If you break the request string down on insert, or better still, before insert via a specialised SP call, then you can store the required changes in some intermediate relational model, rather than a list of strings. Then, every so often, you can process all the changes at once with one call to a stored procedure. Admittedly, it would probably make sense for that stored procedure to contain several query statements. However, with the right indexes and statistics the query processing engine will able to make an efficient execution plan for this new stored procedure.
The exact details of how this should be achieved depend on the exact details of the RawStatusFeed table and the implementation of usp_pushRawDataAndProcess. Although this seems like a rewrite, I don't imagine the DeviceCode column is that complicated.
So, the short answer is certainly yes, but I'd need to know what usp_pushRawDataAndProcess does in detail.
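As a rough sketch of the set-based direction (hypothetical, reusing the question's columns; the CASE branches stand in for the dozen WHERE/string variants), the per-row string building can be done in a single query, which a set-based replacement for usp_pushRawDataAndProcess could then consume in one call:

-- Build every feed string in one pass instead of one cursor iteration per row.
SELECT DeviceCode + ' ' + CONVERT(varchar(23), ReceivedOn, 121) + ' '
     + CASE CRWR WHEN 2 THEN '002307' WHEN 0 THEN '002222' END + ' '
     + DeviceCode + ' : Card Reader/Writer - '
     + CASE CRWR WHEN 2 THEN 'FAULTY' WHEN 0 THEN 'OK' END AS RawFeed
FROM RawStatusFeed
WHERE Processed = 0 AND CRWR IN (0, 2)
ORDER BY ReceivedOn DESC;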
The signature of the usp_pushRawDataAndProcess SP is acting as a bottleneck.
If you can't change usp_pushRawDataAndProcess and won't create a set-based alternative, then you are stuck with the bottleneck.
So, rather than removing the bottleneck, you could take another tack: why not run more concurrent instances of the bottleneck to feed the data through?
If you are using SQL Server 2005 or above, you could use some CLR code to run numerous instances of usp_pushRawDataAndProcess in parallel.
Here is a link to a project I used before to do something similar.
I had always disliked cursors because of their slow performance. However, I found I didn't fully understand the different types of cursors and that in certain instances, cursors are a viable solution.
When you have a business problem that can only be solved by processing one row at a time, then a cursor is appropriate.
So to improve performance with the cursor, change the type of cursor you are using. Something I didn't know was that if you don't specify which type of cursor you are declaring, you get the dynamic optimistic type by default, which is the slowest for performance because it's doing lots of work under the hood. By declaring your cursor as a different type, say a static cursor, you get very good performance.
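For illustration, a minimal sketch of such a declaration against the question's query (the option list is an example, not a prescription):

Declare GetATM Cursor Local Forward_Only Static Read_Only For
    Select DeviceCode, ReceivedOn
    From RawStatusFeed
    Where CRWR=2 AND Processed=0
    Order By ReceivedOn Desc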
See these articles for a fuller explanation:
The Truth About Cursors: Part I
The Truth About Cursors: Part II
The Truth About Cursors: Part III
I think the biggest con against cursors is performance; however, not laying out a task in a set-based approach would probably rank second, and third would be readability and layout, since cursor-based tasks usually don't have a lot of helpful comments.
The best alternative to a cursor I've found is to rework the logic to take a set based approach.
SQL Server is optimized to run the set-based approach. You write the query to return a result set of data, like a join on tables for example, but the SQL Server execution engine determines which join to use: Merge Join, Nested Loop Join, or Hash Join. SQL Server picks the best joining algorithm based on the participating columns, data volume, indexing structure, and the set of values in those columns, so it is generally better for performance than the procedural cursor approach.
Here is an article on Cursors and how to avoid them. It also discusses the alternatives to cursors.
Alternatives to CURSOR in SQL Server:
1. While loop
2. Recursive CTE
Alternatives to CURSOR in SQL Server:
1. Use a temp table; create an ID column as an identity column.
2. Use a while loop to perform the operation (a sketch follows below).
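A minimal sketch of that temp-table/WHILE pattern against the question's table (column sizes are guesses, and only one of the twelve variants is shown):

CREATE TABLE #Feed (ID int IDENTITY(1,1) PRIMARY KEY, DeviceCode varchar(50), ReceivedOn datetime);

INSERT INTO #Feed (DeviceCode, ReceivedOn)
SELECT DeviceCode, ReceivedOn
FROM RawStatusFeed
WHERE CRWR = 2 AND Processed = 0;

DECLARE @i int, @max int, @ATM varchar(50), @ReceivedOn datetime, @RawFeed varchar(200);
SELECT @i = 1, @max = MAX(ID) FROM #Feed;

WHILE @i <= @max
BEGIN
    SELECT @ATM = DeviceCode, @ReceivedOn = ReceivedOn FROM #Feed WHERE ID = @i;
    SET @RawFeed = @ATM + ' ' + CONVERT(varchar(23), @ReceivedOn, 121)
                 + ' 002307 ' + @ATM + ' : Card Reader/Writer - FAULTY';
    EXEC usp_pushRawDataAndProcess 1, @RawFeed;
    SET @i = @i + 1;
END

DROP TABLE #Feed;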

mySQL C API multiple statements

So I'm building a C program that connects to a MySQL database. Everything worked perfectly. Then, to save on the number of queries, I decided I would like to execute 10 statements at a time, so I set the CLIENT_MULTI_STATEMENTS flag on the connection and separated my statements with semicolons.
When I execute the first batch of 10 statements, it succeeds and mysql_real_query() returns a 0.
When I try the second batch though, it returns a "1" and doesn't work. Nowhere can I find what this "1" error code means, so I was hoping someone may have run into this problem before.
Please note that these are all UPDATE statements, and so I have no need for result sets, it's just several straight-up calls to mysql_real_query().
It's not clear from the documentation whether the errors this function can cause are returned or not, but it should be possible to obtain the actual error using mysql_error().
My guess is that you still have to loop through the result sets whether you're interested in them or not.
Are these prepared statements? If that's the case then you can't use CLIENT_MULTI_STATEMENTS.
Also, note (from http://dev.mysql.com/doc/refman/5.5/en/c-api-multiple-queries.html) that:
After handling the result from the first statement, it is necessary to check whether more results exist and process them in turn if so. To support multiple-result processing, the C API includes the mysql_more_results() and mysql_next_result() functions. These functions are used at the end of a loop that iterates as long as more results are available. Failure to process the result this way may result in a dropped connection to the server.
You have to walk over all the results, regardless of whether or not you care about the values.
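A minimal C sketch of that draining loop (assuming an already-connected MYSQL handle; error handling trimmed to the essentials):

/* Run a multi-statement string and drain every result, even though
   these are UPDATEs and no result set is wanted. */
#include <mysql.h>
#include <stdio.h>
#include <string.h>

int run_multi(MYSQL *conn, const char *sql)
{
    if (mysql_real_query(conn, sql, strlen(sql)) != 0) {
        fprintf(stderr, "query failed: %s\n", mysql_error(conn));
        return -1;
    }
    int status;
    do {
        MYSQL_RES *res = mysql_store_result(conn);
        if (res != NULL)                  /* UPDATEs produce no result set, */
            mysql_free_result(res);       /* but free one if it exists.     */
        status = mysql_next_result(conn); /* 0 = more, -1 = done, >0 = error */
        if (status > 0) {
            fprintf(stderr, "next result failed: %s\n", mysql_error(conn));
            return -1;
        }
    } while (status == 0);
    return 0;
}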

Is there any way to debug SQL Server 2008 query?

Is there any way to debug a SQL Server 2008 query?
Yes. You can use the T-SQL debugger:
http://technet.microsoft.com/en-us/library/cc646008.aspx
Here is how you step through a query: http://technet.microsoft.com/en-us/library/cc646018.aspx
What exactly do you mean by debug?
Are you seeing incorrect data?
Are you getting unexpected duplicate rows?
What I usually do is start with a known set of data, usually one or two rows if possible, and comment out all joins and where conditions.
Introduce each additional element in your query one at a time starting with joins.
At each step, you should know how many records you are expecting.
As soon as you introduce something like a join or where condition that does not match your prediction, you know you have found the problem statement.
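A hedged illustration of that process (table and column names here are hypothetical):

-- Step 1: baseline row count for the known data.
SELECT COUNT(*) FROM Orders;

-- Step 2: reintroduce one join and check the count again. If the count
-- jumps unexpectedly, the join you just added is the problem statement.
SELECT COUNT(*)
FROM Orders o
JOIN Customers c ON c.CustomerID = o.CustomerID;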
If it is a stored procedure with variables and such, you can always PRINT the values of your variables at different points.
If you want to only execute to a particular point in a stored procedure, then you may RETURN at any time and halt processing.
If you have temp tables that must get destroyed between executions while debugging your procedure, a common trick I use is to create a label like:
cleanup:
Then, at whatever point I want to bail, I can GOTO cleanup (I know GOTO is horrible, but it works great when debugging sprocs).
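A minimal sketch of that label trick (the flag and temp table are illustrative):

DECLARE @bail bit = 1;               -- flip to 1 to stop early while debugging
CREATE TABLE #work (Id int);
-- ... populate and inspect #work ...
IF @bail = 1 GOTO cleanup;
-- ... the rest of the procedure, skipped while debugging ...
cleanup:
IF OBJECT_ID('tempdb..#work') IS NOT NULL DROP TABLE #work;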
Yes, you need to set a breakpoint.
Frankly, I find the debugger to be virtually useless. I often don't want to see the variables, but rather the records I would be inserting into a table, updating, or deleting.
Here is what I do when I have a complex SP to debug.
First I create a test variable and set it to 1 when I want to test. This ensures that all actions in the transaction are rolled back at the end (you don't want to actually change the database until you are sure the proc is doing what you want) by making the commit statement require the test variable to be set to 0.
At the end of the proc I generally have an IF test = 1 BEGIN ... END block, and between the BEGIN and END I put SELECT statements for all the things I want to see the values of. This might include any table variables or temp tables, the records in a particular table after the insert, the records I deleted, or whatever else I feel I need to see. If I am testing multiple times, I might comment out references to tables that I know are right and concentrate only on the ones I've changed that time around.
Now I can see the effect of my proc, and the changes are rolled back in case they weren't right. To actually commit the changes (and not see the intermediate steps), I simply change the value of the test variable.
If I use dynamic SQL, I also have a debug variable that, instead of executing the dynamic SQL, simply prints it out to the screen. I find all this far more useful for debugging a complex script than breakpoints that show me the values of variables.
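A minimal sketch of that test-variable pattern (the table and values are illustrative):

DECLARE @test bit = 1;   -- 1 = debug run (roll back), 0 = real run (commit)
BEGIN TRANSACTION;

UPDATE SomeTable SET SomeColumn = 'new value' WHERE Id = 42;

IF @test = 1
BEGIN
    SELECT * FROM SomeTable WHERE Id = 42;  -- inspect what changed
    ROLLBACK TRANSACTION;                   -- leave the database untouched
END
ELSE
    COMMIT TRANSACTION;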