Does SQL Server validate a stored proc before running it? - sql-server-2008

I have checked many threads now and I can't seem to find an answer for this, and I need to be fairly certain / confident that I am correct in assuming this before replying to a client.
So, as the heading states: does SQL Server validate a stored procedure before running it?
I.e., even if I have an IF statement whose condition will never be met, will the code inside that IF block be checked and validated before running?
EDIT: Here is a quick example:
DECLARE @ParamSource VARCHAR(2) = 'V3'
IF @ParamSource = 'V1'
BEGIN
    --USE LINKED SERVER HERE, WHICH THROWS AN ERROR ABOUT CONNECTIONS
END
IF @ParamSource = 'V3'
BEGIN
    --DO MY ACTUAL CODE
END
I will never meet that first condition, but for some reason my stored proc tries to validate it at run time and keeps erroring.

When a stored procedure is created, it is compiled, which means that each object used in the stored procedure is validated, and for every object that exists you also need to have access to it. This creates an execution plan for the stored procedure, and as long as the procedure doesn't change, the execution plan should remain valid. If a table used in the stored procedure does not exist, its resolution is deferred: the execution plan is not created at that point, but the procedure is still created provided no other errors are found. This deferral applies to tables only, not to linked servers.
In your example, you need access to the linked server object to create your stored procedure. After creation, if you no longer have access to the linked server, your procedure will still run, but it will raise an error whenever it needs to access the linked server (the IF @ParamSource = 'V1' branch). However, if it doesn't hit the linked server (the IF @ParamSource = 'V3' branch), there will be no error.
Basically, it means that the user that creates the procedure needs to have access to the linked server.
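To see the deferred-resolution behaviour concretely, here is a minimal sketch (the procedure and table names are invented for illustration). Because table name resolution is deferred, the procedure creates successfully even though dbo.NoSuchTable does not exist, and only errors when the branch that references it actually runs:
CREATE PROCEDURE dbo.DemoDeferredResolution
    @ParamSource VARCHAR(2)
AS
BEGIN
    IF @ParamSource = 'V1'
    BEGIN
        -- dbo.NoSuchTable does not exist; this only fails if the branch runs
        SELECT * FROM dbo.NoSuchTable;
    END
    IF @ParamSource = 'V3'
    BEGIN
        SELECT 'actual work here' AS Result;
    END
END
GO
EXEC dbo.DemoDeferredResolution @ParamSource = 'V3';  -- runs without error
EXEC dbo.DemoDeferredResolution @ParamSource = 'V1';  -- fails: invalid object name
Linked-server references, as noted above, are not deferred this way.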

Related

PL/SQL not executing consistently from Access ODBC call

I have a Microsoft Access frontend database that interfaces with an Oracle backend. I have a variety of PL/SQL procedures that get triggered through processes in the application, and I have never had an issue like this. This particular procedure does not seem to execute some of its DML statements when it is run through the app (the Access database interface); HOWEVER, when I run the same procedure from my SQL Developer environment, it runs perfectly every time. More specifically, it does not seem to execute some DELETE statements in the procedure, despite the fact that I have a COMMIT after all of them.
Here are a few other details that may be making the difference:
The table that gets modified by the PL/SQL procedure initially gets its data from a SQL*Loader replace job that moves data from the client back to this staging table.
This staging table has an auto-increment primary key that is populated by a BEFORE INSERT trigger on the table. There does not seem to be any issue moving records back to this table with SQL*Loader, nor any kind of integrity constraint failure. This all happens in the application BEFORE the stored procedure is called.
This particular table is also linked through the ODBC connection in the Access database, as it is used by a bound form after the above procedure is run. I have tested whether the form is somehow just not reflecting the data in the backend table, but it reflects it correctly.
Again, if I run the process in the application I get the incorrect results. Immediately after I do this, I run the exact same procedure from SQL Developer and it corrects it every time.
So I believe I finally figured this out. The issue was a timing issue between SQL*Loader moving the data back to the Oracle staging table and the PL/SQL procedure getting triggered in the application. Since I have a BEFORE INSERT trigger on my staging table, I could not use direct-path load (direct=true) in the .bat file that kicks off my SQL*Loader job. As a result, the records take longer to move to the backend, and in this case my PL/SQL procedure was getting triggered before all of the records had been moved to the staging table. This explains the intermittent nature of the problem that was driving me nuts. I solved it by making sure the record count in the delimited file that SQL*Loader was moving back matched the record count in my staging table before I triggered the procedure to run, as sketched below. Lesson learned.
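As a sketch of that record-count safeguard (the table and procedure names are hypothetical, and in practice the expected count would be taken from the delimited file), the check before triggering the procedure could look like this in PL/SQL:
DECLARE
    v_expected NUMBER := 1000;  -- record count from the delimited file (hypothetical value)
    v_actual   NUMBER;
BEGIN
    SELECT COUNT(*) INTO v_actual FROM stage_table;  -- hypothetical staging table
    IF v_actual = v_expected THEN
        my_stage_procedure;  -- hypothetical procedure; safe to run once the load is complete
    END IF;
END;
/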

Call a Stored Procedure in SSIS Data Source

I am trying to call a stored procedure in an SSIS OLE DB Data Source (my data source is SQL Server 2012).
I entered a procedure-call SQL statement under the SQL Command option, but when I click the Preview button I get an error.
Please guide me in resolving this error. I googled but nothing works for me.
I think the issue you are having is that SSIS often takes the first SELECT statement it finds and tries to validate it for column names; this happens especially with very big procedures. The trick I have found to get this to work is, right off the bat, to throw in something like:
IF 1 = 0
BEGIN
    -- Placeholder column list: match the names and types of the
    -- procedure's real final SELECT (these column names are examples)
    SELECT CAST(NULL AS INT)         AS OrderID,
           CAST(NULL AS VARCHAR(50)) AS CustomerName
END
This code will never be executed, but it hints SSIS to make those columns the ones in the data flow. Just be sure to update this list whenever you change your final SELECT.
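Put together, a minimal sketch of a procedure using this trick might look like the following (the procedure, table, and column names are invented for illustration):
CREATE PROCEDURE dbo.BigReportProc
AS
BEGIN
    SET NOCOUNT ON;

    -- Metadata hint for SSIS: never executes, but it is the first
    -- SELECT the designer finds, so it defines the data flow columns
    IF 1 = 0
    BEGIN
        SELECT CAST(NULL AS INT)         AS OrderID,
               CAST(NULL AS VARCHAR(50)) AS CustomerName
    END

    -- ... the procedure's real (possibly dynamic) logic goes here ...

    -- The actual result set, matching the hint above
    SELECT o.OrderID, c.CustomerName
    FROM dbo.Orders AS o
    JOIN dbo.Customers AS c ON c.CustomerID = o.CustomerID;
END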

Getting message Review the SQL script to be applied on the database

I am getting the following message while creating a stored procedure in MySQL Workbench:
"Review the SQL script to be applied on the database"
I have several tables inside the database, but the stored procedure I am writing will be
used for only one table. Since the SQL script of the stored procedure is going to be applied to the whole database, I am wondering whether it's going to affect other tables as well? I don't want other tables to get disturbed because of this script.
Please provide your inputs as I am doing this for the first time.
Question #2:
Why do I see "DELIMITER $$" as the first statement while creating a routine before the following statement?
CREATE PROCEDURE `mydatabase`.`myfirstroutine` ()
BEGIN
Thanks
1) MySQL Workbench offers the option to review the generated SQL script before it is sent to the server. This way you can check it for possible problems. The script only creates your stored procedure; it does not modify the other tables in the database.
2) The DELIMITER command is usually necessary to switch the current delimiter that ends a single statement (by default a semicolon) to something else, because the stored procedure code itself needs semicolons to separate its individual commands, yet the procedure body must be sent to the server as a whole.
A few more details: the DELIMITER keyword is a client-side keyword only; the server doesn't know it and doesn't need it. It's an invention that lets clients properly separate SQL commands before sending them to the server (you cannot send a list of commands to a server, only individual statements).
In MySQL Workbench, however, especially in the object editors where you edit e.g. the stored procedure text, adding the DELIMITER command is essentially unnecessary, because there's only that one piece of procedure code, hence nothing to separate. This might disappear in a future version, but for now just ignore it.
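For illustration, here is the usual pattern when sending procedure code through a client such as the mysql command-line tool (using the routine name from the question; the body is a placeholder):
DELIMITER $$
CREATE PROCEDURE `mydatabase`.`myfirstroutine` ()
BEGIN
    -- semicolons inside the body no longer terminate the whole statement
    SELECT 'hello';
END$$
DELIMITER ;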

Automating tasks on more than one SQL Server 2008 database

We host multiple SQL Server 2008 databases provided by another group. Every so often, they provide a backup of a new version of one of the databases, and we run through a routine of deleting the old one, restoring the new one, and then going into the newly restored database and adding an existing SQL login as a user in that database and assigning it a standard role that exists in all of these databases.
The routine is the same, except that each database has a different name and different logical and OS names for its data and log files. My inclination was to set up an auxiliary database with a table defining the set of names associated with each database, and then create a stored procedure accepting the name of the database to be replaced and the name of the backup file as parameters. The SP would look up the associated logical and OS file names and then do the work.
This would require building the commands as strings and then exec'ing them, which is fine. However, the stored procedure, after restoring a database, would then have to USE it before it would be able to add the SQL login to the database as a user and assign it to the database role. A stored procedure can't do this.
What alternative is there for creating an automated procedure with the pieces filled in dynamically and that can operate cross-database like this?
I came up with my own solution.
Create a job to do the work, specifying that the job should be run out of the master database, and defining one Transact-SQL step for it that contains the code to be executed.
In a utility database created just for the purpose of hosting objects to be used by the job, create a table meant to contain at most one row, whose data will be the parameters for the job.
In that database, create a stored procedure that can be called with the parameters that should be stored for use by the job (including the name of the database to be replaced). The SP should validate the parameters, report any errors, and, if successful, write them to the parameter table and start the job using msdb..sp_start_job.
In the job, for any statement where the job needs to reference the database to be replaced, build the statement as a string and EXECUTE it.
For any statement that needs to run in the database that's been re-created, double up the quotes in the statement so it can be passed as an argument to the instance of sp_executesql IN THAT DATABASE, and use EXECUTE to run the whole thing:
SET @statement = @dbName + '..sp_executesql ''[statement to execute in database @dbName]''';
EXEC (@statement);
Configure the job to write output to a log file.
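A minimal sketch of the parameter table and the launcher procedure described above (all object names are invented, and real validation would be more thorough):
-- One-row parameter table in the utility database
CREATE TABLE dbo.RestoreJobParams
(
    DbName     SYSNAME       NOT NULL,
    BackupFile NVARCHAR(260) NOT NULL
);
GO
CREATE PROCEDURE dbo.StartRestoreJob
    @dbName     SYSNAME,
    @backupFile NVARCHAR(260)
AS
BEGIN
    -- Validate against the table of known databases (hypothetical name)
    IF NOT EXISTS (SELECT 1 FROM dbo.ManagedDatabases WHERE DbName = @dbName)
    BEGIN
        RAISERROR('Unknown database name: %s', 16, 1, @dbName);
        RETURN;
    END;

    -- Stash the parameters where the job's T-SQL step can read them, then start the job
    TRUNCATE TABLE dbo.RestoreJobParams;
    INSERT INTO dbo.RestoreJobParams (DbName, BackupFile)
    VALUES (@dbName, @backupFile);

    EXEC msdb..sp_start_job @job_name = N'ReplaceDatabase';  -- hypothetical job name
END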

How to return SQL statement that caused an exception from a stored procedure to SSIS

We have an SSIS package that calls a stored procedure through an Execute SQL Task component. The stored procedure contains a LOT of different pieces of SQL code that get built dynamically and then executed via EXEC (@strSQL) within the stored procedure. The whole system is built that way and we cannot redesign it at this point. The problem is that when something fails within the stored procedure, it is hard to figure out from SSIS which SQL statement caused the exception/failure. What we have right now, and it is working, is the package OnError event with code to read System::ErrorDescription, which is helpful to display the error in SSIS and then send an email with it. What I'm looking to add is a system variable, or some other way, to get the actual SQL (the statement that caused the exception within the stored procedure) into SSIS so I can include it in the email. Any ideas? Thanks.
I have a solution. Table variables are not rolled back by a ROLLBACK statement in a CATCH block.
So, put the SQL statements into a table variable with an NVARCHAR(MAX) column before they run. Make sure your proc uses TRY...CATCH blocks and transactions. In the CATCH block, perform a rollback if need be, and then insert the contents of the table variable, along with a datetime, into a logging table. Now you have a record of exactly what queries were run. You can also create a separate table variable to store the data you are attempting to insert or update, if that is also an issue.
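A minimal sketch of that pattern (the dynamic statement and log table are made up; the key point is that the table variable keeps its rows across the ROLLBACK):
DECLARE @ExecutedSql TABLE (Stmt NVARCHAR(MAX));
DECLARE @strSQL NVARCHAR(MAX);

BEGIN TRY
    BEGIN TRANSACTION;

    SET @strSQL = N'UPDATE dbo.SomeTable SET SomeCol = 1;';  -- hypothetical dynamic SQL

    -- Record the statement BEFORE running it
    INSERT INTO @ExecutedSql (Stmt) VALUES (@strSQL);
    EXEC (@strSQL);

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;  -- @ExecutedSql survives the rollback

    -- Persist what was attempted so SSIS can pick it up for the email
    INSERT INTO dbo.SqlErrorLog (LoggedAt, Stmt, ErrorMsg)  -- hypothetical log table
    SELECT SYSDATETIME(), Stmt, ERROR_MESSAGE()
    FROM @ExecutedSql;
END CATCH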
When you run a package using F5 and a SQL statement fails, you can check the Execution Results tab, but unfortunately this only shows the first line or two of your SQL statement.
Instead of running the package using F5, run it using Ctrl+F5. This opens a terminal window and runs the package as though it were called from the command line. As each task runs it outputs log information; if a task uses a SQL statement and it fails, the full SQL statement is output.
Ctrl+F5 is called 'Start Without Debugging', yet I always think it is a better way to debug a package.
