PL/SQL not executing consistently from Access ODBC call - ms-access

I have a Microsoft Access frontend database that interfaces with an Oracle backend. I have a variety of PL/SQL procedures that get triggered through processes in the application, and I have never had an issue like this. This particular procedure does not seem to execute some of its DML statements when it is run through the app (the Access database interface); however, when I run the same procedure from my SQL Developer environment it runs perfectly every time. More specifically, it does not seem to execute some DELETE statements that I have in the procedure, despite the fact that I COMMIT after all of them.
Here are a few other details that may be making the difference:
The table that gets modified by the PL/SQL procedure initially gets data from a SQL*Loader replace job that moves data from the client back to this staging table.
The staging table has an auto-increment primary key that is populated by a before-insert trigger on the table. There does not seem to be any issue moving records back to this table with SQL*Loader, nor any kind of integrity constraint failure. This all happens in the application BEFORE the stored procedure is called.
This particular table is also linked through the ODBC connection in the Access database, as it is used by a bound form after the above procedure is run. I have tested to see whether the form is just somehow not reflecting the data in the backend table, but it is reflecting the data correctly.
Again, if I run the process in the application I get the incorrect results. Immediately afterwards I run the exact same procedure from SQL Developer, and it corrects the data every time.

So I believe I finally figured this out. The issue was a timing issue between SQL*Loader moving the data back to the Oracle staging table and the PL/SQL procedure getting triggered in the application. Since I have a before-insert trigger on my staging table, I could not use direct path load (DIRECT=TRUE) in the .bat file that kicks off my SQL*Loader job. As a result, the records take longer to move to the backend, and in this case my PL/SQL procedure was getting triggered before all of the records had been moved to the staging table. This explains the intermittent nature of the problem that was driving me nuts. I solved it by making sure the record count in the delimited file that SQL*Loader was loading matched the record count in my staging table before I triggered the procedure to run. Lesson learned.
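One way to enforce that check on the database side is a guard at the top of the procedure. This is only a sketch; `run_after_load`, `stage_table`, and the error number are placeholder names, not from the original post. The caller passes the record count it took from the delimited file, and the procedure refuses to run until the staging table has caught up:

```sql
-- Hypothetical guard procedure: p_expected_rows comes from counting the
-- records in the delimited file before SQL*Loader runs.
CREATE OR REPLACE PROCEDURE run_after_load (p_expected_rows IN NUMBER) AS
  v_actual NUMBER;
BEGIN
  SELECT COUNT(*) INTO v_actual FROM stage_table;
  IF v_actual <> p_expected_rows THEN
    RAISE_APPLICATION_ERROR(-20001,
      'Staging load incomplete: expected ' || p_expected_rows ||
      ' rows, found ' || v_actual);
  END IF;
  -- ... the original DELETE/INSERT logic would go here ...
  COMMIT;
END;
/
```

The Access frontend can then catch the raised error and retry, instead of silently operating on a half-loaded table.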

Related

mail notifications for failed scheduled procedures - MySQL

I have a couple of stored procedures that are scheduled to run at night because their execution times are too long to run them intraday. Usually that works fine.
However, I (and sometimes others) regularly need to adjust lines. As the whole procedure has well over 1000 lines, it has happened that people unintentionally made small syntax errors (forgetting an alias, for example).
Is there a way to trigger some kind of error notification (preferably by mail) in case the procedure does not execute completely? I've done some research but could not find anything - so I guess it's not possible. Or is it?
Use a
DECLARE ... HANDLER
statement (https://dev.mysql.com/doc/refman/8.0/en/declare-handler.html) to detect SQL errors, and then insert into a table or update an entry.
Sending email could eventually be done with user-defined functions (http://www.mysqludf.org/about.html).
But it is not recommended to add functionality like email to your database.
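A minimal sketch of the handler approach, assuming a logging table you would create yourself; `proc_errors` and `nightly_job` are placeholder names:

```sql
CREATE TABLE proc_errors (
  occurred_at DATETIME NOT NULL,
  message     TEXT
);

DELIMITER $$
CREATE PROCEDURE nightly_job()
BEGIN
  -- If any statement below raises an SQL error, log it and stop.
  DECLARE EXIT HANDLER FOR SQLEXCEPTION
  BEGIN
    GET DIAGNOSTICS CONDITION 1 @msg = MESSAGE_TEXT;  -- MySQL 5.6+
    INSERT INTO proc_errors (occurred_at, message) VALUES (NOW(), @msg);
  END;

  -- ... the long-running statements go here ...
END$$
DELIMITER ;
```

A separate job (or a person) can then check `proc_errors` in the morning; mailing its contents is better done by an external script than from inside MySQL.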

Does SQL Server Validate a stored proc before running it?

I have checked many threads now and I can't seem to find an answer for this, and I need to be fairly certain that I am correct in assuming this before replying to a client.
So, as the heading states: does SQL Server validate a stored procedure before running it?
I.e., even if I have an IF statement whose condition will never be met, will the code inside that IF block be checked and validated before running?
EDIT: Here is a quick example:
DECLARE @ParamSource VARCHAR(2) = 'V3'
IF @ParamSource = 'V1'
BEGIN
    --USE LINKED SERVER HERE, WHICH THROWS AN ERROR ABOUT CONNECTIONS
END
IF @ParamSource = 'V3'
BEGIN
    --DO MY ACTUAL CODE
END
I will never meet that first condition, but for some reason my stored proc is trying to validate it at run time and keeps erroring.
When a stored procedure is created, it is compiled, which means that each object used in the stored procedure is validated, and you need access to all of the existing objects it references. This creates an execution plan for the stored procedure, and as long as the procedure doesn't change, the execution plan remains valid. If a table used in the stored procedure does not exist (tables only, not linked servers), the execution plan is not created at that point, but the procedure is still created as long as no other errors are found.
In your example, you need access to the linked server object to create your stored procedure. After creation, if you no longer have access to the linked server, your procedure will still run, but it will generate an error if it needs to access the linked server (the IF @ParamSource = 'V1' branch). However, if it doesn't hit the linked server (the IF @ParamSource = 'V3' branch), there will be no error.
Basically, this means that the user who creates the procedure needs to have access to the linked server.
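If the creating user cannot be given access to the linked server, one common workaround is to move the linked-server reference into dynamic SQL, so the four-part name is only resolved when that branch actually executes. A sketch; `LinkedSrv`, `RemoteDb`, and the table names are placeholders:

```sql
IF @ParamSource = 'V1'
BEGIN
    -- The linked-server name lives inside a string, so it is not
    -- validated when the procedure is created, only when this runs.
    EXEC sp_executesql
        N'INSERT INTO dbo.LocalCopy
          SELECT * FROM LinkedSrv.RemoteDb.dbo.RemoteTable';
END
```

The trade-off is that syntax errors inside the string are also deferred to run time, so keep the dynamic portion as small as possible.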

Getting message Review the SQL script to be applied on the database

I am getting the following message while creating a stored procedure in MySQL Workbench:
"Review the SQL script to be applied on the database"
I have several tables inside the database, but the stored procedure I am writing will be used only for one table. Since the stored procedure's SQL script is going to be applied to the whole database, I am wondering whether it will affect other tables as well. I don't want other tables to be disturbed because of this script.
Please provide your inputs as I am doing this for the first time.
Question #2:
Why do I see "DELIMITER $$" as the first statement when creating a routine, before the following statement?
CREATE PROCEDURE `mydatabase`.`myfirstroutine` ()
BEGIN
Thanks
1) MySQL Workbench offers the option to review the generated SQL script before it is sent to the server. This way you can check it for possible problems. The script only creates (or recreates) the routine itself; it will not touch your other tables unless the procedure's own body modifies them when it is called.
2) The DELIMITER command is usually necessary to switch the current delimiter that ends a single statement (a semicolon by default) to something else, because the stored procedure code itself needs semicolons to separate its individual commands, yet the procedure body must be sent to the server as a whole.
A few more details: DELIMITER is a client-only keyword; the server doesn't know it and doesn't need it. It's an invention that lets clients properly separate SQL commands before sending them to the server (you cannot send a list of commands to the server, only individual statements).
In MySQL Workbench, however, especially in the object editors where you edit e.g. the stored procedure text, adding the DELIMITER command is essentially unnecessary, because there is only the one routine's code, hence nothing to separate. This might disappear in a future version, but for now just ignore it.
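To make the mechanics concrete, here is how a typical client-side script uses DELIMITER around the routine from the question (the SELECT statements in the body are just an illustration):

```sql
-- Switch the statement terminator so the semicolons inside the body
-- don't end the CREATE PROCEDURE statement prematurely.
DELIMITER $$

CREATE PROCEDURE `mydatabase`.`myfirstroutine` ()
BEGIN
  SELECT 'first';   -- these semicolons now stay inside the routine body
  SELECT 'second';
END$$

-- Restore the default terminator for the rest of the script.
DELIMITER ;
```

The client sends everything between `DELIMITER $$` and `END$$` to the server as one statement, which is exactly what point 2 above describes.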

How to return SQL statement that caused an exception from a stored procedure to SSIS

We have an SSIS package that calls a stored procedure through an Execute SQL Task component. The stored procedure contains a LOT of different pieces of SQL code that get built dynamically and then executed via exec strSQL within the stored procedure. The whole system is built that way and we cannot redesign it at this point. The problem is that when something fails within the stored procedure, it is hard to figure out from SSIS which SQL statement caused the exception/failure. What we have right now, and it is working, is the package OnError event with code that reads System::ErrorDescription, which is helpful for displaying the error in SSIS and then sending an email with it. What I'm looking to add is a system variable or some other way to surface the actual SQL (the statement that caused the exception within the stored procedure) in SSIS, so I can include it in the email. Any ideas? Thanks.
I have a solution. Table variables are not rolled back by a CATCH block or a ROLLBACK statement.
So put each SQL statement into a table variable with an NVARCHAR(MAX) column before you run it. Make sure your proc uses TRY/CATCH blocks and transactions. In the CATCH block, perform a rollback if need be and then insert the contents of the table variable, along with a datetime, into a logging table. Now you have a record of exactly which queries were run. You can also create a separate table variable to store the data you are attempting to insert or update, if that is also an issue.
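A sketch of that pattern; `@sql_log`, `dbo.ErrorLog`, and the sample DELETE are illustrations, not the original code:

```sql
DECLARE @sql_log TABLE (stmt NVARCHAR(MAX));
DECLARE @sql NVARCHAR(MAX);

BEGIN TRY
    BEGIN TRANSACTION;

    SET @sql = N'DELETE FROM dbo.SomeTable WHERE id < 0';
    INSERT INTO @sql_log (stmt) VALUES (@sql);   -- record BEFORE executing
    EXEC sp_executesql @sql;

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;     -- @sql_log survives the rollback

    INSERT INTO dbo.ErrorLog (logged_at, stmt, error_msg)
    SELECT SYSDATETIME(), stmt, ERROR_MESSAGE()
    FROM @sql_log;                               -- last row is the statement that failed
END CATCH;
```

The SSIS OnError handler can then query `dbo.ErrorLog` for the most recent row and include its `stmt` column in the email.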
When you run a package by using F5 and a SQL statement fails, you can check the execution results tab, but unfortunately this only shows the first line or two of your SQL statement.
Instead of running the package with F5, run it with Ctrl+F5. This opens a terminal window and runs the package as though it were called from the command line. As each task runs it outputs log information; if a task's SQL statement fails, the full SQL statement is output.
Ctrl+F5 is called 'Start Without Debugging', yet I always think it is a better way to debug a package.

When a new row in database is added, an external command line program must be invoked

Is it possible for MySQL database to invoke an external exe file when a new row is added to one of the tables in the database?
I need to monitor the changes in the database, so when a relevant change is made, I need to do some batch jobs outside the database.
Chad Birch has a good idea with using MySQL triggers and a user-defined function. You can find out more in the MySQL CREATE TRIGGER Syntax reference.
But are you sure that you need to call an executable right away when the row is inserted? It seems like that method will be prone to failure, because MySQL might spawn multiple instances of the executable at the same time. If your executable fails, there will be no record of which rows have been processed and which have not. If MySQL waits for your executable to finish, inserting rows might be very slow. Also, if Chad Birch is right, then you will have to recompile MySQL, so it sounds difficult.
Instead of calling the executable directly from MySQL, I would use triggers to simply record the fact that a row got INSERTED or UPDATED: record that information in the database, either with new columns in your existing tables or with a brand new table called say database_changes. Then make an external program that regularly reads the information from the database, processes it, and marks it as done.
Your specific solution will depend on what parameters the external program actually needs.
If your external program needs to know which row was inserted, then your solution could be like this: Make a new table called database_changes with fields date, table_name, and row_id, and for all the other tables, make a trigger like this:
DELIMITER $$
CREATE TRIGGER `my_trigger`
AFTER INSERT ON `table_name`
FOR EACH ROW BEGIN
  INSERT INTO `database_changes` (`date`, `table_name`, `row_id`)
  VALUES (NOW(), 'table_name', NEW.id);
END$$
DELIMITER ;
Then your batch script can do something like this:
1. Select the first row in the database_changes table.
2. Process it.
3. Remove it.
4. Repeat 1-3 until database_changes is empty.
With this approach, you can have more control over when and how the data gets processed, and you can easily check to see whether the data actually got processed (just check to see if the database_changes table is empty).
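The polling loop above can be sketched in Python with the database access abstracted into callables; `fetch_one`, `process`, and `remove` stand in for real queries against `database_changes` (e.g. via a MySQL driver), so the control flow is what matters here:

```python
def drain_changes(fetch_one, process, remove):
    """Repeat: take the first pending change, process it, delete it.

    Returns the number of rows handled. Stops when fetch_one reports
    the database_changes table is empty (returns None).
    """
    handled = 0
    while True:
        row = fetch_one()      # e.g. SELECT ... FROM database_changes LIMIT 1
        if row is None:        # table is empty: everything processed
            return handled
        process(row)           # run the external batch job for this row
        remove(row)            # DELETE the row only after success
        handled += 1


# In-memory stand-in for the database_changes table:
pending = [("orders", 1), ("orders", 2)]
done = []
n = drain_changes(
    fetch_one=lambda: pending[0] if pending else None,
    process=done.append,
    remove=lambda row: pending.remove(row),
)
# n is 2 and pending is now empty
```

Because a row is removed only after it is processed, a crash mid-run leaves the unprocessed rows in place for the next poll.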
You could do what replication does: hook into the binary log. Set up your server as a master server, and instead of adding a slave server, run mysqlbinlog. You'll get a stream of every command that modifies your database.
Or step in between the client and server: check MySQLProxy. You point it at your server and point your client(s) at the proxy; it lets you interpose Lua scripts to monitor, analyze, or transform any SQL command.
I think it's going to require adding a User-Defined Function, which I believe requires recompilation:
MySQL FAQ - Triggers: Can triggers call an external application through a UDF?
I think it's really a MUCH better idea to have some external process poll the table for changes and execute the external program - you could also have a column which contains the status of the external program's run (e.g. "pending", "failed", "success") and just select rows where that column is "pending".
It depends how soon the batch job needs to be run. If it's something which needs to be run "sooner or later" and can fail and need to be retried, definitely have an app polling the table and running them as necessary.
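A sketch of the status-column variant described above; the table and column names (`jobs`, `run_status`) are placeholders:

```sql
-- Track each row's processing state alongside the data itself.
ALTER TABLE jobs
  ADD COLUMN run_status ENUM('pending','failed','success')
  NOT NULL DEFAULT 'pending';

-- The polling process picks up unprocessed rows...
SELECT id FROM jobs WHERE run_status = 'pending';

-- ...runs the external program for each, then records the outcome.
UPDATE jobs SET run_status = 'success' WHERE id = 42;
```

Marking rows 'failed' instead of deleting them gives you a retry queue for free: the poller can re-select failed rows after a delay.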