Mail notifications for failed scheduled procedures - MySQL

I have a couple of stored procedures that are scheduled to run at night because their execution times are too long to run them intraday. Usually that works fine.
However, I (and sometimes others) regularly need to adjust lines. As the whole procedure has well over 1000 lines, it has happened that people unintentionally made small syntax errors (forgetting an alias, for example).
Is there a way to trigger some kind of error notification (preferably by mail) in case the procedure is not executed completely? I've done some research but could not find anything, so I guess it's not possible. Or is it?

Use a DECLARE ... HANDLER statement (https://dev.mysql.com/doc/refman/8.0/en/declare-handler.html) to detect SQL errors and then insert into a table or update an entry.
Sending email could eventually be done with user-defined functions: http://www.mysqludf.org/about.html
But adding functionality like sending email to your database is not recommended.
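As a minimal sketch of that approach (the procedure name nightly_job and the log table proc_errors are made up here, and GET DIAGNOSTICS needs MySQL 5.6 or later):
CREATE TABLE IF NOT EXISTS proc_errors (
  proc_name  VARCHAR(64),
  error_time DATETIME,
  message    TEXT
);
DELIMITER $$
CREATE PROCEDURE nightly_job()
BEGIN
  -- runs when any SQL error occurs inside the procedure, logs it, then exits
  DECLARE EXIT HANDLER FOR SQLEXCEPTION
  BEGIN
    GET DIAGNOSTICS CONDITION 1 @err_msg = MESSAGE_TEXT;
    INSERT INTO proc_errors (proc_name, error_time, message)
    VALUES ('nightly_job', NOW(), @err_msg);
  END;
  -- ... the long-running statements go here ...
END $$
DELIMITER ;
A second scheduled job (or whatever sends your mail) can then watch proc_errors for new rows.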

Related

PLSQL not executing consistently from Access ODBC call

I have a Microsoft Access frontend database that interfaces with an Oracle backend. I have a variety of PL/SQL procedures that get triggered through processes in the application and I have never had an issue like this. This particular procedure does not seem to execute some of the DML statements in the procedure when the procedure is run through the app (the Access database interface); HOWEVER, when I run the same procedure from my SQL Developer environment it runs perfectly every time. More specifically, it does not seem to execute some delete statements that I have in the procedure despite the fact that I have COMMIT after all of them.
Here are a few other details that may be making the difference:
The table that gets modified by the PL/SQL procedure initially gets data from a SQL Loader replace job that moves data from the client back to this staging table
This stage table has an auto increment primary key that is created from a before insert trigger on the table. There does not seem to be any issue moving records back to this table with SQL Loader or any kind of integrity constraint failure. This all happens in the application BEFORE the stored procedure is called.
This particular table is also linked through the ODBC connection in the Access database, as it is used by a bound form after the above procedure is run. I have tested to see whether the form is just somehow not reflecting the data in the backend table, but it is correctly reflecting.
Again, if I run the process in the application I get the incorrect results. Immediately after I do this I run the same exact procedure from my SQL Developer and it corrects it every time.
So I believe I finally figured this out. The issue was a timing issue between SQL Loader moving the data back to the Oracle staging table and the PL/SQL procedure getting triggered in the application. Since I have a trigger before insert on my stage table in this case, I could not use direct load (direct = true) in the bat file that kicks off my SQL Loader job. As a result, the records take longer to move to the backend and in this case my PL/SQL procedure was getting triggered prior to all of the records getting moved to the staging table. This explains the intermittent nature of the problem that was driving me nuts. I solved it by making sure the record counts in my delimited file that SQL Loader was moving back matched the record count on my stage table before I triggered the procedure to run. Lesson learned.
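For illustration only (the staging table name is invented), the check amounts to comparing a simple row count against the record count taken from the delimited file before the procedure is triggered:
-- only CALL the procedure once this matches the file's record count
SELECT COUNT(*) AS loaded_rows FROM stage_table;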

Getting message Review the SQL script to be applied on the database

I am getting the following message while creating a stored procedure in MySQL Workbench:
"Review the SQL script to be applied on the database"
I have several tables inside the database but the stored procedure I am writing will be used only for one table. Since the SQL script of the stored procedure is going to be applied to the whole database, I am wondering if it's going to affect other tables as well. I don't want other tables to get disturbed because of this script.
Please provide your inputs as I am doing this for the first time.
Question #2:
Why do I see "DELIMITER $$" as the first statement while creating a routine before the following statement?
CREATE PROCEDURE `mydatabase`.`myfirstroutine` ()
BEGIN
Thanks
1) MySQL Workbench offers the option to review the generated SQL script before it is sent to the server. This way you can check it for possible problems.
2) The DELIMITER command is usually necessary to switch the current delimiter that ends a single statement (a semicolon by default) to something else, because the stored procedure code itself needs the semicolon to separate its individual commands, yet the procedure code must be sent as a whole to the server.
A few more details: the DELIMITER keyword is a client keyword only, that means the server doesn't know it and doesn't need it. It's an invention for clients to properly separate SQL commands before sending them to the server (you cannot send a list of commands to a server, only individual statements).
In MySQL Workbench however, especially in the object editors where you edit e.g. the sp text, adding the DELIMITER command is essentially nonsense, because there's only this sp code, hence nothing to separate. This might disappear in a future version, but for now just ignore it.
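For illustration, here is the routine from the question written out in full (the SELECT body is just a placeholder); the semicolon inside the body no longer ends the whole statement because the client now looks for $$:
DELIMITER $$
CREATE PROCEDURE `mydatabase`.`myfirstroutine` ()
BEGIN
  -- placeholder body; real statements go here
  SELECT 'hello';
END $$
DELIMITER ;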

Deadlock found when trying to get lock; try restarting transaction

I am sending bulk invitations to my users from my website. For that I am passing a comma-separated string containing the emails to my stored procedure in MySQL. In this stored procedure a while loop parses each email (using substring() separated by commas), checks the existing database, and then inserts into the table if the email is absent or generates an email link with a GUID if it already exists. The process works fine for small batches (e.g. below 200-250 emails), but if the batch is larger (250+ emails), the whole process gets stuck and subsequent requests get deadlock errors (the original error is: "Deadlock found when trying to get lock; try restarting transaction"). So I have planned to do the while loop in my JavaScript or C# code instead and send each email to the stored procedure one at a time.
In that scenario the number of MySQL connections would increase and a max-connections error might occur.
So I want to ask: what is the best method to do this kind of job with MySQL?
I think giving emails to the procedure one at a time is a correct solution, yet you don't need to make a new connection or even a new request for each item. Most languages support prepared statement execution (and here's the answer on how to use them in C#).
The deadlocks in turn can be caused by your own code, but without a snippet of it it's hard to tell. Maybe your procedure isn't re-entrant, or the data can be accessed from some other location.
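The linked answer shows prepared statements in C#; expressed at the SQL level the same idea looks roughly like this (process_invitation is a hypothetical single-email procedure): prepare once, then execute once per email over the same connection.
PREPARE send_invite FROM 'CALL process_invitation(?)';
SET @email = 'first@example.com';
EXECUTE send_invite USING @email;
SET @email = 'second@example.com';
EXECUTE send_invite USING @email;
DEALLOCATE PREPARE send_invite;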

Raise an event from MySQL and handle it from VB.NET (or something similar)?

I'm working with MySQL 5.1.39 and Visual Studio 2008 and connecting both with MySQL Connector Net 6.1.2.
What I'd like to do is once a MySqlConnection object is created, be able to handle the "event raised" when a field in a specific row in a given table is updated.
I mean, when that value in that table has been manually changed or modified from any other application, I'd like to receive a signal in my open VB.NET application. Until now, I do it from the open VB.NET application by checking that table every X seconds, but I wonder if it could be done in a better way.
Many thanks for your attention and time.
Ideally, there is the SIGNAL construct, which you can use to field SQL logic errors, but that is not available until MySQL 5.5. It would be best to upgrade to 5.5, if at all possible.
EDIT: There isn't really a good solution for this before 5.5. The TRIGGER works for getting the updates, but not for sending them outside of the database. Be careful, though, as this doesn't work if you're updating through FOREIGN KEY actions such as ON UPDATE CASCADE or ON DELETE CASCADE, as TRIGGERs are not fired for those cascaded actions. So watch out for that.
DELIMITER $$
CREATE TRIGGER my_trigger_name AFTER UPDATE ON my_table_name
FOR EACH ROW BEGIN
CALL my_on_update_procedure(NEW.entry_name, NEW.whatever_else);
END $$
DELIMITER ;
What my_on_update_procedure does is up to you. Your solution is probably the best bet for 5.1.39 (I would not recommend locking due to scalability issues), but 5.5 would give you the SIGNAL construct, which is exactly what you want (so upgrade!).
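For reference, this is roughly what SIGNAL looks like once you are on 5.5 (the trigger, column check and message below are invented for illustration):
DELIMITER $$
CREATE TRIGGER check_entry BEFORE UPDATE ON my_table_name
FOR EACH ROW BEGIN
  IF NEW.entry_name IS NULL THEN
    SIGNAL SQLSTATE '45000' SET MESSAGE_TEXT = 'entry_name must not be NULL';
  END IF;
END $$
DELIMITER ;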
I never worked with that but I think "TRIGGER" could be what you're looking for.
http://dev.mysql.com/doc/refman/5.1/en/create-trigger.html
My first thought was to use a database trigger to trigger some sort of notification: message through email, MOM or anything else. Googling didn't turn much up though. I found one approach based on notification through locks: linky. Could be a sane approach...
Oh, and in that blog post they also talk about MySQL UDFs, which let you execute arbitrary code when triggers fire. Apparently there are libs for various languages. There is also a duplicate question here on Stack Overflow. Cheers

When a new row in database is added, an external command line program must be invoked

Is it possible for MySQL database to invoke an external exe file when a new row is added to one of the tables in the database?
I need to monitor the changes in the database, so when a relevant change is made, I need to do some batch jobs outside the database.
Chad Birch has a good idea with using MySQL triggers and a user-defined function. You can find out more in the MySQL CREATE TRIGGER Syntax reference.
But are you sure that you need to call an executable right away when the row is inserted? It seems like that method will be prone to failure, because MySQL might spawn multiple instances of the executable at the same time. If your executable fails, then there will be no record of which rows have been processed yet and which have not. If MySQL is waiting for your executable to finish, then inserting rows might be very slow. Also, if Chad Birch is right, then you will have to recompile MySQL, so it sounds difficult.
Instead of calling the executable directly from MySQL, I would use triggers to simply record the fact that a row got INSERTED or UPDATED: record that information in the database, either with new columns in your existing tables or with a brand new table called say database_changes. Then make an external program that regularly reads the information from the database, processes it, and marks it as done.
Your specific solution will depend on what parameters the external program actually needs.
If your external program needs to know which row was inserted, then your solution could be like this: Make a new table called database_changes with fields date, table_name, and row_id, and for all the other tables, make a trigger like this:
DELIMITER $$
CREATE TRIGGER `my_trigger`
AFTER INSERT ON `table_name`
FOR EACH ROW BEGIN
INSERT INTO `database_changes` (`date`, `table_name`, `row_id`)
VALUES (NOW(), 'table_name', NEW.id);
END $$
DELIMITER ;
Then your batch script can do something like this:
1. Select the first row in the database_changes table.
2. Process it.
3. Remove it.
Repeat 1-3 until database_changes is empty.
With this approach, you can have more control over when and how the data gets processed, and you can easily check to see whether the data actually got processed (just check to see if the database_changes table is empty).
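A minimal SQL sketch of steps 1 and 3, assuming the database_changes table described above (the WHERE values in the DELETE are placeholders):
-- step 1: grab the oldest pending change
SELECT `date`, `table_name`, `row_id`
FROM `database_changes`
ORDER BY `date`
LIMIT 1;
-- step 2 happens outside the database: run the batch job for that row
-- step 3: remove the processed entry
DELETE FROM `database_changes`
WHERE `table_name` = 'table_name' AND `row_id` = 1
LIMIT 1;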
You could do what replication does: hang on the 'binary log'. Set up your server as a 'master server', and instead of adding a 'slave server', run mysqlbinlog. You'll get a stream of every command that modifies your database.
Or step in 'between' the client and server: check MySQLProxy. You point it to your server, and point your client(s) to the proxy. It lets you interpose Lua scripts to monitor, analyze or transform any SQL command.
I think it's going to require adding a User-Defined Function, which I believe requires recompilation:
MySQL FAQ - Triggers: Can triggers call an external application through a UDF?
I think it's really a MUCH better idea to have some external process poll changes to the table and execute the external program - you could also have a column which contains the status of this external program run (e.g. "pending", "failed", "success") - and just select rows where that column is "pending".
It depends how soon the batch job needs to be run. If it's something which needs to be run "sooner or later" and can fail and need to be retried, definitely have an app polling the table and running them as necessary.
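A sketch of that status-column idea, assuming a hypothetical monitored table called jobs with an id column:
ALTER TABLE jobs
  ADD COLUMN run_status VARCHAR(10) NOT NULL DEFAULT 'pending';
-- the polling process picks up unprocessed rows...
SELECT * FROM jobs WHERE run_status = 'pending';
-- ...and records the outcome after running the external program
UPDATE jobs SET run_status = 'success' WHERE id = 1;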