Deadlock found when trying to get lock; try restarting transaction - mysql

I am sending bulk invitations to my users from my website. To do that, I pass a comma-separated string of email addresses to a stored procedure in MySQL. Inside the procedure, a WHILE loop parses each email (using SUBSTRING() with the comma as the separator), checks the existing database, and either inserts the email into the table if it is absent or generates an email link with a GUID if it already exists. The process works fine for small batches (e.g. below 200-250 emails), but for larger batches (250+ emails) the whole process gets stuck and subsequent requests fail with deadlock errors (the original error is: "Deadlock found when trying to get lock; try restarting transaction"). So I planned to move the loop into my JavaScript or C# code instead and send each email to the stored procedure one at a time.
In that scenario, however, the number of MySQL connections would increase and the max-connections error might occur.
So I want to ask: what is the best method for doing this kind of job with MySQL?

I think passing the emails to the procedure one at a time is the right approach, but you don't need to make a new connection, or even a new request, for each item. Most languages support prepared statement execution (and here's the answer on how to use them in C#).
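In MySQL terms, the driver-level prepared statement corresponds roughly to the sketch below; process_invite is a hypothetical one-email version of your procedure, and the addresses are just examples:

-- Prepare the CALL once, then execute it for each email over the same connection.
PREPARE send_invite FROM 'CALL process_invite(?)';
SET @email = 'alice@example.com';
EXECUTE send_invite USING @email;
SET @email = 'bob@example.com';
EXECUTE send_invite USING @email;
DEALLOCATE PREPARE send_invite;

This way the statement is parsed once, and the loop only binds a new value per iteration, all over a single connection.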
The deadlocks, in turn, may be caused by your own code, but without a snippet of it it's hard to tell. Maybe your procedure isn't re-entrant, or the data is also accessed from some other location.

Related

PL/SQL not executing consistently from Access ODBC call

I have a Microsoft Access frontend database that interfaces with an Oracle backend. I have a variety of PL/SQL procedures that get triggered through processes in the application and I have never had an issue like this. This particular procedure does not seem to execute some of the DML statements in the procedure when the procedure is run through the app (the Access database interface); HOWEVER, when I run the same procedure from my SQL Developer environment it runs perfectly every time. More specifically, it does not seem to execute some delete statements that I have in the procedure despite the fact that I have COMMIT after all of them.
Here are a few other details that may be making the difference:
The table that gets modified by the PL/SQL procedure initially gets data from a SQL Loader replace job that moves data from the client back to this staging table.
This staging table has an auto-increment primary key that is created by a before-insert trigger on the table. There does not seem to be any issue moving records back to this table with SQL Loader, nor any kind of integrity constraint failure. This all happens in the application BEFORE the stored procedure is called.
This particular table is also linked through the ODBC connection in the Access database, as it is used by a bound form after the above procedure is run. I have tested whether the form is just somehow not reflecting the data in the backend table, but it is reflecting it correctly.
Again, if I run the process in the application I get the incorrect results. Immediately afterwards I run the exact same procedure from SQL Developer and it corrects the data every time.
So I believe I finally figured this out. The issue was a timing problem between SQL Loader moving the data back to the Oracle staging table and the PL/SQL procedure getting triggered in the application. Since I have a before-insert trigger on my stage table, I could not use direct load (direct=true) in the .bat file that kicks off my SQL Loader job. As a result, the records take longer to move to the backend, and in this case my PL/SQL procedure was getting triggered before all of the records had been moved to the staging table. This explains the intermittent nature of the problem that was driving me nuts. I solved it by making sure the record count in the delimited file that SQL Loader was loading matched the record count in my stage table before I triggered the procedure to run. Lesson learned.
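For reference, the check itself is trivial; stage_table is a placeholder name here, and the expected count is read from the delimited file by the calling code:

-- Only trigger the procedure once the staging table holds as many rows
-- as the delimited file that SQL Loader just loaded.
SELECT COUNT(*) AS loaded_rows FROM stage_table;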

mail notifications for failed scheduled procedures - MySQL

I have a couple of stored procedures that are scheduled to run at night because execution times are too long to execute them intraday. Usually that works fine.
However, I (and sometimes others) regularly need to adjust lines. As the whole procedure has well over 1,000 lines, it has happened that people unintentionally introduced small syntax errors (forgetting an alias, for example).
Is there a way to trigger some kind of error notification (preferably by mail) in case the procedure does not execute completely? I've done some research but could not find anything - so I guess it's not possible. Or is it?
Use a DECLARE ... HANDLER statement (https://dev.mysql.com/doc/refman/8.0/en/declare-handler.html) to detect SQL errors, and then insert into a table or update an entry.
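A minimal sketch of that pattern, assuming a logging table called procedure_errors and a procedure called nightly_job (both names are placeholders for your own):

CREATE TABLE procedure_errors (
  id INT AUTO_INCREMENT PRIMARY KEY,
  proc_name VARCHAR(64),
  logged_at DATETIME,
  error_message TEXT
);

DELIMITER //
CREATE PROCEDURE nightly_job()
BEGIN
  -- On any SQL error, log the message and stop the procedure.
  DECLARE EXIT HANDLER FOR SQLEXCEPTION
  BEGIN
    GET DIAGNOSTICS CONDITION 1 @msg = MESSAGE_TEXT;
    INSERT INTO procedure_errors (proc_name, logged_at, error_message)
    VALUES ('nightly_job', NOW(), @msg);
  END;

  -- ... the 1000+ lines of the real job go here ...
END//
DELIMITER ;

An external script (or a morning query) can then check procedure_errors and send the notification from there.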
Sending the email itself could eventually be done with user-defined functions (http://www.mysqludf.org/about.html), but it is not recommended to add functionality like sending email to your database.

My way around SQL injection

I'm not an expert, but I do have a web front end processing orders whose data needs to be entered for further logins. Instead of using that database, I created another one with an extra column called status. Initially, when orders are processed, they are set to 0. A cron job runs every 3 minutes, polling this database for all users with status 0. When it runs, the cron sets the status of all currently processed users to 1 (so if any users are inserted while the script is running, they will be processed the next time, which is only 3 minutes later).
After the status of all new users is set to 1, just the password and email fields are dumped to a file and then loaded via "LOAD DATA INFILE" back into the real database that users need to log in to with their client. There is no web login form; it is for email only, using an IMAP client. However, I do use the root account for the cron, since I realized I needed to grant all privileges to a user for the data dump, and if that is going to be the case, I might as well just use root to update the status column first, then dump the new data to a file, then load it into the new DB, and finally go back and delete all users with status 1. It is a simple 4-line script running mysql from the command line.
Is this a safe bet, or am I risking something by running a root cron every 3 minutes? I don't see how I can possibly have an issue, since I never use root to process the web stuff. I use a separate MySQL user with only INSERT privileges for the web front end to process new orders. Any comments? I feel like this way I can avoid SQL injection, even though my MySQL user still has limited privileges; there always might be something I don't know about.
Is this a safe bet or am I risking something
As long as it's a simple LOAD DATA INFILE query - no. However,
Instead of using that database, I created another one with an extra column called status.
Such a flying circus is absolutely unnecessary.
It doesn't protect you from injection anyway.
Instead, you have to use prepared statements for ALL the queries in your application.
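As a rough illustration of what a parameterized query looks like at the SQL level (your driver's prepared-statement API does the equivalent behind the scenes; the table and column names here are invented):

-- User-supplied values are bound to placeholders as data, never concatenated
-- into the SQL string, so they cannot change the shape of the query.
PREPARE new_order FROM 'INSERT INTO orders (email, password_hash) VALUES (?, ?)';
SET @email = 'user@example.com';
SET @hash  = 'hashed-password-here';
EXECUTE new_order USING @email, @hash;
DEALLOCATE PREPARE new_order;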

Deadlock while inserting records

I have an application that sends bulk invitations to my users. For that I am inserting thousands of records into one table. My stored procedure accepts a comma-separated string from the user as a parameter, then splits it by comma, parses each email from that string in a loop, and inserts each individual email as a record in the table.
The main problem is that when multiple users send requests to this stored procedure at the same time, MySQL throws a "deadlock" error, because each user connects to MySQL over a different connection.
So, my question is: what is the proper solution for this kind of task? Or is this a problem with my database configuration? I am using an Amazon RDS (MySQL) large instance, and a user can send 2,000 emails at a time. One more thing: I am not using transaction... commit... rollback.
I posted this use case as a question earlier, but I didn't get a proper answer. Here are those links:
1) Deadlock found when trying to get lock; try restarting transaction
2) https://stackoverflow.com/questions/19091968/deadlock-found-when-trying-to-get-lock-try-restarting-transaction-2nd-try
Thanks

When a new row is added to the database, an external command line program must be invoked

Is it possible for a MySQL database to invoke an external .exe file when a new row is added to one of the tables in the database?
I need to monitor the changes in the database, so when a relevant change is made, I need to do some batch jobs outside the database.
Chad Birch has a good idea with using MySQL triggers and a user-defined function. You can find out more in the MySQL CREATE TRIGGER Syntax reference.
But are you sure that you need to call an executable right away when the row is inserted? It seems like that method will be prone to failure, because MySQL might spawn multiple instances of the executable at the same time. If your executable fails, there will be no record of which rows have been processed and which have not. If MySQL waits for your executable to finish, then inserting rows might be very slow. Also, if Chad Birch is right, then you will have to recompile MySQL, so it sounds difficult.
Instead of calling the executable directly from MySQL, I would use triggers to simply record the fact that a row got INSERTED or UPDATED: record that information in the database, either with new columns in your existing tables or with a brand new table called, say, database_changes. Then make an external program that regularly reads the information from the database, processes it, and marks it as done.
Your specific solution will depend on what parameters the external program actually needs.
If your external program needs to know which row was inserted, then your solution could be like this: Make a new table called database_changes with fields date, table_name, and row_id, and for all the other tables, make a trigger like this:
DELIMITER $$
CREATE TRIGGER `my_trigger`
AFTER INSERT ON `table_name`
FOR EACH ROW BEGIN
  -- Record which row was inserted and when.
  INSERT INTO `database_changes` (`date`, `table_name`, `row_id`)
  VALUES (NOW(), 'table_name', NEW.id);
END$$
DELIMITER ;
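A possible definition for the database_changes table assumed above (the date, table_name, and row_id columns come from the answer; the surrogate id and the types are just a reasonable guess):

CREATE TABLE `database_changes` (
  `id` INT AUTO_INCREMENT PRIMARY KEY,
  `date` DATETIME NOT NULL,
  `table_name` VARCHAR(64) NOT NULL,
  `row_id` INT NOT NULL
);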
Then your batch script can do something like this:
Select the first row in the database_changes table.
Process it.
Remove it.
Repeat 1-3 until database_changes is empty.
With this approach, you can have more control over when and how the data gets processed, and you can easily check to see whether the data actually got processed (just check to see if the database_changes table is empty).
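One polling pass could look roughly like this in SQL (it assumes the id column from the sketch above; the "process it" step is whatever your external program actually does):

START TRANSACTION;
-- Grab the oldest unprocessed change and lock it so two workers don't take it.
SELECT `id` INTO @change_id
  FROM `database_changes`
 ORDER BY `id`
 LIMIT 1
 FOR UPDATE;
-- ... hand the corresponding table_name and row_id to the external program here ...
DELETE FROM `database_changes` WHERE `id` = @change_id;
COMMIT;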
You could do what replication does: hook into the binary log. Set up your server as a master server, and instead of adding a slave server, run mysqlbinlog. You'll get a stream of every command that modifies your database.
Step in between the client and the server: check MySQL Proxy. You point it at your server and point your client(s) at the proxy. It lets you interpose Lua scripts to monitor, analyze, or transform any SQL command.
I think it's going to require adding a User-Defined Function, which I believe requires recompilation:
MySQL FAQ - Triggers: Can triggers call an external application through a UDF?
I think it's really a MUCH better idea to have some external process poll the table for changes and execute the external program - you could also have a column which contains the status of the external program's run (e.g. "pending", "failed", "success") - and just select rows where that column is "pending".
It depends on how soon the batch job needs to run. If it's something that needs to run "sooner or later" and can fail and be retried, definitely have an app polling the table and running the jobs as necessary.
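A hedged sketch of that status-column approach (the jobs table name, the "running" value, and the batch size are placeholders; only "pending", "failed", and "success" come from the answer above):

-- Claim a batch of pending rows so the next run doesn't pick them up too.
UPDATE `jobs` SET `status` = 'running' WHERE `status` = 'pending' LIMIT 100;

-- Fetch what was just claimed and run the external program for each row.
SELECT * FROM `jobs` WHERE `status` = 'running';

-- Record the outcome once the external program reports back.
UPDATE `jobs` SET `status` = 'success' WHERE `status` = 'running';
-- (use 'failed' instead for rows whose run failed, so they can be retried)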