Seeking an example of a procedure that uses row_count - mysql

I want to write a procedure that will handle the insert of data into 2 tables. If the insert should fail in either one then the whole procedure should fail. I've tried this many different ways and cannot get it to work. I've purposefully made my second insert fail but the data is inserted into the first table anyway.
I've tried to nest IF statements based on ROW_COUNT(), but even though the data fails on the second insert, the data is still being inserted into the first table. I'm looking for a total of 2 affected rows.
Can someone please show me how to handle multiple inserts and rollback if one of them fails? A short example would be nice.

If you are using InnoDB tables (or another compatible engine) you can use the transaction feature of MySQL, which allows you to do exactly what you want.
Basically you:
start the transaction,
do the queries, checking each result,
if every result is OK, call COMMIT,
otherwise call ROLLBACK to void all the queries within the transaction.
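For example, here is a minimal sketch of such a procedure (table and column names are made up; DECLARE EXIT HANDLER and RESIGNAL need MySQL 5.5+):
DELIMITER //
CREATE PROCEDURE insert_both(IN p_a INT, IN p_b INT)
BEGIN
    -- if either INSERT raises an error, undo everything and re-raise
    DECLARE EXIT HANDLER FOR SQLEXCEPTION
    BEGIN
        ROLLBACK;
        RESIGNAL;
    END;
    START TRANSACTION;
    INSERT INTO table_one (col_a) VALUES (p_a);
    INSERT INTO table_two (col_b) VALUES (p_b);
    COMMIT; -- only reached if both inserts succeeded
END //
DELIMITER ;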
You can read an article about it, with examples, here.
HTH!

You could try turning autocommit off. It might be automatically committing your first insert even though you haven't explicitly committed the transaction that's been started:
SET autocommit = 0;
START TRANSACTION;
......
COMMIT; -- or ROLLBACK; if something failed

Related

Unable to Iterate over rows being inserted during AFTER INSERT trigger - MySQL 5.6

I hope you can offer some words of wisdom on an issue I've been struggling with. I am using a MySQL 5.6 trigger to copy data during inserts into a separate table (not the one I'm inserting into).
I'm also modifying the data as it's being copied, and I need to compare rows within the insert to each other. Due to the lack of support for FOR EACH STATEMENT, I cannot act on the entire insert dataset while the transaction is still in progress. It appears I can only work on the current row, as per the supported FOR EACH ROW syntax.
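For context, a minimal sketch of the kind of trigger I mean (table and column names changed):
CREATE TRIGGER copy_on_insert
AFTER INSERT ON source_table
FOR EACH ROW
    -- only the current row (NEW.*) is visible here; the other rows of
    -- the same bulk INSERT cannot be queried from source_table
    INSERT INTO copy_table (id, val) VALUES (NEW.id, NEW.val);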
Does anybody know a way to overcome this?
thanks!
UPDATE 18/01/18: #solarflare thanks for your answers; I looked into splitting the operation into an INSERT followed by a call to a stored procedure. It would work, but it's not a path I want to go down, as it breaks the atomicity of the operation. I tested the same code on PostgreSQL and it works fine.
It appears that when performing a bulk insert, an AFTER INSERT ... FOR EACH ROW trigger in MySQL takes a snapshot of the table as it was before the bulk-insert transaction started and lets you query that snapshot, but you cannot query the other rows of the insert (even if they have already been inserted).
In PostgreSQL this is not the case: as rows are inserted, the trigger can see them, even if the transaction is not fully committed. Have you ever seen this / do you know whether this is a configurable parameter in MySQL, or is it a design choice?

Do Sql Update Statements run at the same time if requested at the same time?

If two independent scripts call a database with update requests to the same field, but with different values, would they execute at the same time and one overwrite the other?
As an example, to help ensure clarity, imagine both of these statements being requested at the same time, each by a different script, where Status = 2 is called microseconds after Status = 1 by coincidence.
UPDATE My_Table SET Status = 1 WHERE Status = 0;
UPDATE My_Table SET Status = 2 WHERE Status = 0;
What would my results be, and why? If other factors play a role, expand on them as much as you please; this is meant to be a general idea.
Side Note:
Because I know people will still ask: my situation is MySQL on Google App Engine, but I don't want to limit this question to just me, should it be useful to others. I am using Status as an identifier for which script is doing work on the row; if Status is not 0, no other script is allowed to touch it.
This is what locking is for. All major SQL implementations lock DML statements by default so that one query won't overwrite another before the first is complete.
There are different levels of locking. If you've got row locking then your second update will run in parallel with the first, so at some point you'll have 1s and 2s in your table.
Table locking would force the second query to wait for the first query to completely finish and release its table lock.
You can usually turn off locking right in your SQL, but it should only ever be done if you need a performance boost and you know you won't encounter race conditions like the one in your example.
Edits based on the new MySQL tag
If you're updating a table that uses the InnoDB engine, then you're working with row locking, and your query could yield a table with both 1s and 2s.
If you're working with a table that uses the MyISAM engine, then you're working with table locking, and your update statements would leave the table with either all 1s or all 2s, depending on which statement ran first.
From https://dev.mysql.com/doc/refman/5.0/en/lock-tables-restrictions.html (MySQL):
Normally, you do not need to lock tables, because all single UPDATE statements are atomic; no other session can interfere with any other currently executing SQL statement. However, there are a few cases when locking tables may provide an advantage:
From https://msdn.microsoft.com/en-us/library/ms177523.aspx (SQL Server):
An UPDATE statement always acquires an exclusive (X) lock on the table it modifies, and holds that lock until the transaction completes. With an exclusive lock, no other transactions can modify data.
If you had two separate connections executing the two posted update statements, whichever statement started first would be the one that completed. The other statement would not update the data, as there would no longer be any records with a status of 0.
The short answer is: it depends on which statement commits first. Just because one process started an update statement before another doesn't mean it will complete first. It might not get scheduled first, it might be blocked by another process, etc.
Ultimately, it's a race condition: the operation that completes (and commits) last, wins.
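As a sketch of that race with two InnoDB sessions (using My_Table from the question):
-- Session A:
START TRANSACTION;
UPDATE My_Table SET Status = 1 WHERE Status = 0; -- row-locks every matching row

-- Session B, microseconds later (blocks on A's row locks):
UPDATE My_Table SET Status = 2 WHERE Status = 0;

-- Session A:
COMMIT; -- B resumes, finds no rows left with Status = 0, and updates nothing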
Since you have TWO scripts doing the same thing with different values for the UPDATE, they will NOT run at the same time; one of the scripts will run before the other even if you think you are calling them at the same time. You need to specify WHEN each script should run; otherwise the program will not know which rows should be 1 and which should be 2.

mysql non native, select only if table exists

This type of question has been posted a few times, but the solutions offered are not ideal in the following situation. In the first query, I'm selecting table names that I know exist when the first query is executed. Then, while looping through them, I want to query the number of records in each selected table, but only if it still exists. The problem is that, during the loop, some of the tables are dropped by another script. For example:
SELECT tablename FROM table
-- returns, say, 100 tables
foreach my $table (keys %tables) {
    SELECT COUNT(*) FROM $table
    -- by the time it gets to the umpteenth table, it's been dropped,
    -- so the SELECT COUNT(*) fails
}
And, I guess because it's run by cron, it fails fatally, and I get sent an email from cron stating that it failed.
DBD::mysql::st execute failed: Table 'xxx' doesn't exist at
/usr/local/lib/perl/5.10.1/Mysql.pm line 175.
The script is using the deprecated Mysql.pm Perl module.
Obviously you need to secure the table to make sure it won't get deleted before you execute your query. Keep in mind that if you begin with some kind of table lock to avoid the possible drop, a DROP TABLE query issued from somewhere else will fail with a lock error, or at least will wait until your SELECT finishes. Dropping a table isn't a frequently used operation, so in most cases the schema design persists while the server is running; what you observe is really rare behaviour. In general, preventing a table from being dropped during another query just isn't supported. However, in the comments of the document below you may find a trick that uses semaphore tables to achieve it.
http://dev.mysql.com/doc/refman/5.1/en/lock-tables.html
"A table lock protects only against inappropriate reads or writes by other sessions. The session holding the lock, even a read lock, can perform table-level operations such as DROP TABLE. Truncate operations are not transaction-safe, so an error occurs if the session attempts one during an active transaction or while holding a table lock."
"If you need to do things with tables not normally supported by read or write locks (like dropping or truncating a table), and you're able to cooperate, you can try this: Use a semaphore table, and create two sessions per process. In the first session, get a read or write lock on the semaphore table, as appropriate. In the second session, do all the stuff you need to do with all the other tables."
You should be able to protect your Perl code from failing by putting it into an eval block. Something like this:
eval {
    # try doing something with DBD::mysql
};
if ($@) {
    # oops, the mysql code failed;
    # probably need to try it again
}
Or even put this in a while loop.
If you used a better server like Postgres, the right solution would be to enclose everything in a transaction. But in MySQL, dropping a table is not protected by transactions.
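Another option, not covered by the answer above: ask information_schema whether the table still exists just before counting. The table can still vanish between the check and the COUNT(*), so keep the eval anyway:
SELECT COUNT(*) FROM information_schema.tables
WHERE table_schema = DATABASE() AND table_name = 'xxx';
-- 1 means the table still exists, 0 means it has been dropped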

Alternative To READ UNCOMMITED With FOR UPDATE

We have 2 scripts/mysql connections that are grabbing rows from a table. Once a script grabs some rows, the other script must not be able to access those rows.
What I've got so far, and which seems to work, is this:
SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
START TRANSACTION;
SELECT * FROM table WHERE result='new' FOR UPDATE;
-- loop over the rows, updating each one:
UPDATE table SET result='old' WHERE id=...;
COMMIT;
From what I understand, the same connection could read the dirty data, but other connections shouldn't be able to, since the rows are locked. Is this correct?
Also, is there a better way of guaranteeing that each row can only be SELECTed one time with both scripts running?
edit:
Oh... and the engine is InnoDB.
edit: Also, I'd like to avoid deadlocks, unless they really have no effect, in which case I could just prepare for them and rerun the query.
SELECT ... FOR UPDATE sets an exclusive lock on the rows; if that's not possible, it waits for the lock to be released. The main aim of the SELECT ... FOR UPDATE statement is to prevent others from reading certain rows while you are manipulating them.
If I get your question right, by 'dirty data' you mean those locked rows?
I don't see why you call them 'dirty', because they are just locked; but indeed, inside the same transaction you can read the rows you've locked (obviously).
Regarding your second question
Also is there a better way of guaranteeing that each row can only be
SELECT one time with both scripts running?
SELECT ... FOR UPDATE guarantees that at any given moment certain rows can be read only inside one transaction. I don't see a better way to do it, since this statement was designed specifically for that purpose.
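A sketch of that guarantee with the two scripts from the question (both using InnoDB):
-- script 1:
START TRANSACTION;
SELECT * FROM table WHERE result='new' FOR UPDATE; -- locks the matching rows

-- script 2 (blocks at the SELECT until script 1 commits):
START TRANSACTION;
SELECT * FROM table WHERE result='new' FOR UPDATE;

-- script 1:
UPDATE table SET result='old' WHERE id=...;
COMMIT; -- script 2 resumes and no longer sees those rows as 'new'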

How can I undo a mysql statement that I just executed?

How can I undo the most recently executed mysql query?
If you define the table type as InnoDB, you can use transactions. You will need to SET AUTOCOMMIT=0, and afterwards you can issue COMMIT or ROLLBACK at the end of a query or session to apply or cancel a transaction.
ROLLBACK -- will undo the changes that you have made
You can only do so during a transaction.
BEGIN;
INSERT INTO xxx ...;
DELETE FROM ...;
Then you can either:
COMMIT; -- will confirm your changes
Or
ROLLBACK; -- will undo your previous changes
Basically: If you're doing a transaction just do a rollback. Otherwise, you can't "undo" a MySQL query.
For some instructions, like ALTER TABLE, this is not possible with MySQL, even with transactions (1 and 2).
You can stop a query that is still being processed:
Find the id of the query process by => SHOW PROCESSLIST;
Then => KILL id;
In case you need to undo more than just your last query (although your question actually only asks about that, I know), and a transaction therefore might not help you out, you need to implement a workaround:
Copy the original data before committing your query, and write it back on demand, based on a unique id that must be the same in both tables: your rollback-table (with the copies of the unchanged data) and your actual table (containing the data that should be "undone").
For databases with many tables, one single "rollback-table" containing structured dumps/copies of the original data would be better than one for each actual table. It would contain the name of the actual table, the unique id of the row, and, in a third field, the content in any desired format that represents the data structure and values clearly (e.g. XML). Based on the first two fields, this third one would be parsed and written back to the actual table. A fourth field with a timestamp would help in cleaning up this rollback-table.
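A sketch of such a rollback-table with those four fields (all names hypothetical):
CREATE TABLE rollback_log (
    actual_table VARCHAR(64) NOT NULL, -- name of the actual table
    row_id       BIGINT      NOT NULL, -- unique id of the row in that table
    row_data     TEXT        NOT NULL, -- the original row, serialized (e.g. as XML)
    saved_at     TIMESTAMP   NOT NULL DEFAULT CURRENT_TIMESTAMP -- for cleanup
);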
Since there is no real undo in SQL dialects apart from ROLLBACK in a transaction (please correct me if I'm wrong - maybe there is one now), this is the only way, I guess, and you have to write the code for it yourself.