When I send a multi-line, semicolon-separated query (i.e. 3 separate statements), it works fine: depending on whether I finish it with a COMMIT or a ROLLBACK, it either inserts the values or rolls them back. BUT when I enter them as three separate queries, one after another, it doesn't work. (I'm using phpMyAdmin.)
The latter would have to make more sense, as I think this is the whole point of transactions: to send queries in a session (transaction) and decide only at the end whether we want to run them or discard the changes to the table.
START TRANSACTION;
INSERT INTO x VALUES ('y');
COMMIT;
phpMyAdmin doesn't work that way: it doesn't maintain the session between each submission of the form, so you won't get the functionality you're looking for that way.
In code, on the other hand, this works as intended, because you open the connection once, run the 3 individual queries, and then close the connection.
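A minimal sketch of that "one open connection" behavior, using Python's stdlib sqlite3 purely so the demo is self-contained (a MySQL connection via MySQLi/PDO behaves the same way as long as the connection stays open):

```python
import sqlite3

# isolation_level=None puts the connection in autocommit mode, so we
# control transactions explicitly -- just like typing statements one by
# one into a client that keeps a single session open.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE x (val TEXT)")

# Three separate statements over one open connection: this works,
# because the session (and therefore the transaction) survives.
conn.execute("BEGIN")                        # START TRANSACTION
conn.execute("INSERT INTO x VALUES ('y')")   # separate statement, same session
conn.execute("ROLLBACK")
print(conn.execute("SELECT COUNT(*) FROM x").fetchone()[0])  # 0 -- rolled back

conn.execute("BEGIN")
conn.execute("INSERT INTO x VALUES ('y')")
conn.execute("COMMIT")
print(conn.execute("SELECT COUNT(*) FROM x").fetchone()[0])  # 1 -- committed
```

phpMyAdmin fails at exactly the step this demo takes for granted: keeping the same session alive between statements.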
Related
I'm not sure if this is an issue with phpMyAdmin, or that I'm not fully understanding how transactions work, but I want to be able to step through a series of queries within a transaction, and either ROLLBACK or COMMIT based on the returned results. I'm using the InnoDB storage engine.
Here's a basic example:
START TRANSACTION;
UPDATE students
SET lastname = "jones"
WHERE studentid = 1;
SELECT * FROM students;
ROLLBACK;
As a single query, this works entirely fine, and if I'm happy with the results, I could re-run the entire query with COMMIT.
However, if all these queries can be run separately, why does phpMyAdmin lose the transaction?
For example, if I do this:
START TRANSACTION;
UPDATE students
SET lastname = "jones"
WHERE studentid = 1;
SELECT * FROM students;
Then this:
COMMIT;
SELECT * FROM students;
The update I made in the transaction is lost, and lastname retains its original value, as if the update never took place. I was under the impression that transactions can span multiple queries, and I've seen a couple of examples of this:
1: Entirely possible in Navicat, a different IDE
2: Also possible in PHP via MySQLi
Why then am I losing the transaction in phpMyAdmin, if transactions are able to span multiple individual queries?
Edit 1: After doing a bit of digging, it appears that there are two other ways a transaction can be implicitly ended in MySQL:
Disconnecting a client session will implicitly end the current transaction. Changes will be rolled back.
Killing a client session will implicitly end the current transaction. Changes will be rolled back.
Is it possible that phpMyAdmin is ending the client session after Go is hit and a query is submitted?
Edit 2:
Just to confirm this is a phpMyAdmin-specific issue, I ran the same query across multiple separate queries in MySQL Workbench, and it worked exactly as intended, retaining the transaction, so it appears to be a failure on phpMyAdmin's part.
Is it possible that phpMyAdmin is ending the client session after Go is hit and a query is submitted?
That is pretty much how PHP works. You send the request, it gets processed, and once done, everything (including MySQL connections) gets thrown away. With the next request, you start afresh.
There is a feature called persistent connections, but even that does its own cleanup. Otherwise the code would have to somehow handle giving the same user the same connection, which could prove very difficult given the way PHP works.
I have two tables with a related key. I want to choose the best way to delete a row from tbl_one together with the tbl_two rows that have the related key. I tried using a DELETE with a JOIN to do this correctly, but I also found another, very simple way using two separate DELETE statements. Could you tell me which is better?
First method:
DELETE tbl_one, tbl_two
FROM tbl_one
JOIN tbl_two ON tbl_one.id = tbl_two.tbl_one_id
WHERE tbl_one.id = 1;
Second method:
DELETE FROM tbl_one WHERE id = 1;
DELETE FROM tbl_two WHERE tbl_one_id = 1;
The main point of concern is that the operation should be done in isolation (either both or none), so you should put the operations inside a transaction block.
In my view the first query works better, simply because the server only has to parse and execute a single statement rather than two.
If the foreign key constraint gets in the way, turn off the foreign_key_checks variable, run the query, and turn it back on afterwards.
NB: you can also make use of the cascading foreign key behavior (ON DELETE CASCADE) that MySQL provides.
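The "two statements inside a transaction block" advice can be sketched like this (Python's stdlib sqlite3 stands in for MySQL here so the example is self-contained; the table names are the ones from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)  # explicit transaction control
conn.executescript("""
    CREATE TABLE tbl_one (id INTEGER PRIMARY KEY);
    CREATE TABLE tbl_two (id INTEGER PRIMARY KEY, tbl_one_id INTEGER);
    INSERT INTO tbl_one VALUES (1);
    INSERT INTO tbl_two VALUES (10, 1);
""")

# Delete the child rows first, then the parent, inside one transaction:
# either both deletes take effect or neither does.
try:
    conn.execute("BEGIN")
    conn.execute("DELETE FROM tbl_two WHERE tbl_one_id = 1")
    conn.execute("DELETE FROM tbl_one WHERE id = 1")
    conn.execute("COMMIT")
except sqlite3.Error:
    conn.execute("ROLLBACK")   # any failure undoes both deletes

print(conn.execute("SELECT COUNT(*) FROM tbl_one").fetchone()[0])  # 0
print(conn.execute("SELECT COUNT(*) FROM tbl_two").fetchone()[0])  # 0
```

Deleting the child rows before the parent also avoids needing to touch foreign_key_checks at all.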
It does not matter if you use a single or multiple statements to alter database content, as long as you are using transactions. Without transactions two issues might arise:
another process accessing the data in between you running one statement and the next sees a state of the database that is "unclean", because only part of the statements has been processed. This can always happen in a system where more than one client can use the database at the same time, for example in web applications and the like.
a subsequent query might fail, for whatever reason. In that case only part of your statements have been processed and the rest have not. That leaves your database in an "undefined" state again, and a persistent one this time. You'd have to guard against this manually with error detection, but even then it might simply not be possible to fix the issue.
Relational database management systems offer transactions for this. Transactions allow you to "bundle" several statements into a single one from a logical point of view. You start a transaction, run your statements, then close the transaction. If something unexpected occurred, you can always "roll back" the transaction; that way you get a stable and clean database state, just like before the start of the transaction.
I am experiencing what appears to be the effects of a race condition in an application I am involved with. The situation is as follows, generally, a page responsible for some heavy application logic is following this format:
Select from test and determine if there are rows already matching a clause.
If a matching row already exists, we terminate here, otherwise we proceed with the application logic
Insert into the test table with values that will match our initial select.
Normally, this works fine and limits the action to a single execution. However, under high load and user-abuse where many requests are intentionally sent simultaneously, MySQL allows many instances of the application logic to run, bypassing the restriction from the select clause.
It seems to actually run something like:
select from test
select from test
select from test
(all of which pass the check)
insert into test
insert into test
insert into test
I believe this is done for efficiency reasons, but it has serious ramifications in the context of my application. I have attempted to use Get_Lock() and Release_Lock() but this does not appear to suffice under high load as the race condition still appears to be present. Transactions are also not a possibility as the application logic is very heavy and all tables involved are not transaction-capable.
To anyone familiar with this behavior, is it possible to turn this type of handling off so that MySQL always processes queries in the order in which they are received? Is there another way to make such queries atomic? Any help with this matter would be appreciated, I can't find much documented about this behavior.
The problem here is that you have, as you surmised, a race condition.
The SELECT and the INSERT need to be one atomic unit.
The way you do this is via transactions. You cannot safely make the SELECT, return to PHP, and assume the SELECT's results will reflect the database state when you make the INSERT.
If well-designed transactions (the correct solution) are as you say not possible - and I still strongly recommend them - you're going to have to make the final INSERT atomically check if its assumptions are still true (such as via an INSERT IF NOT EXISTS, a stored procedure, or catching the INSERT's error in the application). If they aren't, it will abort back to your PHP code, which must start the logic over.
By the way, MySQL likely is executing requests in the order they were received. It's possible with multiple simultaneous connections to receive SELECT A,SELECT B,INSERT A,INSERT B. Thus, the only "solution" would be to only allow one connection at a time - and that will kill your scalability dead.
Personally, I would go about the check another way.
Attempt to insert the row. If it fails, then there was already a row there.
In this manner, you check for a duplicate and insert the new row in a single query, eliminating the possibility of races.
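A sketch of that "insert first, detect the duplicate" approach (Python's stdlib sqlite3 standing in for MySQL so it runs anywhere; in MySQL a UNIQUE key plus catching the duplicate-key error plays the same role, and `try_claim` and the `token` column are hypothetical names for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)
# The UNIQUE constraint makes the duplicate check and the insert one
# atomic operation inside the database engine -- there is no
# SELECT-then-INSERT gap for a concurrent request to slip through.
conn.execute("CREATE TABLE test (token TEXT UNIQUE)")

def try_claim(token):
    """Return True if we inserted the row, False if it already existed."""
    try:
        conn.execute("INSERT INTO test (token) VALUES (?)", (token,))
        return True
    except sqlite3.IntegrityError:  # duplicate key: another request won the race
        return False

print(try_claim("job-42"))  # True  -- first caller proceeds with the heavy logic
print(try_claim("job-42"))  # False -- repeat/concurrent callers stop here
```

The insert now *is* the check, so simultaneous requests cannot all pass it the way simultaneous SELECTs can.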
I have more than three MySQL queries in a PHP script triggered by a scheduled task. If a query hits an error, the script throws an exception and rolls back that MySQL query. It works fine.
However, if the first query works fine but the 2nd doesn't, it throws an exception and rolls back the 2nd one but not the 1st.
I am using begin_trans(), commit and rollback() for individual queries, because sometimes I need to roll back one query and sometimes all queries. Is there any way to roll back either one query or all queries?
Thanks in advance
UPDATE:
I got it working. There was no problem with begin_trans(), commit and rollback(): the database connection config was different for one query than for the others. Crazy code without any comments!
The only thing that has to be rolled back is a write operation (INSERT, UPDATE, or DELETE). I'll assume that you're using the word "query" to mean something other than a SELECT operation.
If you want several SQL statements to succeed or fail together, you'll need to specify a transaction.
UPDATE:
Now I'm confused; it's no wonder that you are.
A transaction is an all-or-nothing proposition. It sounds to me like you're confusing two separate use cases: one where you want a single query in a transaction and another where you want several in one transaction. Combining the two is confusing you and, I'm sure, your users.
Once you commit a transaction, you can't roll it back. So you'll have to make up your mind: either operation A is part of its own transaction, or it's grouped with B, C, and D in another. But not both.
A rollback applies to all queries within the current transaction.
[edited after question update]
MySQL currently does not support nested transactions, so it's an all-or-nothing deal. You can only roll back all queries within a transaction or commit all (successful) ones.
The kind of rollback you get depends entirely on how you define the transaction.
And that depends on the business use case.
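The all-or-nothing behavior is easy to see in a small demo (Python's stdlib sqlite3 here for portability; MySQL with InnoDB behaves the same way):

```python
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)  # manage transactions by hand
conn.execute("CREATE TABLE t (n INTEGER)")

conn.execute("BEGIN")
conn.execute("INSERT INTO t VALUES (1)")  # query 1 succeeds
conn.execute("INSERT INTO t VALUES (2)")  # query 2 succeeds
conn.execute("ROLLBACK")                  # ...but ROLLBACK undoes BOTH of them

print(conn.execute("SELECT COUNT(*) FROM t").fetchone()[0])  # 0
```

There is no way to keep query 1 and discard query 2 from inside the same transaction; if you need them to succeed or fail independently, they belong in separate transactions.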
I want to write a procedure that will handle the insert of data into 2 tables. If the insert should fail in either one then the whole procedure should fail. I've tried this many different ways and cannot get it to work. I've purposefully made my second insert fail but the data is inserted into the first table anyway.
I've tried to nest IF statements based on the rowcount but even though the data fails on the second insert, the data is still being inserted into the first table. I'm looking for a total number of 2 affected rows.
Can someone please show me how to handle multiple inserts and rollback if one of them fails? A short example would be nice.
If you are using InnoDB tables (or other compatible engine) you can use the Transaction feature of MySQL that allows you to do exactly what you want.
Basically you start the transaction,
do the queries, checking each result.
If every result is OK you call COMMIT,
else you call ROLLBACK to void all the queries within the transaction.
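The steps above can be sketched like this (Python's stdlib sqlite3 so the example is self-contained; the `parent`/`child` tables and `insert_both` helper are hypothetical stand-ins for the question's two tables):

```python
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)  # explicit transaction control
conn.executescript("""
    CREATE TABLE parent (id INTEGER PRIMARY KEY);
    CREATE TABLE child  (id INTEGER PRIMARY KEY, parent_id INTEGER NOT NULL);
""")

def insert_both(parent_id, child_id, child_parent):
    try:
        conn.execute("BEGIN")                                        # start the transaction
        conn.execute("INSERT INTO parent VALUES (?)", (parent_id,))  # query 1
        conn.execute("INSERT INTO child VALUES (?, ?)", (child_id, child_parent))  # query 2
        conn.execute("COMMIT")                                       # every result OK -> COMMIT
        return True
    except sqlite3.Error:
        conn.execute("ROLLBACK")                                     # any failure voids BOTH inserts
        return False

print(insert_both(1, 10, 1))  # True  -- both rows committed
print(insert_both(2, 10, 2))  # False -- child id 10 is a duplicate, so parent 2 is rolled back too
print(conn.execute("SELECT COUNT(*) FROM parent").fetchone()[0])  # 1
```

This is exactly the fix for "my second insert fails but the first row is inserted anyway": the first insert is never committed on its own.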
You can read and article about with examples here.
HTH!
You could try turning autocommit off. The server might be automatically committing your first insert even though you haven't explicitly committed the transaction that was started:
SET autocommit = 0;
START TRANSACTION;
......
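What turning autocommit off buys you can be demonstrated with two connections to the same database (Python's stdlib sqlite3 here for a runnable sketch; `isolation_level=None` corresponds to autocommit being on):

```python
import os
import sqlite3
import tempfile

# Two connections to the same database file, so we can observe what a
# *second* session sees before and after COMMIT.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
writer = sqlite3.connect(path, isolation_level=None)  # autocommit mode
reader = sqlite3.connect(path, isolation_level=None)
writer.execute("CREATE TABLE t (n INTEGER)")

# With autocommit on, every statement commits by itself:
writer.execute("INSERT INTO t VALUES (1)")
print(reader.execute("SELECT COUNT(*) FROM t").fetchone()[0])  # 1 -- already visible

# With an explicit transaction (the effect of SET autocommit = 0),
# nothing is visible to other sessions until COMMIT:
writer.execute("BEGIN")
writer.execute("INSERT INTO t VALUES (2)")
print(reader.execute("SELECT COUNT(*) FROM t").fetchone()[0])  # still 1

writer.execute("COMMIT")
print(reader.execute("SELECT COUNT(*) FROM t").fetchone()[0])  # 2
```

If the first insert in your procedure is committed on its own, it was almost certainly running in autocommit mode outside any open transaction.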