I've encountered in my project a need to implement transactions - something I have never done before. I already checked that my autocommit is set to 1 - and I'm not sure whether I need to touch it at all?
Right now I have a set of scripts that all include a function that connects to the database first. That is a perfect place to put mysqli_begin_transaction($link); and mysqli_autocommit($link, FALSE);, so I'd have transactions everywhere regardless of whether a specific script needs them or not, and autocommit turned off - the documentation on php.net is very poor there, but AFAIR I should do this. So my questions no. 1 & 2 would be: Is it fine to start a transaction everywhere, regardless of whether the script needs it or not? And should I disable autocommit like this as well?
Now let's say that I have a script like this (sorry for not providing actual code, but my question is about how transactions work, not about the code itself):
~insert and/or update things
~do something aka "line 2"
~insert and/or update things again
Seems like an example taken right from the book. I obviously want all inserts and updates to happen, or none of them. Since I already started a transaction, I assume that nothing will commit unless I call mysqli_commit($link);. But here we have a little problem: I do not include any 'footer' at the end of my scripts, and adding one seems like a nightmare now, so I don't have any place to put the commit. So question no. 3 is: Will my queries commit automatically after the script ends (or when I call exit; or die();), even if I set autocommit to false? Or do I need to call commit / not turn off autocommit?
Now comes the case when something fails and I need to roll back. Same as above - do I need to call mysqli_rollback($link);, or is the pure fact that I did not call commit sufficient? I'm referring here to situations where the script does not end normally - say the server powers off while working on "line 2", or the script was stopped because it took too much time (set_time_limit stopped it).
This is a somewhat broad question, so I'll try to cover all the points as well as I can.
At first you can ignore the mysqli API (its transaction-specific functions are just wrappers) and go straight to the MySQL manual. The important thing here is that disabling autocommit and starting a transaction are the same thing. Also, a single query (including any modifications made by triggers) is always a transaction.
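For example, these two ways of starting a transaction are effectively interchangeable (a minimal sketch; connection details are made up, and mysqli_begin_transaction() requires PHP 5.5+):

$link = mysqli_connect('localhost', 'user', 'password', 'mydb');

// the API wrapper...
mysqli_begin_transaction($link);
// ...is just a thin layer over the SQL statement:
// mysqli_query($link, 'START TRANSACTION');

mysqli_commit($link); // likewise equivalent to: mysqli_query($link, 'COMMIT');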
The answer to your questions 1 & 2 is "probably not". It very much depends on what your existing code assumes about the database connection, and how your application is structured.
From what you mentioned in the question, the answer would be: it is better to put transactions only in the places that need them.
For question 3: it will not commit automatically. You can however make it do so by using register_shutdown_function, although I don't recommend doing that.
There are statements (implicit commits) which commit the transaction automatically. These include all DDL statements (CREATE, ALTER, ...) and also TRUNCATE, LOCK TABLES and others. This basically means those statements can't be used inside transactions.
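For example (made-up table names, just to show the effect):

mysqli_begin_transaction($link);
mysqli_query($link, "INSERT INTO orders (item) VALUES ('book')");
mysqli_query($link, "TRUNCATE TABLE temp_import"); // implicit commit!
mysqli_rollback($link); // too late: the INSERT above is already committed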
MySQL rolls back transactions when the connection is terminated.
I would recommend adding transactions only to the code which needs them (to be safe, you can do this for all code which performs more than one write query to the db).
The classic approach is:
START TRANSACTION
query
other things
another query
some other stuff
third query
...
COMMIT
The main thing here is to make sure you only commit if no errors have occurred.
Leave the rollback to connection termination (or register_shutdown_function if you are using persistent connections), because making sure each and every script has correctly working rollback logic is hard :)
This will make sure that nothing is committed if bad things happen (exceptions, fatal errors, time/mem limits, power outages, meteors...).
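In mysqli terms, a minimal sketch of that approach could look like this (table names and the do_other_things() helper are made up; MYSQLI_REPORT_STRICT makes every failed query throw, so the final commit is reached only when everything succeeded):

mysqli_report(MYSQLI_REPORT_ERROR | MYSQLI_REPORT_STRICT); // throw on errors

mysqli_begin_transaction($link);

mysqli_query($link, "INSERT INTO invoices (total) VALUES (100)"); // query
do_other_things();                                                // other things (hypothetical helper)
mysqli_query($link, "UPDATE accounts SET balance = balance - 100 WHERE id = 1"); // another query

// Reached only if no exception was thrown above.
// Note: there is deliberately no rollback call anywhere - if the script
// dies before this line, MySQL rolls back when the connection terminates.
mysqli_commit($link);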
It is also possible to have transactions at a function/method level (nested and stack-like), but that's out of scope for this question.
Related
I am having trouble finding an answer to this using Google or Stack Overflow, so perhaps people familiar with Percona XtraDB can help answer this. I fully understand how unexpected deadlocks can occur as outlined in the article below, and the solution is to make sure you wrap your transactions with retry logic so you can restart them if they fail. We already do that.
https://www.percona.com/blog/2012/08/17/percona-xtradb-cluster-multi-node-writing-and-unexpected-deadlocks/
My question is about normal updates that occur outside of a transaction, in autocommit mode. Normally, if you are writing only to a single SQL DB and perform an update, you get a last-in-wins scenario, so whoever executes the statement last is golden. If two updates occur at the same time, one of them takes hold and the other's data is essentially lost.
Now what happens in a multi-master environment with the same thing? The difference in cluster mode with multi-master is that the deadlock can occur at the point where the commit happens, as opposed to when the lock is first taken on the table. So in autocommit mode, the data will get written to the DB, but then it could fail when it tries to commit to the other nodes in the cluster if something else modified the exact same record at the same time. Clearly the simple solution is to re-execute the update, and it would seem to me that the database itself should be able to handle this, since it is a single statement in autocommit mode?
So is that what happens in this scenario, or do I need to start wrapping all my update code in retry handling as well and retry it myself when this fails?
Autocommit is still a transaction; a single-statement transaction. Your single statement is just wrapped in BEGIN/COMMIT for you. I believe your logic is inverted. In PXC, the rule is "commit first wins". If you start a manual transaction on node1 (i.e. autocommit=0; BEGIN;) and UPDATE id=1 without committing, and then on node2 you autocommit an update to the same row, that update will succeed on node2 and also be applied on node1. When you then commit the manual UPDATE, you will get a deadlock error. This is correct behavior.
It doesn't matter whether autocommit is on or not; whichever transaction commits first wins and the other must retry. This is the reason why we don't recommend writing to multiple nodes in PXC.
Yes, if you want to write to multiple nodes, you need to adjust your code to handle this error case with "try-catch-retry" logic.
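A hedged sketch of such a retry wrapper (the update_with_retry() helper and the table are made up; error code 1213 is MySQL's ER_LOCK_DEADLOCK, which PXC also returns on certification conflicts):

mysqli_report(MYSQLI_REPORT_ERROR | MYSQLI_REPORT_STRICT); // make mysqli throw

function update_with_retry(mysqli $link, string $sql, int $maxAttempts = 3): void
{
    for ($attempt = 1; $attempt <= $maxAttempts; $attempt++) {
        try {
            $link->query($sql);
            return; // success
        } catch (mysqli_sql_exception $e) {
            // retry only on deadlock/certification failure (1213)
            if ($e->getCode() !== 1213 || $attempt === $maxAttempts) {
                throw $e;
            }
            usleep(50000 * $attempt); // small backoff before retrying
        }
    }
}

// usage: a plain autocommit UPDATE, retried on conflict
update_with_retry($link, "UPDATE stock SET qty = qty - 1 WHERE sku = 'ABC'");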
I've got a table that I want to run a pretty long-running migration on (~20 mins). During this time the contents of the table should not be changed at all. However, the Rails frontend to this table (and many others) will remain up while the migration is running, and there is a very real chance that someone will try to modify some data (it's fine if that call ends up throwing an error, though).
We use MySQL and allow for 10 connections in our connection pool. Am I right in assuming that it is not enough to wrap this migration in a transaction, but that I would have to lock down the table itself as well?
If you really want to make sure no modifications at all happen to the table, the safest thing is to lock the table at the MySQL level.
If, however, you just want to make sure that no competing writes/overwrites happen, you could also use optimistic locking. One thing to mention is that this could mean the import script will complain and some saves might fail, because the frontend might have changed a record between the script's read and write.
Assuming that would be okay and you could just repeat those individual writes, this is how it would work:
By convention you have to add an integer column called lock_version to the table in question, and then you're magically set, in the way we love from Rails.
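Under the hood this boils down to a version-guarded UPDATE plus an affected-rows check; conceptually something like the following (sketched as plain SQL through mysqli rather than Rails' actual implementation; table and values are made up):

// read the row, remembering the version we saw
$row = mysqli_query($link, "SELECT id, lock_version FROM items WHERE id = 1")
           ->fetch_assoc();

// write back only if nobody bumped the version in the meantime
mysqli_query($link, sprintf(
    "UPDATE items SET name = 'new name', lock_version = lock_version + 1
     WHERE id = %d AND lock_version = %d",
    $row['id'],
    $row['lock_version']
));

if (mysqli_affected_rows($link) === 0) {
    // somebody else saved first; Rails raises ActiveRecord::StaleObjectError here
    throw new RuntimeException('stale record - reload and retry the save');
}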
There's a bit more to it which I encourage you to read about in the linked documentation and that we can discuss in the comments if you like.
The Setup
While working on some rather complex procedures I've started logging debug information into a _debug table, via a stored logging procedure, P_Log('message'), which just runs a simple INSERT into that table.
The complex procedures contain transactions, which are rolled back if an error is encountered. The problem is that any debug information logged during the course of the transaction is also rolled back. This is of course a little counterproductive, since you want to be able to see the debug logs precisely when the procedure does fail.
The Question
Is there any way I can insert into _debug without having the inserts rolled back? The log is really only to be used in development, and I would only ever write to it, so I don't care if it would violate how transactions are intended to be used.
And just out of curiosity, how is this normally handled? It seems like being able to write arbitrary log information from inside transactions (to check the states of variables, etc.), regardless of whether said transactions are rolled back, would be absolutely crucial for debugging errors. What's the best practice here?
Possible alternatives
Storing logs in variables and only writing them at the end of the procedure.
The problem with this is that I want to be able to insert an arbitrary number of debug entries. Creating a text variable and parsing it later would work, but seems very hacky.
Using some built-in log in MySQL
I'd actually be fine with this if it means I can write arbitrary text to it at will, but I haven't been able to find anything like that so far.
The simplest way would be to change your logs table to MyISAM.
It does not support transactions and will completely ignore them. Also MyISAM is a bit faster when you only insert and select from it.
The only other solution that I know of is to create a separate connection for the logs.
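A rough sketch of that idea at the application level (connection details are invented, and p_log() is a hypothetical stand-in for the stored P_Log procedure). Because the logging connection has its own transaction scope, its inserts survive a rollback on the main connection:

// option 1, as a one-liner: ALTER TABLE _debug ENGINE=MyISAM;

// option 2: a dedicated connection used only for logging
$logLink = mysqli_connect('localhost', 'user', 'password', 'mydb');

function p_log(mysqli $logLink, string $message): void
{
    $stmt = $logLink->prepare("INSERT INTO _debug (message) VALUES (?)");
    $stmt->bind_param('s', $message);
    $stmt->execute();
}

mysqli_begin_transaction($link);            // main connection
p_log($logLink, 'entering the risky part'); // committed immediately
mysqli_rollback($link);                     // the _debug row survives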
For all three SQL databases (MySQL, SQLite and PostgreSQL) I want / need to handle savepoints identically.
Now I have my application change different entries in the database in one big transaction, and I need some nested transactions for special behavior of the program.
So the question is, if I create something like:
BEGIN TRANSACTION;
--random insert/update statements
SAVEPOINT sp1;
--more random inserts/updates
SAVEPOINT sp2;
--inserts n stuff
(yes, the syntax may not be correct, it's just an example)
So I want to know: is it possible to roll back just the work between the two savepoints sp1 and sp2, without also rolling back the inserts/updates that came after sp2?
Savepoints will not do what you want. When you roll back to a savepoint, everything after that savepoint is rolled back, irrespective of whether later savepoints were created.
Think of savepoints like a "stack". You can't pull something out of the middle of the stack, you have to remove everything down to the layer you want.
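To make that concrete, a small sketch (via mysqli against a made-up table; SAVEPOINT and ROLLBACK TO SAVEPOINT behave this way in MySQL, SQLite and PostgreSQL alike):

mysqli_begin_transaction($link);
mysqli_query($link, "INSERT INTO t (val) VALUES (1)");
mysqli_query($link, "SAVEPOINT sp1");
mysqli_query($link, "INSERT INTO t (val) VALUES (2)");
mysqli_query($link, "SAVEPOINT sp2");
mysqli_query($link, "INSERT INTO t (val) VALUES (3)");

mysqli_query($link, "ROLLBACK TO SAVEPOINT sp1");
// rows 2 AND 3 are both gone, and sp2 no longer exists;
// there is no way to keep row 3 while discarding only row 2
mysqli_commit($link); // commits row 1 only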
You are probably looking for autonomous transactions. None of the databases you want to use support them. In PostgreSQL you can work around this using the dblink module to make a new connection to the database and do work with it; see http://www.postgresql.org/docs/current/static/dblink.html . I don't know what solutions MySQL or SQLite offer, but Google will help now that you know the term you are looking for.
I recommend that you find a way to work around this application design requirement if possible. Have your application use two database connections and two transactions to do what you need, taking care of co-ordinating the two as required.
How can I find out if there is a transaction open in MySQL? I need to start a new one if there is no transaction open, but I don't want to start a new one if there is one already running, because that would commit the running transaction.
UPDATE:
I need to query the database in one method of my application, but that query could be called as part of a bigger transaction, or it could be a transaction on its own. Changing the application to track whether it has opened a transaction would be more difficult, as the query can be triggered from many pieces of code. Although that would be possible, I'm looking for a solution that is faster to implement. A simple if statement in SQL would be effortless.
Thank you
I am assuming you are doing this as a one-off and not trying to establish something that can be done programmatically. You can get the list of currently active processes using SHOW PROCESSLIST:
http://dev.mysql.com/doc/refman/5.1/en/show-processlist.html
If you want something programmatic, then I would suggest taking an explicit lock on a table at the beginning of your transaction and releasing it at the end.
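As a further programmatic option that the above doesn't cover (a hedged sketch only): on InnoDB you can ask information_schema.innodb_trx whether the current connection already has a transaction open. Caveat: a transaction that has not yet touched any InnoDB table may not appear there yet.

$result = mysqli_query($link,
    "SELECT COUNT(*) AS open_trx
       FROM information_schema.innodb_trx
      WHERE trx_mysql_thread_id = CONNECTION_ID()");

$row = $result->fetch_assoc();

if ((int) $row['open_trx'] === 0) {
    mysqli_begin_transaction($link); // nothing was running, safe to start
}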