can someone explain the need to lock tables and/or rows in mysql?
I am assuming that it is to prevent multiple writes to the same field. Is this the best practice?
First, let's look at a good document. It is not MySQL documentation, it's about PostgreSQL, but it's one of the simplest and clearest docs I've read on transactions. You'll understand MySQL transactions better after reading this link: http://www.postgresql.org/docs/8.4/static/mvcc.html
When you're running a transaction, 4 rules are applied (ACID):
Atomicity: all or nothing (rollback)
Consistency: consistent before, consistent after
Isolation: not impacted by other concurrent transactions (in theory)
Durability: once committed, it's really done
Of these rules only one is problematic: Isolation. Using a transaction does not ensure a perfect isolation level. The link above explains better what phantom reads and similar isolation problems between concurrent transactions are. But to make it simple: you should really use row-level locks to prevent other transactions, running at the same time as yours (and maybe committing before yours), from altering the same records. But with locks come deadlocks...
So when you start using nice transactions with locks you'll need to handle deadlocks, and you'll need to handle the fact that a transaction can fail and should be re-launched (simple for or while loops).
Edit:------------
Recent versions of InnoDB provide greater levels of isolation than previous ones. I've done some tests and I must admit that even the phantom reads that should happen are now difficult to reproduce.
MySQL is by default on level 3 of the 4 levels of isolation explained in the PostgreSQL document (where PostgreSQL is on level 2 by default). This is REPEATABLE READ. That means you won't have dirty reads and you won't have non-repeatable reads. So someone modifying a row on which you made your SELECT inside your transaction will hit an implicit lock (as if you had performed a SELECT ... FOR UPDATE).
Warning: if you work with an older version of MySQL, like 5.0, you may be on level 2, and you'll need to perform the row lock yourself using the FOR UPDATE keywords!
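If you want to check which level your session is actually running at, a quick way is the following (the exact variable name depends on your MySQL version, so treat this as a sketch):

-- show the isolation level of the current session
-- (the variable is called @@tx_isolation up to MySQL 5.7, @@transaction_isolation from 8.0 on)
SELECT @@tx_isolation;
-- change it for the following transactions of this session if needed
SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;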
We can always find some nice race conditions; when working with aggregate queries it can be safer to be on the 4th level of isolation (by adding LOCK IN SHARE MODE at the end of your query) if you do not want people adding rows while you're performing some tasks. I've been able to reproduce one serializable-level problem, but I won't explain the example here; it involves really tricky race conditions.
There is a very nice example of race conditions that even the serializable level cannot fix here: http://www.postgresql.org/docs/8.4/static/transaction-iso.html#MVCC-SERIALIZABILITY
When working with transactions, the most important things are:
data used in your transaction must always be read INSIDE the transaction (re-read it if you had fetched it before the BEGIN)
understand why a high isolation level sets implicit locks and may block some other queries (and make them time out)
try to avoid deadlocks (lock tables in the same order everywhere) but handle them (retry a transaction aborted by MySQL)
try to freeze important source tables with the serializable isolation level (LOCK IN SHARE MODE) when your application code assumes that no insert or update should modify the dataset it's using (otherwise you won't get errors, but your results will simply have ignored the concurrent changes); see the sketch below
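For instance (the account table and column names here are made up for illustration only), a transaction following these rules could look like this; if MySQL aborts it with a deadlock error, roll back and replay the whole block:

START TRANSACTION;
-- re-read the data INSIDE the transaction and lock the row we intend to change
SELECT balance FROM account WHERE id = 42 FOR UPDATE;
-- decide the new value in application code, then write it
UPDATE account SET balance = balance - 10 WHERE id = 42;
COMMIT;
-- on error 1213 (deadlock) or a lock wait timeout: ROLLBACK and retry the whole transaction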
It is not a best practice. Modern versions of MySQL support transactions with well defined semantics. Use transactions, and forget about locking stuff by hand.
The only new thing you'll have to deal with is that transaction commits may fail because of race conditions, but you'd be doing error checking with locks anyway, and it is easier to retry the logic that led to a transaction failure than to recover from errors in a non-transactional setup.
If you do get race conditions and failed commits, then you may want to fine-tune the isolation configuration for your transactions.
For example if you need to generate invoice numbers which are sequential and have no numbers missing - this is a requirement at least in the country I live in.
If you have a few web servers, then a few users might be buying stuff literally at the same time.
If you do select max(invoice_id)+1 from invoice to get the new invoice number, two web servers might do that at the same time (before the new invoice has been added), and get the same invoice number for the invoices they're trying to create.
If you use a mechanism such as "auto_increment", it is only meant to generate unique values, and makes no guarantees about not skipping numbers (if one transaction tries to insert a row and then does a rollback, the number is "lost").
So the solution is to (a) lock the table, (b) select max(invoice_id)+1 from invoice, (c) do the insert, (d) commit and unlock the table.
On another note, in MySQL you're best off using InnoDB and row-level locking, since a LOCK TABLES command can implicitly commit the transaction you're working on.
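To give an idea, a row-locking variant of the same sequence could look like this (the invoice table and its columns are only an example; under InnoDB the locking read blocks concurrent transactions running the same statement until this one commits):

START TRANSACTION;
-- locking read: another transaction running the same SELECT waits here
SELECT COALESCE(MAX(invoice_id), 0) + 1 INTO @next_id FROM invoice FOR UPDATE;
INSERT INTO invoice (invoice_id, amount) VALUES (@next_id, 100.00);
COMMIT;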
Take a look here for a general introduction to what transactions are and how to use them.
Databases are designed to work in concurrent environments, so locking the tables and/or records helps to keep the transactions consistent.
So a record affected by one transaction should not be altered until this transaction commits or rolls back.
I am not able to get a clear complete understanding regarding the role of transactions in databases.
I know operations grouped in a transaction will be executed together and then either committed or rolled back.
But then what about any other query that I send to the database without manually creating a transaction?
Is a transaction created internally for them?
Also what about select statements then? Are transactions created for them too?
I have been using databases and SQL for some time now, but alas I am not clear on these points.
Do changes to DBs happen only through transactions? The short answer is yes.
There is always a transaction involved:
It might be automatically started (before) and committed (after) every single DML statement you issue, if you're relying on the AUTOCOMMIT behaviour of your database session
Or you may explicitly start one with BEGIN, execute your statements and end it with COMMIT (both cases are sketched below)
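In MySQL terms, the two cases look roughly like this (table and column names are just placeholders):

-- with autocommit on (the default), each statement is its own transaction:
UPDATE stock SET qty = qty - 1 WHERE id = 1;   -- committed immediately

-- with an explicit transaction, the statements commit (or roll back) together:
START TRANSACTION;
UPDATE stock SET qty = qty - 1 WHERE id = 1;
UPDATE stock SET qty = qty + 1 WHERE id = 2;
COMMIT;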
I like to think of a transaction as a boundary that imposes clear semantics of ATOMICITY and ISOLATION on the statements contained within it.
You describe atomicity (all-or-nothing behaviour), but that is not the only guarantee a transaction can give you: there's also isolation, which has to do with the reads you perform within transactions (e.g. SELECTs).
In a concurrent application (many clients writing and reading to the same db/table at the same time), transaction ISOLATION is the property that defines "how much of the effects of other operations" can be observed in the current one. For example, assume you need to perform a transaction that involves doing the same SELECT multiple times: do you want this SELECT to return (possibly) different results each time (because some modification happened concurrently) or not?
For single statements:
A single DML statement (UPDATE, INSERT...) by itself effectively behaves "as if it were in a transaction containing that single statement, which gets committed immediately after execution" (either because you're in AUTOCOMMIT mode, or because you wrapped the single statement within BEGIN...COMMIT)
For a single SELECT it's the same. The transaction in this case (implicit or not) gives you the possibility of specifying different isolation levels. It might sound strange to consider transactions for SELECTs, but requiring a particular isolation level might mean that the db is acquiring some lock on the data under the hood: committing the transaction in that case would release such locks.
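For example (the orders table is a placeholder), you can pick the isolation level for the next transaction before running your reads:

-- applies only to the next transaction started in this session
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
START TRANSACTION;
SELECT COUNT(*) FROM orders;  -- under SERIALIZABLE, InnoDB turns this into a locking read
SELECT COUNT(*) FROM orders;  -- sees the same data as the first read
COMMIT;                       -- releases any locks taken for the reads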
Since you tagged mysql, here you can read about the transaction isolation levels supported by MySQL:
https://dev.mysql.com/doc/refman/5.7/en/innodb-transaction-isolation-levels.html
Any statement that contains Data Manipulation Language (DML), that is, any statement that changes values in a table, such as UPDATE, INSERT, MERGE, DELETE, etc., runs as part of a SQL transaction.
What is the exact difference between the two locking read clauses:
SELECT ... FOR UPDATE
and
SELECT ... LOCK IN SHARE MODE
And why would you need to use one over the other?
I have been trying to understand the difference between the two. I'll document what I have found in hopes it'll be useful to the next person.
Both LOCK IN SHARE MODE and FOR UPDATE ensure no other transaction can update the rows that are selected. The difference between the two is in how they treat locks while reading data.
LOCK IN SHARE MODE does not prevent another transaction from reading the same row that was locked.
FOR UPDATE prevents other locking reads of the same row (non-locking reads can still read that row; LOCK IN SHARE MODE and FOR UPDATE are locking reads).
This matters in cases like updating counters, where you read the value in one statement and update it in another. Here, using LOCK IN SHARE MODE will allow two transactions to read the same initial value. So if the counter is incremented by 1 by both transactions, the ending count might increase only by 1, since both transactions initially read the same value.
Using FOR UPDATE would have blocked the second transaction from reading the value until the first one is done. This ensures the counter is incremented by 2.
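A minimal sketch of the counter case (the counters table is made up), run from two clients at the same time:

-- both clients run this block concurrently
START TRANSACTION;
SELECT value FROM counters WHERE name = 'hits' FOR UPDATE;  -- the second client blocks here
UPDATE counters SET value = value + 1 WHERE name = 'hits';
COMMIT;
-- with LOCK IN SHARE MODE instead of FOR UPDATE, both SELECTs return the same value
-- immediately, and when both then try to UPDATE, one of them is rolled back with a deadlock error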
FOR UPDATE --- You're informing MySQL that the selected rows may be updated in the next steps (before the end of this transaction), so MySQL doesn't grant any read locks on the same set of rows to any other transaction at that moment. The other transaction (whether reading or writing) has to wait until the first transaction is finished.
FOR SHARE --- Indicates to MySQL that you're selecting the rows from the table only for reading purposes and will not modify them before the end of the transaction. Any number of transactions can acquire a read lock on the rows.
Note: there is a chance of deadlock if these clauses (FOR UPDATE, FOR SHARE) are not used properly.
Either way the integrity of your data will be guaranteed, it's just a question of how the database guarantees it. Does it do so by raising runtime errors when transactions conflict with each other (i.e. FOR SHARE), or does it do so by serializing any transactions that would conflict with each other (i.e. FOR UPDATE)?
FOR SHARE (a.k.a. LOCK IN SHARE MODE): Transactions face a higher probability of failure due to deadlock, because they delay blocking until the moment an update statement is received (at which point they either block until all read locks are released, or fail due to deadlock if another write is in progress). Only one client eventually succeeds: the other clients will fail with a deadlock error if they try to update, and will have to retry their transactions.
FOR UPDATE: Transactions won't fail due to deadlock, because they won't be allowed to run concurrently. This may be desirable, for example because it makes it easier to reason about multi-threading if all updates are serialized across all clients. However, it limits the concurrency you can achieve, because all other transactions block until the first transaction is finished.
Pro tip: as an exercise I recommend taking some time to play with a local test database and a couple of mysql clients on the command line to prove this behavior to yourself. That is how I eventually understood the difference, because it can be very abstract until you see it in action.
I was just wondering how most relational databases handle maintaining your result set if another query edits the rows you are working on. For instance, if I do a select of something like 100k rows, and while I am still fetching those another query comes in and updates one of the rows that hasn't been read yet, the update is not going to be seen while fetching those rows, and I was wondering how the database engine handles that. If you only have specifics for one type of database, that's fine, I would like to hear it anyway.
Please look up Multi Version Concurrency Control. Different databases have different approaches to managing this. For MySQL/InnoDB, you can try http://dev.mysql.com/doc/refman/5.0/en/innodb-multi-versioning.html. For PostgreSQL - https://wiki.postgresql.org/wiki/MVCC. A great presentation is here - http://momjian.us/main/writings/pgsql/mvcc.pdf. It is also explained on Stack Overflow in this thread: Database: What is Multiversion Concurrency Control (MVCC) and who supports it?
The general goal you are describing in concurrent programming (Wikipedia: concurrency control) is serialization (Wikipedia: serializability): an implementation manages the database as if transactions occurred without overlap, in some order.
The importance of that is that only then does the system act in the way described by the code as we normally interpret it. Otherwise the results of operations are a combination of all processes acting concurrently. Nevertheless, by allowing limited categories of non-normal, non-isolated, so-called anomalous behaviours to arise, transaction throughput can be increased. So those implementation techniques are also apropos. (E.g. MVCC.) But understand: such non-serialized behaviour is not isolating one transaction from another. (I.e. so-called "isolation" levels are actually non-isolation levels.)
Isolation is managed by breaking transaction implementations into pieces based on reading and writing shared resources and executing them interleaved with pieces from other transactions, in such a way that the effect is the same as some sequence of non-overlapped executions. Roughly speaking, one can "pessimistically" "lock" other processes out of changed resources and have them wait, or "optimistically" "version" the database and "roll back" (throw away) some processes' work when changes are unreconcilable (unserializable).
Some techniques based on an understanding of serializability by an implementer for a major product are in this answer. For relevant notions and techniques, see the Wikipedia articles or a database textbook. Eg Fundamentals of database systems by Ramez Elmasri & Shamkant B. Navathe. (Many textbooks, slides and courses are free online.)
(Two answers and a comment on your question mention MVCC. As I commented, not only is MVCC just one implementation technique, it doesn't even support transaction serialization, i.e. actually isolating transactions as if each were done all at once. It allows certain kinds of errors (aka anomalies) and must be mixed with other techniques for isolation. The MVCC answers, comments and upvoting reflect a confusion between a popular and valuable technique, useful but limited in how far it isolates per your question, and the actual core issues and means.)
As Jayadevan notes, the general principle used by most widely used databases that permit you to modify values while they're being read is multi-version concurrency control, or MVCC. All widely used modern RDBMS implementations that support reading rows that are being updated rely on some concept of row versioning.
The details differ between implementations. (I'll explain PostgreSQL's a little more here, but you should accept Jayadevan's answer not mine).
PostgreSQL uses transaction ID ranges to track row visibility. So there'll be multiple copies of a tuple in a table, but any given transaction can only "see" one of them. Each transaction has a unique ID, with newer transactions having newer IDs. Each tuple has hidden xmin and xmax fields that track which transactions can "see" the tuple. Insertion is implemented by setting the tuple's xmin so that transactions with lower xids know to ignore the tuple when reading the table. Deletion is implemented by setting the tuple's xmax so that transactions with higher xids know to ignore the tuple when reading the table. Updates are implemented by effectively deleting the old tuple (setting xmax) then inserting a new one (setting xmin), so that old transactions still see the old tuple, but new transactions see the new tuple. When no running transaction can still see a deleted tuple, vacuum marks it as free space to be overwritten.
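If you're curious, you can look at those hidden columns yourself in PostgreSQL (the table name here is just an example):

-- PostgreSQL exposes the versioning columns if you ask for them explicitly
SELECT xmin, xmax, * FROM mytable;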
Oracle uses undo and redo logs, so there's only one copy of the tuple in the main table, and transactions that want to find old versions have to go and find it in the logs. Like PostgreSQL it uses row versioning, though I'm less sure of the details.
Pretty much every other DB uses a similar approach these days. Those that used to rely on locking, like older MS-SQL versions, have moved to MVCC.
MySQL uses MVCC with InnoDB tables, which are now the default. MyISAM tables still rely on table locking (but they'll also eat your data, so don't use them for anything you care about).
A few embedded DBs, like SQLite, still rely only on table locking - which tends to require less wasted disk space and I/O overhead, at the cost of greatly reduced concurrency. Some databases let you bypass MVCC if you take an exclusive lock on a table.
(Marked community wiki, since I also close-voted this question).
You should also read the PostgreSQL docs on transaction isolation and locking, and similar documentation for other DBs you use. See the Wikipedia article on isolation.
Snapshot isolation solves the problem you are describing. If you use locking, you can see the same record twice as you iterate through the database, because the unlocked records change underneath your feet while you're doing the scan.
Read committed isolation level with locking suffers from this problem.
Depending on the granularity of the lock, the WHERE predicate may lock matching pages and tuples so that the running read query doesn't see phantom data appearing (phantom reads).
I implemented multiversion concurrency control in my Java project. A transaction is given a monotonically increasing timestamp which starts at 0 and goes up by 1 each time the transaction is aborted. Later transactions have higher timestamps. When a transaction goes to read, it can only see data that has a timestamp less than or equal to its own and that is committed for that key (or column of that tuple). (Equal to, so it can see its own writes.)
When a transaction writes, it updates the committed timestamp for that key to that transaction's timestamp.
I'm starting out with MySQL transactions and I have a doubt:
In the documentation it says:
Beginning a transaction causes any pending transaction to be committed. See Section 13.3.3, “Statements That Cause an Implicit Commit”, for more information.
I have more or less 5 users on the same web application (it is a local application for testing) and all of them share the same MySQL user to interact with the database.
My question is: if I use transactions in the code and two of them start a transaction (because of inserting, updating or something), could the transactions interfere with each other?
I see that the statements that cause an implicit commit include starting a transaction. Since it's a local application it's fast, and it's hard to tell if there is something wrong going on there; every query turns out as expected, but I still have the doubt.
The implicit commit occurs within a session.
So, for instance, you start a transaction, do some updates and then forget to close the transaction and start a new one. Then the first transaction will be implicitly committed.
However, other connections to the database will not be affected by that; they have their own transactions.
You say that 5 users use the same db user. That is okay. But in order to have them perform separate operations they should not use the same connection/session.
With MySQL, by default, each connection has autocommit turned on. That is, each connection will commit each query immediately. For an InnoDB table each transaction is therefore atomic - it completes entirely and without interference.
For updates that require several operations you can use a transaction by issuing a START TRANSACTION query. Any outstanding transaction will be committed, but this won't be a problem because mostly it will have been committed anyway.
All the updates performed until a COMMIT query is received are guaranteed to be completed entirely and without interference or, in the case of a ROLLBACK, none are applied.
Other transactions from other connections see a consistent view of the database while this is going on.
This property is ACID compliance (Atomicity, Consistency, Isolation, Durability). You should be fine with an InnoDB table.
Other table types may implement different levels of ACID compliance. If you have a need to use one you should check it carefully.
This is a much simplified view of transaction handling. There is more detail on the MySQL web site here and you can read about ACID compliance here.
I have a field on a User table that holds the account balance for the user. Users can perform a lot of actions with my service that will result in rapid changes to their balance.
I'm trying to use mysql's serializable isolation level to make sure that multiple user actions will not update the value incorrectly. (Action A and action B simultaneously want to deduct 1 dollar from the balance.) However, I'm getting a lot of deadlock errors.
How do I do this correctly without getting all these deadlocks, and still keeping the balance field up to date?
Simple schema: a user has an id and a balance.
I'm using Doctrine, so I'm doing something like the following:
$con->beginTransaction();
$tx = $con->transaction;
$tx->setIsolation('SERIALIZABLE');
$user = UserTable::getInstance()->find($userId);
$user->setBalance($user->getBalance() + $change);
$user->save();
$con->commit();
First of all, trying to use the serializable isolation level on your transaction is a good idea. It means you know at least a little about what a transaction is, and that the isolation level is one of the biggest problems.
Note that serializable is not really true serializability. More on that in this previous answer, when you have some time to read it :-).
But the most important part is that you should accept that automatic rollbacks of your transaction because of failed serializability are a normal fact, and that the right thing to do is to build your application so that transactions can fail and will be replayed.
One simple solution, and for accounting things I like this simple solution as we can predict all the facts with no surprises, is to perform table locks. This is not a fine and elegant solution, no row-level locks, just simple big table locks (and always in the same order). After that you can do your operation as a single player and then release the locks. No multi-user concurrency on the rows of the tables, no magical next-row lock failures (see the previous link). This will certainly slow down your write operations, but if everybody performs the table locks in the same order you'll only get lock timeout problems, no deadlocks and no 'unserializable auto-rollback'.
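A rough sketch of that approach with the user/balance fields from the question (remember that LOCK TABLES implicitly commits any transaction that is still open, so it replaces the transaction rather than running inside one):

-- always lock the tables in the same order in every piece of code
LOCK TABLES user WRITE;
UPDATE user SET balance = balance - 1 WHERE id = 42;
UNLOCK TABLES;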
Edit
From your code sample I'm not sure you can set the transaction isolation level after the BEGIN. You should activate query logging in MySQL and see what is actually sent, then check that the other transactions run by the CMS are not still at the serializable level.