MySQL: Concurrent updates (through threads) on a simple table

In my application (VC++/Windows 8) I have 7 threads, each with its own open connection to a MySQL database. All of these threads concurrently try to increment the value of a single field in a table.
To do this I created a sample table DEMO_TABLE with columns MyIndex and MyCounter (both integers) and added one row with MyIndex = 0. Each thread then calls executeUpdate through MySQL Connector C++:
executeUpdate("UPDATE DEMO_TABLE SET MyCounter = (MyCounter + 1) WHERE MyIndex = 0");
Here I am not using any locking (row or table lock), yet the code did not give me any error or exception. When I checked the value of MyCounter I saw that it had increased, so this seems to work correctly.
But it has raised these questions for me:
By default MySQL uses the MyISAM engine, which needs to lock the table for concurrent update queries. I am not locking the table here, so how does this code work without throwing any exception?
Does executeUpdate implicitly use any lock?
(As far as I know, InnoDB provides a row-level locking mechanism, which I plan to use in my code. But before that I just wanted to see for myself what happens with the default engine and no lock. I was expecting an exception that would tell me about a race condition, so that I could then verify the same doesn't happen when a lock is used.)

The locking is implicit, yes, but it's not being done by executeUpdate(). The storage engine in MySQL handles the locking and the unlocking.
Any time you write to a MyISAM table, your query waits for the write lock on the table to become available, the write lock is acquired, the write is done, and the write lock is released. There is no genuine write concurrency in MyISAM because each worker is in fact waiting in line for the write lock. You don't get an error because the write requests are serialized.
The situation with InnoDB is similar in principle but different in granularity: InnoDB only locks a portion of the table, typically at the row level. It can lock a range within an index, thereby locking the rows in that range of the index (and the gap that precedes them). This locking is more granular than table locking and allows better concurrency, but there is still no concurrent operation on the same row -- each worker waits for the lock or locks it needs.
In both cases, the locks are implicitly taken.
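As an illustration (a minimal sketch, assuming the DEMO_TABLE from the question on an InnoDB table): the single-statement increment is already safe because the engine serializes the writers, but if the increment were split into a read followed by a write in the application, an explicit locking read would be needed to avoid lost updates. The value 42 below stands in for a hypothetically computed value.
-- Safe as a single statement: the engine's implicit lock serializes concurrent writers.
UPDATE DEMO_TABLE SET MyCounter = MyCounter + 1 WHERE MyIndex = 0;

-- If the new value were computed in the application instead, an InnoDB
-- locking read keeps concurrent workers from reading the same starting value:
START TRANSACTION;
SELECT MyCounter FROM DEMO_TABLE WHERE MyIndex = 0 FOR UPDATE;
UPDATE DEMO_TABLE SET MyCounter = 42 WHERE MyIndex = 0;
COMMIT;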

Related

Can range lock in SQL be acquired in share mode

I have a query such as
Select count(*) from log where num = ?;
If I set the isolation level to serializable, then the range lock will be acquired for the where clause.
My question is: can other transactions also acquire the range lock in share mode to read the count as above, or is the range lock exclusive, so that all other transactions have to wait until the current transaction commits before executing the same read query?
Background: I am trying to implement a view counter for a heavy-traffic website. To reduce IO to the database, I created a log table so that every time there is a view, I only write a new row into the log table. Once in a while I (randomly) decide whether to clear the log table and add the number of rows from the log table to a column of a view-count table. This means I have to be careful with interleaving transactions.
The statements below are relevant only to SQL Server and were made before the OP made clear this was really about MySQL, about which I know nothing. I'm leaving it here since it (and the resulting discussion) might be of some use nevertheless, but it is not a complete, relevant answer to the question.
SELECT statements only ever acquire shared locks, on all isolation levels (unless overridden with a table hint). And shared locks are always compatible with each other (see Lock Compatibility), so there's no problem if other transactions want to acquire shared (range) locks as well. So yes, you can have any number of queries performing SELECT COUNT(*) in parallel and they will never block each other.
This doesn't mean other transactions don't have to wait. In particular, a DELETE query must eventually acquire an exclusive lock, and it will have to wait if the SELECT is holding a shared lock. Normally this is not an issue since the engine releases locks as soon as possible. When it does become an issue, you'll want to look at solutions like snapshot isolation, which uses optimistic concurrency and conflict detection rather than locking. Under that model, a SELECT will never block any other query (save those that want table locks). Of course, this isn't free; the row versioning it uses takes up disk space and I/O.
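For the view-counter background described in the question, here is a sketch of how the flush step could be serialized in MySQL/InnoDB; the table and column names (log, view_count, num, views) are hypothetical, num is assumed to be indexed, and FOR UPDATE is used instead of relying on SERIALIZABLE so the counted range is locked exclusively until the commit.
START TRANSACTION;
-- Locking read: exclusive locks on the matching index records (and their gaps),
-- so a concurrent flush or insert into the same range waits instead of interleaving.
SELECT COUNT(*) INTO @n FROM log WHERE num = 42 FOR UPDATE;
UPDATE view_count SET views = views + @n WHERE num = 42;
DELETE FROM log WHERE num = 42;
COMMIT;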

MySQL "LOCK TABLES" timeout?

What's the timeout for mysql LOCK TABLES statement?
Can't find it anywhere.
I tried to set the variable innodb_lock_wait_timeout in my.cnf, but it seems to be related to another kind of locking (row-level), not to table locking.
It simply has no effect on LOCK TABLES.
I want to set some low timeout value for the deadlock case, because if some operation takes a LOCK TABLES and something goes wrong, it will hang up the whole site!
Which is stupid, for example, in the middle of finishing a purchase on your site.
My work-around is to create a dedicated lock table and just lock a row in that table. This has the advantage of only locking the processes that specifically want to be locked. Other parts of the application can continue to access the tables even if they are at some point touched by the update processes.
Setup
CREATE TABLE `mutex` (
  EMPTY ENUM('') NOT NULL,
  PRIMARY KEY (EMPTY)
);
Usage
set innodb_lock_wait_timeout = 1;   -- fail fast if another session already holds the mutex row
start transaction;
insert into `mutex` values();       -- acquires the lock; blocks (then times out) while another session holds it
[... do the real work here ... or somewhere else ... even a different machine ...]
delete from `mutex`;
commit;                             -- releases the lock
Why are you using LOCK TABLES?
If you are using MyISAM (which sometimes needs LOCK TABLES), you should convert to InnoDB.
If you are using InnoDB, you should never use LOCK TABLES. Instead, depend on innodb_lock_wait_timeout (default is an unreasonably high 50 seconds). And you should check for errors.
InnoDB Deadlocks are caught and immediately cause an error. Certain non-deadlocks may wait for innodb_lock_wait_timeout.
Edit
Since the transaction looks like
BEGIN;
SELECT ...;
compute some stuff
UPDATE ... (using that stuff);
COMMIT;
You need to add FOR UPDATE to the end of the SELECT.
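For example, the pattern above becomes the following (a sketch with hypothetical table and column names; the 9 stands for the value computed by the application):
BEGIN;
-- FOR UPDATE makes this a locking read: the matching rows stay locked until
-- COMMIT, so a second session running the same transaction waits here
-- instead of computing from a stale value.
SELECT qty FROM inventory WHERE item_id = 123 FOR UPDATE;
-- ... compute the new quantity in the application ...
UPDATE inventory SET qty = 9 WHERE item_id = 123;
COMMIT;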
I think you are after the table_lock_wait_timeout variable, which was introduced in MySQL 5.0.10 but subsequently removed in 5.5. Unfortunately, the release notes don't specify an alternative to use, and I'm guessing that the general attitude is to switch over to using InnoDB transactions, as @Rick James has stated in his answer.
I think that removing the variable was unhelpful. Others may regard this as a case of the XY Problem, where we are trying to fix a symptom (deadlocks) by changing the timeout period for locking tables, when really we should resolve the root cause by switching over to transactions instead. I think there may still be cases where table locks are more suitable for the application than transactions, and they are perhaps a lot easier to comprehend, even if they perform worse.
The nice thing about LOCK TABLES is that you can state the tables your queries depend on before proceeding. With transactions, the locks are grabbed at the last possible moment, and if they can't be acquired and time out, you then need to check for that failure and roll back before trying everything all over again. It's simpler to put a 1 second timeout (the minimum) on the LOCK TABLES statement and keep retrying until you get the lock(s), then proceed with your queries before unlocking the tables. This logic carries no risk of deadlocks.
I believe the developers' attitude is summed up by the following excerpt from the documentation:
...avoid using the LOCK TABLES statement, because it does not offer
any extra protection, but instead reduces concurrency.
The correct answer is the lock_wait_timeout system variable.
From the documentation:
This variable specifies the timeout in seconds for attempts to acquire
metadata locks. The permissible values range from 1 to 31536000 (1
year). The default is 31536000.
This timeout applies to all statements that use metadata locks. These
include DML and DDL operations on tables, views, stored procedures,
and stored functions, as well as LOCK TABLES, FLUSH TABLES WITH READ
LOCK, and HANDLER statements.
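So, a minimal sketch of putting a short timeout on LOCK TABLES itself (the table name is hypothetical):
SET SESSION lock_wait_timeout = 5;  -- seconds; governs LOCK TABLES and other metadata locks
LOCK TABLES orders WRITE;           -- errors out after ~5 seconds if the lock cannot be acquired
-- ... critical section ...
UNLOCK TABLES;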
I think you meant the default timeout value, which is 50 seconds. Per the MySQL documentation:
innodb_lock_wait_timeout (default: 50) -- The timeout in seconds an InnoDB transaction may wait for a row lock before giving up. The default value is 50 seconds.

MySQL MyISAM how to perform a read without locking a table?

My question is a follow up to this answer. I want to find out how to perform a select statement without locking a table with MyISAM engine.
The answer states that the following works if you have InnoDB, but not MyISAM. What is the equivalent for the MyISAM engine?
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED ;
SELECT * FROM TABLE_NAME ;
COMMIT ;
This is the default behaviour with MyISAM tables. If one actually wants to lock a MyISAM table, one must manually acquire a table-level lock. The transaction isolation level and START TRANSACTION, COMMIT, and ROLLBACK have no effect on the behaviour of MyISAM tables, since MyISAM does not support transactions.
More about internal locking mechanisms
A READ lock is implicitly acquired before, and released after, execution of a SELECT statement. Notice that several concurrent SELECT statements can run at the same time, because several sessions may hold a READ lock on the same table.
Conversely, a WRITE lock is implicitly acquired before executing an INSERT, UPDATE, or DELETE statement. This means that no read (let alone a concurrent write) can take place as long as a write is in progress*.
The above applies to MyISAM, MEMORY, and MERGE tables only.
You might want to read more about this here:
Internal locking methods
Read vs Write locks
* However, these locks are not always required thanks to this clever trick:
The MyISAM storage engine supports concurrent inserts to reduce contention between readers and writers for a given table: If a MyISAM table has no free blocks in the middle of the data file, rows are always inserted at the end of the data file. In this case, you can freely mix concurrent INSERT and SELECT statements for a MyISAM table without locks.
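If you want the opposite of what the question asks, i.e. to take those table-level locks explicitly rather than implicitly, the statements look like this (mytable and its columns are hypothetical):
LOCK TABLES mytable READ;    -- other sessions may still read, but their writes wait
SELECT COUNT(*) FROM mytable;
UNLOCK TABLES;

LOCK TABLES mytable WRITE;   -- all other sessions' reads and writes wait
UPDATE mytable SET hits = hits + 1 WHERE id = 1;
UNLOCK TABLES;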
MyISAM does indeed use a read lock during SELECT. An INSERT at the end of the table can get around that.
But try doing an UPDATE, DELETE, or ALTER TABLE while a long-running SELECT is in progress. Or vice-versa, reading from a table while a change to that table is running. It's first-come, first-serve, and the later thread blocks until the first thread is done.
MyISAM doesn't have any support for transactions, so it must work this way. If a SELECT were reading rows from a table while a concurrent thread changed some of those rows, you would get a race condition: the SELECT might read some of the rows before the change and some after, resulting in a completely mixed-up view of the data.
Anything you do with SET TRANSACTION ISOLATION LEVEL has no effect with MyISAM.
For these reasons, it's recommended to use InnoDB instead.
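If you go that route, the conversion itself is a single statement (table name hypothetical); note that it rewrites the table, so it can take a while on large tables.
ALTER TABLE mytable ENGINE=InnoDB;
-- Afterwards, plain SELECTs use consistent non-locking reads, so readers
-- and writers no longer block each other the way they do with MyISAM.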

MYSQL - Locking - InnoDB

I am using mysql with InnoDB databases.
If all my transactions are Inserts and Selects (no updates), I assume I would not have to worry about SQL deadlocking.
I can't see a scenario where deadlocking would occur. Am I correct to assume deadlocking cannot occur if I only do Inserts and Selects?
It may not be relevant, but every transaction is done with PDO.
No. You still have to worry about SQL deadlocking.
You can get deadlocks even in the case of a transaction that inserts a single row. This is because the insert operation is not really atomic and locks are set automatically on the (possibly several) index records of the inserted row.
The InnoDB storage engine has row-level locks, while the MyISAM storage engine has table-level locks. MyISAM simply locks entire tables and doesn't support transactions, so it's not possible to have database-level deadlocks. Note that an app can lock up another app by sitting on a table lock on a table they are both trying to access, but this is a code error, not a db-level "deadlock".
InnoDB supports transactions and has row-level locks, so db-level deadlocks are possible (and can happen occasionally in a busy system, so you do need to code around them). Much of what MySQL calls a "deadlock" isn't a true deadlock so much as the result of slow UPDATEs causing other queries to time out on row locks.
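As an illustration of how INSERTs alone can deadlock, here is a sketch of a scenario documented in the MySQL manual, using a hypothetical table t1 with a primary key:
-- CREATE TABLE t1 (i INT PRIMARY KEY) ENGINE=InnoDB;

-- Session 1:
START TRANSACTION;
INSERT INTO t1 VALUES (1);   -- takes an exclusive lock on the index record

-- Sessions 2 and 3 (each in its own transaction):
INSERT INTO t1 VALUES (1);   -- duplicate-key check: both wait for a shared lock

-- Session 1:
ROLLBACK;
-- The exclusive lock is released, sessions 2 and 3 both acquire the shared
-- lock, and both then request an exclusive lock to insert the row; each
-- waits on the other, so InnoDB detects a deadlock and rolls one of them back.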

Select statement blocks the read/write operation on the InnoDB table

I have a SELECT query that executes on a transactional table with more than 4 million records. Whenever I execute this query, I observe that all write and update operations on that particular table become suspended, and we start getting exceptions from the Java side saying that the lock wait timeout was exceeded and to try restarting the transaction. Currently the lock wait timeout is set to 200 seconds. I am unable to understand why a SELECT statement can create such locks on the table and block all insert/update statements. The table storage engine is InnoDB and the primary key is an auto-increment key. The MySQL version is 5.1.40.
Also, I am not starting any transaction before executing the select statement.
Any Idea?
So, yes, your SELECT in one transaction read-locks the records of that table, and write operations that touch the same records will have to wait until the reading transaction completes (if it follows two-phase locking).
This document may help in understanding the InnoDB lock model.
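To see which statement is actually holding the locks the writers are waiting on, a couple of diagnostics available in MySQL 5.1 (run while the problem is happening):
SHOW ENGINE INNODB STATUS;   -- the TRANSACTIONS section lists lock waits and who holds the locks
SHOW PROCESSLIST;            -- shows the long-running SELECT and the statements stuck behind it
SHOW VARIABLES LIKE 'innodb_lock_wait_timeout';   -- how long a write waits before the Java side sees the error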