I have a server that uses transactions to write data to a database: if all of the queries succeed it commits, otherwise it rolls back. Now I want two instances of this server to work on the same database and tables at the same time.
While reading MySQL's transaction documentation I noticed this sentence: "Beginning a transaction causes any pending transaction to be committed". Does this mean that if server A starts transaction A, and server B starts transaction B before transaction A has completed, transaction A is forced to commit? That doesn't make sense to me. If it is the case, how can I ensure that transaction B is not executed until transaction A completes normally?
Would SET autocommit = 0 be an alternative to this problem?
Assuming that by "two instances of the server working at the same time" you mean two separate sessions connected to the same server:
The sentence "Beginning a transaction causes any pending transaction to be committed" refers only to a pending transaction within the same session. From http://dev.mysql.com/doc/refman/5.7/en/implicit-commit.html:
The statements listed in this section (and any synonyms for them) implicitly end any transaction active in the current session, as if you had done a COMMIT before executing the statement.
So, if Session B starts a transaction before Session A commits, it will not force Session A's transaction to commit.
mysql> CREATE TEMPORARY TABLE super(id int);
Query OK, 0 rows affected (0.04 sec)
mysql> START TRANSACTION;
Query OK, 0 rows affected (0.00 sec)
mysql> INSERT INTO super VALUE(1);
Query OK, 1 row affected (0.00 sec)
mysql> START TRANSACTION;
Query OK, 0 rows affected (0.00 sec)
mysql> SELECT * FROM super;
+------+
| id   |
+------+
|    1 |
+------+
1 row in set (0.00 sec)
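If the second START TRANSACTION had not committed the first one, a ROLLBACK at this point would remove the inserted row. Continuing the same session shows that it does not (a sketch of the expected continuation, not verbatim output):

```sql
ROLLBACK;              -- ends the second transaction; there is nothing to undo
SELECT * FROM super;   -- still returns the row: the INSERT was already
                       -- committed when the second START TRANSACTION ran
```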
Starting the next transaction implicitly commits the previous pending transaction only in the same session on MySQL.
For example, there is a person table with id and name columns on MySQL, as shown below.
person table:
+----+------+
| id | name |
+----+------+
|  1 | John |
|  2 | Tom  |
+----+------+
First, with a single command prompt (i.e. one session), take these steps with the default isolation level REPEATABLE READ on MySQL:
(CP1 = Command Prompt 1)
Step 1. CP1 runs mysql -u root -p and logs in to MySQL.
Step 2. CP1 runs USE test to select the test database.
Step 3. CP1 runs BEGIN; to start a transaction.
Step 4. CP1 runs SELECT * FROM person; and sees 1 John, 2 Tom.
Step 5. CP1 runs UPDATE person SET name = "Lisa" WHERE id = 2; updating Tom to Lisa.
Step 6. CP1 runs BEGIN; again. Starting the next transaction implicitly commits the previous pending transaction in the same session.
Step 7. CP1 runs ROLLBACK;.
Step 8. CP1 runs ROLLBACK; again.
Step 9. CP1 runs SELECT * FROM person; and sees 1 John, 2 Lisa. Lisa is not rolled back to Tom, so starting the next transaction implicitly committed the previous pending transaction in the same session.
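The same-session steps above can be replayed as a single script (assuming a test database containing the person table shown earlier):

```sql
USE test;
BEGIN;
SELECT * FROM person;                           -- 1 John, 2 Tom
UPDATE person SET name = 'Lisa' WHERE id = 2;
BEGIN;                                          -- implicitly commits the UPDATE
ROLLBACK;
ROLLBACK;
SELECT * FROM person;                           -- 1 John, 2 Lisa: the UPDATE stuck
```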
Second, with two command prompts (two different sessions), take these steps with the default isolation level REPEATABLE READ on MySQL:
(CP1/CP2 = Command Prompt 1/2)
Step 1. CP1 runs mysql -u root -p and logs in to MySQL.
Step 2. CP2 runs mysql -u root -p and logs in to MySQL.
Step 3. CP1 runs USE test to select the test database.
Step 4. CP2 runs USE test to select the test database.
Step 5. CP1 runs BEGIN; to start a transaction.
Step 6. CP1 runs SELECT * FROM person; and sees 1 John, 2 Tom.
Step 7. CP1 runs UPDATE person SET name = "Lisa" WHERE id = 2; updating Tom to Lisa.
Step 8. CP2 runs BEGIN;. CP2 starts a transaction, but the pending transaction in CP1's session is not implicitly committed.
Step 9. CP1 runs ROLLBACK;.
Step 10. CP2 runs ROLLBACK;.
Step 11. CP1 runs SELECT * FROM person; and sees 1 John, 2 Tom. Lisa is rolled back to Tom, so starting a transaction in a different session does not implicitly commit the pending transaction in another session.
Related
On MySQL, first, I ran BEGIN; or START TRANSACTION; to start a transaction as shown below:
mysql> BEGIN;
Query OK, 0 rows affected (0.00 sec)
Or:
mysql> START TRANSACTION;
Query OK, 0 rows affected (0.00 sec)
Then, I ran the query below to check whether the transaction was running, but I got nothing:
mysql> SELECT trx_id FROM information_schema.innodb_trx;
Empty set (0.00 sec)
So just after this, I ran the query below to show all the rows of person table:
mysql> SELECT * FROM person;
+----+------+
| id | name |
+----+------+
|  1 | John |
|  2 | Tom  |
+----+------+
2 rows in set (0.00 sec)
Then I ran the same query again, and this time I could confirm that a transaction was actually running:
mysql> SELECT trx_id FROM information_schema.innodb_trx;
+-----------------+
| trx_id          |
+-----------------+
| 284321631771648 |
+-----------------+
1 row in set (0.00 sec)
So, does only running BEGIN; or START TRANSACTION; query really start a transaction?
Yes. start transaction starts a transaction. Otherwise, that statement would have a very misleading name.
You can check this by trying to do things you cannot do inside a transaction, e.g. change the isolation level:
START TRANSACTION;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
ERROR 1568 (25001): Transaction characteristics can't be changed while a transaction is in progress
So why don't you see it in the innodb_trx table?
Because innodb_trx only lists transactions that involve InnoDB. Until you e.g. query an InnoDB table, the InnoDB engine doesn't know about that transaction and doesn't list it, just as your tests showed. Try changing the table to a MyISAM table: it will not show up in innodb_trx even after you run a query on it.
A transaction is started at the MySQL level, not the storage engine level.
For example, the NDB engine also supports transactions and has its own transaction table where it only shows transactions that involve NDB tables.
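A quick way to see this in one session (the table name here is illustrative; use any InnoDB table you have):

```sql
START TRANSACTION;
SELECT trx_id FROM information_schema.innodb_trx;  -- empty: InnoDB not involved yet
SELECT * FROM some_innodb_table LIMIT 1;           -- the first read registers the
                                                   -- transaction with InnoDB
SELECT trx_id FROM information_schema.innodb_trx;  -- now shows one row
ROLLBACK;
```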
I'm confronted by a seemingly simple problem that has turned out to be pretty hard to figure out. We want to keep a record of every time we're presented with a piece of marketing (a lead) so we don't purchase it more than once in a 90 day period. Many lead providers can present us the same lead many times, often concurrently. We want to return an "accept" to exactly one lead provider.
So let's talk about the scenario that works: We have seen the material in the last 90 days and have a record on the table and there are 3 providers presenting the lead concurrently:
select count(id) from recent_leads where
last_seen_at >= '2019-10-11 00:00:00'
and email = 'yes@example.com' for update;
Thread1 arrives first, and acquires the lock. MySQL returns to Thread1:
+-----------+
| count(id) |
+-----------+
|         1 |
+-----------+
1 row in set (0.00 sec)
Thread1 issues a new insert:
insert into recent_leads (email, last_seen_at)
values ('yes@example.com', '2019-12-12 18:23:35');
Thread2 and Thread3 will block trying to execute the same statement until Thread1 commits or rolls back its transaction. Then Thread2 and Thread3 compete for the lock, and the same process repeats.
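Putting the happy path together, each thread runs something like the following (a sketch assembled from the statements above; the literal values are illustrative):

```sql
START TRANSACTION;
SELECT COUNT(id) FROM recent_leads
 WHERE last_seen_at >= '2019-10-11 00:00:00'
   AND email = 'yes@example.com'
   FOR UPDATE;                        -- serializes threads when a row exists
-- the application decides: count > 0 means reject the lead, 0 means accept it
INSERT INTO recent_leads (email, last_seen_at)
VALUES ('yes@example.com', NOW());
COMMIT;                               -- releases the lock; blocked threads proceed
```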
So that works as expected and we're happy with it. The wheels come off when there isn't a record.
Thread1, Thread2, and Thread3 all issue the same SQL as above. MySQL now returns this to all three threads immediately, whereas before, only one Thread would proceed:
+-----------+
| count(id) |
+-----------+
|         0 |
+-----------+
1 row in set (0.00 sec)
All three threads now attempt the insert. Two of them will get an error:
ERROR 1213 (40001): Deadlock found when trying to get lock; try restarting transaction
Is there a way we can get MySQL to behave like the first scenario all the time? We want Thread2 and Thread3 to block ideally.
Thank you,
-Jonathan
So we ended up dropping locking. Instead, we commit the row in a separate transaction, then select back all rows for that email except the one matching last_insert_id(). If we find a row with a lower primary key, we assume another thread is already handling the request. This is nice because it's lock-free, which makes it a bit easier to debug.
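In SQL terms, the lock-free pattern described above looks roughly like this (a sketch, not our exact code; the literal values are illustrative):

```sql
-- 1. Record the sighting unconditionally, in its own transaction
INSERT INTO recent_leads (email, last_seen_at)
VALUES ('yes@example.com', NOW());
COMMIT;

-- 2. Check whether an earlier row already exists for this email;
--    any hit means another thread (or a prior purchase) won the race,
--    so this thread withdraws its "accept"
SELECT id FROM recent_leads
 WHERE email = 'yes@example.com'
   AND last_seen_at >= '2019-10-11 00:00:00'
   AND id < LAST_INSERT_ID();
```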
I can't seem to extract any helpful information from MySQL to aid in debugging this error: ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction. Can you help me find some?
Reproduction:
One process does something like this:
start transaction;
update cfgNodes set name="foobar" where ID=29;
and just sits there (neither committing nor rolling back). This is clearly the culprit: the process hogging the lock because of a long-running transaction, the offender I'm trying to find.
Another process tries:
-- The next line just prevents you from having to wait 50 seconds
set innodb_lock_wait_timeout=1;
update cfgNodes set name="foobar" where ID=29;
This second process gets ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction (after innodb_lock_wait_timeout, default 50 seconds).
How do I find any information about the culprit?
The standard recommended sources are of little help:
INFORMATION_SCHEMA.INNODB_TRX shows the transaction but not much about it that can help me find it. Only that 1 table is locked (in this bogus tiny example), and that trx_mysql_thread_id is 4093.
mysql> select * from INFORMATION_SCHEMA.INNODB_TRX\G
*************************** 1. row ***************************
trx_id: 280907
trx_state: RUNNING
trx_started: 2018-11-30 00:35:06
trx_requested_lock_id: NULL
trx_wait_started: NULL
trx_weight: 3
trx_mysql_thread_id: 4093
trx_query: NULL
trx_operation_state: NULL
trx_tables_in_use: 0
trx_tables_locked: 1
trx_lock_structs: 2
trx_lock_memory_bytes: 1136
trx_rows_locked: 1
trx_rows_modified: 1
trx_concurrency_tickets: 0
trx_isolation_level: REPEATABLE READ
trx_unique_checks: 1
trx_foreign_key_checks: 1
trx_last_foreign_key_error: NULL
trx_adaptive_hash_latched: 0
trx_adaptive_hash_timeout: 0
trx_is_read_only: 0
trx_autocommit_non_locking: 0
1 row in set (0.00 sec)
INFORMATION_SCHEMA.INNODB_LOCKS is empty, which makes sense given the documentation, because there is only one transaction and currently nobody is waiting for any locks. Also, INNODB_LOCKS is deprecated anyway.
SHOW ENGINE INNODB STATUS is useless: cfgNodes is not mentioned at all
SHOW FULL PROCESSLIST is empty, because the culprit is not actually running a query right now.
But now remember the trx_mysql_thread_id from before? We can use that to see the queries executed in that transaction:
mysql> SELECT SQL_TEXT
-> FROM performance_schema.events_statements_history ESH,
-> performance_schema.threads T
-> WHERE ESH.THREAD_ID = T.THREAD_ID
-> AND ESH.SQL_TEXT IS NOT NULL
-> AND T.PROCESSLIST_ID = 4093
-> ORDER BY ESH.EVENT_ID LIMIT 10;
+-----------------------------------------------+
| SQL_TEXT                                      |
+-----------------------------------------------+
| select @@version_comment limit 1              |
| start transaction                             |
| update cfgNodes set name="foobar" where ID=29 |
+-----------------------------------------------+
3 rows in set (0.00 sec)
Voilà: we now see a history of the last 10 queries for each remaining transaction, allowing us to find the culprit.
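On MySQL 5.7 and later, the sys schema can shortcut part of this, but only while the second process is still waiting on the lock: the sys.innodb_lock_waits view pairs each waiting transaction with the connection that blocks it.

```sql
-- Run this while the second process is still waiting on the lock
SELECT waiting_query,
       blocking_pid,              -- the trx_mysql_thread_id of the culprit
       sql_kill_blocking_query    -- ready-made KILL statement, if you need it
  FROM sys.innodb_lock_waits\G
```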
In my tests I have seen that when using MariaDB, executing the same query twice under REPEATABLE_READ isolation doesn't produce phantom reads, when I expected it to.
For instance:
I have two rows in the bank_account table:
ID | OWNER | MONEY
------------------------
1 | John | 1000
2 | Louis | 2000
The expected flow should be as shown below:
THREAD 1 (REPETEABLE_READ) THREAD 2 (READ_UNCOMMITED)
| |
findAll()->[1|John|1000,2|Louis|2000] |
| |
| updateAccount(1, +100)
| createAccount("Charles", 3000)
| flush()
| |
| commitTx()
| |_
|
findAll()->[1|John|1000,2|Louis|2000,
| 3|Charles|3000]
|
|
commitTx()
|_
To sum up, after Thread2.createAccount("Charles", 3000); and its flush, Thread1 would search all rows and would get
ID | OWNER | MONEY
------------------------
1 | John | 1000
2 | Louis | 2000
3 | Charles | 3000
Thread1 is protected from uncommitted changes, seeing [1, John, 1000] instead of [1, John, 1100], but it is supposed to see the newly inserted row.
However, what Thread1 retrieves in the second findAll() is exactly the same result set as from the first findAll():
ID | OWNER | MONEY
------------------------
1 | John | 1000
2 | Louis | 2000
It doesn't produce phantom reads. Why?
This is the code executed by Thread1:
@Transactional(readOnly=true, isolation=Isolation.REPEATABLE_READ)
@Override
public Iterable<BankAccount> findAllTwiceRepeteableRead(){
printIsolationLevel();
Iterable<BankAccount> accounts = baDao.findAll();
logger.info("findAllTwiceRepeteableRead() 1 -> {}", accounts);
//PAUSE HERE
...
}
I pause the execution where it says //PAUSE HERE.
Then Thread2 executes:
bankAccountService.addMoneyReadUncommited(ba.getId(), 200);
bankAccountService.createAccount("Carlos", 3000);
And then Thread1 resumes:
//PAUSE HERE
...
Iterable<BankAccount> accounts = baDao.findAll();
logger.info("findAllTwiceRepeteableRead() 2 -> {}", accounts);
UPDATE:
I've updated the thread transaction flows with what I'm really doing (I am committing the second transaction after the new row is inserted).
This matches what, according to Wikipedia, is a phantom read, and I think it is the very same scenario. So I still don't get why I'm not getting the phantom read [3|Charles|3000]:
A phantom read occurs when, in the course of a transaction, two
identical queries are executed, and the collection of rows returned by
the second query is different from the first.
This can occur when range locks are not acquired on performing a
SELECT ... WHERE operation. The phantom reads anomaly is a special
case of Non-repeatable reads when Transaction 1 repeats a ranged
SELECT ... WHERE query and, between both operations, Transaction 2
creates (i.e. INSERT) new rows (in the target table) which fulfill
that WHERE clause.
Transaction 1 Transaction 2
/* Query 1 */
SELECT * FROM users
WHERE age BETWEEN 10 AND 30;
/* Query 2 */
INSERT INTO users(id,name,age) VALUES ( 3, 'Bob', 27 );
COMMIT;
/* Query 1 */
SELECT * FROM users
WHERE age BETWEEN 10 AND 30;
COMMIT;
What you described as the actual behaviour is in fact the correct behaviour for repeatable_read. The behaviour you are expecting can be achieved by using read_committed.
As mariadb documentation on repeatable_read says (bolding is mine):
there is an important difference from the READ COMMITTED isolation
level: All consistent reads within the same transaction read the
snapshot established by the first read.
In thread 1 the 1st FindAll() call returning John and Louis established the snapshot. The 2nd FindAll() simply used the same snapshot.
This is further corroborated by a Percona blog post on Differences between READ-COMMITTED and REPEATABLE-READ transaction isolation levels:
In REPEATABLE READ, a ‘read view’ ( trx_no does not see trx_id >= ABC,
sees < ABB ) is created at the start of the transaction, and this
read view (consistent snapshot in Oracle terms) is held open for the
duration of the transaction. If you execute a SELECT statement at 5AM,
and come back in an open transaction at 5PM, when you run the same
SELECT, then you will see the exact same resultset that you saw at
5AM. This is called MVCC (multiple version concurrency control) and
it is accomplished using row versioning and UNDO information.
UPDATE
Caveat: The following references are from the MySQL documentation. However, since these references relate to the innodb storage engine, I firmly believe that they apply to mariadb's innodb storage engine as well.
So, in innodb storage engine under repeatable read isolation level, the non-locking selects within the same transaction read from the snapshot established by the first read. No matter how many records were inserted / updated / deleted in concurrent committed transactions, the reads will be consistent. Period.
This is the scenario described by the OP in the question. This would imply that a non-locking read in repeatable read isolation level would not be able to produce a phantom read, right? Well, not exactly.
As MySQL documentation on InnoDB Consistent Nonlocking Reads says:
The snapshot of the database state applies to SELECT statements within
a transaction, not necessarily to DML statements. If you insert or
modify some rows and then commit that transaction, a DELETE or UPDATE
statement issued from another concurrent REPEATABLE READ transaction
could affect those just-committed rows, even though the session could
not query them. If a transaction does update or delete rows committed
by a different transaction, those changes do become visible to the
current transaction. For example, you might encounter a situation like
the following:
SELECT COUNT(c1) FROM t1 WHERE c1 = 'xyz';
-- Returns 0: no rows match.
DELETE FROM t1 WHERE c1 = 'xyz';
-- Deletes several rows recently committed by other transaction.

SELECT COUNT(c2) FROM t1 WHERE c2 = 'abc';
-- Returns 0: no rows match.
UPDATE t1 SET c2 = 'cba' WHERE c2 = 'abc';
-- Affects 10 rows: another txn just committed 10 rows with 'abc' values.
SELECT COUNT(c2) FROM t1 WHERE c2 = 'cba';
-- Returns 10: this txn can now see the rows it just updated.
To sum up: if you use innodb with repeatable read isolation mode, then phantom reads may occur if data modification statements in concurrent committed transactions interact with data modification statements within the current transaction.
The linked Wikipedia article on isolation levels describes a general theoretical model. You always need to read the actual product manual to see how a certain feature is implemented, because there may be differences.
In the Wikipedia article only locks are described as a means of preventing phantom reads. However, InnoDB uses the creation of a snapshot to prevent phantom reads in most cases, so there is no need to rely on locks.
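To actually observe the phantom in this setup, run thread 1 at READ COMMITTED, where every consistent read gets a fresh snapshot (a sketch of the session, using the bank_account table from the question):

```sql
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
START TRANSACTION;
SELECT * FROM bank_account;   -- John, Louis
-- thread 2 inserts Charles and commits here
SELECT * FROM bank_account;   -- John, Louis, Charles: the phantom appears
COMMIT;
```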
These are simple UPDATEs on very small tables in an InnoDB database. On occasion, an operation appears to lock, and doesn't timeout. Then every subsequent UPDATE ends with a timeout. The only recourse right now is to ask my ISP to restart the daemon. Every field in the table is used in queries, so all the fields are indexed, including a primary.
I'm not sure what causes the initial lock, and my ISP doesn't provide enough information to diagnose the problem. They are reticent about giving me access to any settings as well.
In a previous job, I was required to handle similar information, but instead I would do an INSERT. Periodically, I had a script run to DELETE old records from the table, so that not so many records needed to be filtered. When SELECTing I used extrapolation techniques so having more than just the most recent data was useful. This setup was rock solid, it never hung, even under very heavy usage.
I have no problem replacing the UPDATE with an INSERT and periodic DELETEs, but it just seems so clunky. Has anyone encountered a similar problem and fixed it more elegantly?
Current Configuration
max_heap_table_size: 16 MiB
count(*): 4 (not a typo, four records!)
innodb_buffer_pool_size: 1 GiB
Edit: DB is failing now; locations has 5 records. Sample error below.
MySQL query:
UPDATE locations SET x = "43.630181733", y = "-79.882244160", updated = NULL
WHERE uuid = "6a5c7e9d-400f-c098-68bd-0a0c850b9c86";
MySQL error:
Error #1205 - Lock wait timeout exceeded; try restarting transaction
locations

Field    Type         Null  Default
uuid     varchar(36)  No
x        double       Yes   NULL
y        double       Yes   NULL
updated  timestamp    No    CURRENT_TIMESTAMP

Indexes:

Keyname  Type     Cardinality  Field
PRIMARY  PRIMARY  5            uuid
x        INDEX    5            x
y        INDEX    5            y
updated  INDEX    5            updated
It's a known issue with InnoDB; see MySQL rollback with lost connection. I would welcome something like innodb_rollback_on_disconnect, as mentioned there. What's happening to you is that connections are being disconnected early, as can happen on the web, and if this happens in the middle of a modifying query, the thread running it will hang but retain a lock on the table.
Right now, accessing InnoDB directly with web services is vulnerable to these kinds of disconnects, and there's nothing you can do within FatCow other than ask them to restart the service for you. Your idea to use MyISAM and low priority is okay and will probably not have this problem, but if you want to stay with InnoDB, I would recommend an approach like the following.
1) Use stored procedures; then the transactions are guaranteed to run to completion and not hang in the event of a disconnect. It's a lot of work, but it improves reliability considerably.
2) Don't rely on autocommit; ideally set it to zero, and explicitly begin and end each transaction with START TRANSACTION and COMMIT.
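For point 2, each unit of work would then look something like this (MySQL syntax; the literal values are taken from the sample query above):

```sql
SET autocommit = 0;
START TRANSACTION;
UPDATE locations SET x = '43.630181733', y = '-79.882244160', updated = NULL
 WHERE uuid = '6a5c7e9d-400f-c098-68bd-0a0c850b9c86';
COMMIT;   -- explicit end of the unit of work, so row locks are released promptly
```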
transaction1> START TRANSACTION;
transaction1> SELECT * FROM t WHERE i > 20 FOR UPDATE;
+------+
| i    |
+------+
|   21 |
|   25 |
|   30 |
+------+
transaction2> START TRANSACTION;
transaction2> INSERT INTO t VALUES(26);
transaction2> COMMIT;
transaction1> select * from t where i > 20 FOR UPDATE;
+------+
| i    |
+------+
|   21 |
|   25 |
|   26 |
|   30 |
+------+
What is a gap lock?
A gap lock is a lock on the gap between index records. Thanks to
this gap lock, when you run the same query twice, you get the same
result, regardless of other sessions' modifications to that table.
This makes reads consistent and therefore makes the replication
between servers consistent. If you execute SELECT * FROM t WHERE id > 1000
FOR UPDATE twice, you expect to get the same value twice.
To accomplish that, InnoDB locks all index records found by the
WHERE clause with an exclusive lock and the gaps between them with a
shared gap lock.
This lock doesn’t affect only SELECT … FOR UPDATE. Here is an example with a DELETE statement:
transaction1 > SELECT * FROM t;
+------+
| age  |
+------+
|   21 |
|   25 |
|   30 |
+------+
Start a transaction and delete the record 25:
transaction1 > START TRANSACTION;
transaction1 > DELETE FROM t WHERE age=25;
At this point we might suppose that only the record 25 is locked. Then we try to insert another value in the second session:
transaction2 > START TRANSACTION;
transaction2 > INSERT INTO t VALUES(26);
ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction
transaction2 > INSERT INTO t VALUES(29);
ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction
transaction2 > INSERT INTO t VALUES(23);
ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction
transaction2 > INSERT INTO t VALUES(31);
Query OK, 1 row affected (0.00 sec)
After running the DELETE statement in the first session, not only has the affected index record been locked, but also the gaps before and after it, with a shared gap lock preventing other sessions from inserting data.
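On MySQL 8.0 you can inspect these locks directly while the first transaction is still open; the performance_schema.data_locks table lists the record locks and gap locks InnoDB currently holds:

```sql
-- Run in a third session while transaction1's DELETE is still uncommitted
SELECT index_name, lock_type, lock_mode, lock_data
  FROM performance_schema.data_locks
 WHERE object_name = 't';
-- gap locks appear with a lock_mode such as 'X,GAP'
```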
If your UPDATE is literally:
UPDATE locations SET updated = NULL;
You are locking all rows in the table. If you abandon the transaction while holding locks on all rows, of course all rows will remain locked. InnoDB is not "unstable" in your environment, it would appear that it is doing exactly what you ask. You need to not abandon the open transaction.