I have recently been reading the InnoDB code and need to write some code on top of it.
I know that in S2PL, a blocked transaction is resumed after the conflicting transaction finishes. However, I am not sure how InnoDB resumes transactions after a block. Is there a thread that handles this kind of work? Thanks a lot.
When a lock is needed but can't be granted, the lock request is entered into a lock queue at the page level. When any lock is released, the releasing transaction searches the queue and grants any waiting locks that no longer conflict. See lock_rec_enqueue_waiting, lock_rec_dequeue_from_page, lock_grant, etc. in storage/innobase/lock/lock0lock.c in the MySQL source code.
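So there is no dedicated lock-granting thread: the waiting transaction sleeps, and the releasing transaction wakes it. A minimal two-session sketch of that behaviour, assuming a hypothetical InnoDB table t:

    -- Session 1: take an exclusive row lock and hold it.
    START TRANSACTION;
    SELECT * FROM t WHERE id = 1 FOR UPDATE;

    -- Session 2: this request conflicts, so it is enqueued and the
    -- session blocks.
    SELECT * FROM t WHERE id = 1 FOR UPDATE;

    -- Session 1: releasing the locks walks the queue (lock_grant) and
    -- wakes session 2, which now holds the row lock.
    COMMIT;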
Related
I have a question about MySQL.
Is the redo log buffer persisted to disk more slowly than the buffer pool?
If the system goes down while a transaction is not yet committed, is it possible that the events in the redo log buffer have not been persisted yet, but the dirty pages in the buffer pool have already been persisted to disk?
I could not find the relevant documentation; please forgive me for being a novice.
Thanks.
InnoDB is designed to favor COMMIT over ROLLBACK. That is, when a transaction finishes, there is very little left to do if you are COMMITting, but a lot to do if you are ROLLBACKing.
Rollback must undo all the inserts/deletes/updates that were optimistically performed and essentially completed.
For COMMIT, the undo log is eventually thrown away. For ROLLBACK it is read, and the changes it records are reversed.
Also, note that an UPDATE or DELETE of a million rows generates a lot of undo log entries, hence will take a really long time to undo. Perhaps we should discuss what query you were ROLLBACKing; there may be a more efficient way to design the data flow.
Another thing to note is that all changes to the data and indexes happen in the buffer_pool. If the big query changed so much that it overflowed the buffer_pool, the evicted blocks will need to be reloaded in order to undo them.
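If the problem is a huge UPDATE or DELETE, one common workaround is to break the work into small batches so that any single rollback stays cheap. A hedged sketch, assuming a hypothetical history table with a created column:

    -- Run this from the application in a loop until it reports 0 rows
    -- affected; each batch is a small transaction that is quick to
    -- undo if anything goes wrong.
    DELETE FROM history
    WHERE created < '2012-01-01'
    LIMIT 1000;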
I want to use a MySQL row-level lock. I can't lock the complete table. I want to prevent two processes from processing two different messages for the same server at the same time.
What I thought was that I could have a table called server_lock, and if one process starts working on a server it will insert a row into the table.
The problem with this approach is that if the application crashes, we need to remove the lock manually.
Is there a way to take a row-level lock such that the lock gets released if the application crashes?
Edit
I am using C++ as the language.
My application is similar to a message queue, but the difference is that there are two queues, each populated by one process. If two actions belong to the same object and both processes handle that object at the same time, it may result in wrong data. So I want a locking mechanism between these two queues so that the two processors don't modify the same object at the same time.
I can think of two ways:
Implement an error handler in your program that removes the lock. Without knowing anything about your program it is hard to say how to do this, but most languages have some way to run cleanup work before exiting after a crash. This is dangerous, because a crash happens when something is not right; if you continue to do any work, it is possible that you corrupt the database or something like that.
Periodically update the lock. Add a thread to your program that periodically refreshes the lock, or refresh it in some loop you are already running. Then, when a lock has not been updated in a while, you know that it belonged to a program that crashed (sketched below).
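Here is a minimal SQL sketch of the second approach; the server_lock schema below, the worker name, and the 30-second staleness window are all assumptions for illustration:

    -- Hypothetical lock table: one row per locked object.
    CREATE TABLE server_lock (
      object_id INT         PRIMARY KEY,
      owner     VARCHAR(64) NOT NULL,
      heartbeat TIMESTAMP   NOT NULL
    ) ENGINE=InnoDB;

    -- Acquire: insert a row if the lock is free ...
    INSERT IGNORE INTO server_lock VALUES (42, 'worker-1', NOW());

    -- ... or steal it if the holder's heartbeat is stale (it crashed).
    -- The application checks rows-affected to know whether it won.
    UPDATE server_lock
    SET owner = 'worker-1', heartbeat = NOW()
    WHERE object_id = 42
      AND (owner = 'worker-1'
           OR heartbeat < NOW() - INTERVAL 30 SECOND);

    -- Refresh: run every few seconds while holding the lock.
    UPDATE server_lock
    SET heartbeat = NOW()
    WHERE object_id = 42 AND owner = 'worker-1';

    -- Release on a clean exit.
    DELETE FROM server_lock
    WHERE object_id = 42 AND owner = 'worker-1';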
According to the MySQL documentation, if a session holds a READ lock on a table and another session then requests a WRITE lock on the same table, the WRITE lock must be granted and the READ lock must wait.
I tried it: I connected to the MySQL server from two consoles (Windows 7), locked table A from the first console (READ lock), then tried to lock the same table from the second console (WRITE lock), but the second console just waits until the first lock is released.
Who is wrong: me or the documentation? (MySQL Server version 5.5.27)
The citation from the MySQL official documentation:
"WRITE locks normally have higher priority than READ locks to ensure
that updates are processed as soon as possible. This means that if one
session obtains a READ lock and then another session requests a WRITE
lock, subsequent READ lock requests wait until the session that
requested the WRITE lock has obtained the lock and released it."
It's written right there:
This means that if one session obtains a READ lock and then another session requests a WRITE lock, subsequent READ lock requests wait until the session that requested the WRITE lock has obtained the lock and released it.
The READ locks that were already obtained won't be broken mid-operation. That would cause havoc. It's the subsequent requests that get to wait.
The key word is subsequent in "subsequent READ lock requests". I think this is saying that existing READ locks will not be paused; instead, READ lock requests that arrive while a WRITE lock is pending or in effect will be deferred. So I think the docs are right.
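A three-session sketch of the behaviour described above, using the table A from the question:

    -- Session 1: obtains and holds a READ lock.
    LOCK TABLES A READ;

    -- Session 2: requests a WRITE lock. It waits; the READ lock that
    -- is already held is not broken.
    LOCK TABLES A WRITE;

    -- Session 3: a *subsequent* READ request. With the default
    -- priorities it queues behind the pending WRITE lock instead of
    -- jumping ahead of it.
    LOCK TABLES A READ;

    -- Session 1: releasing its lock lets session 2 obtain the WRITE
    -- lock first; session 3's READ lock is granted only after that.
    UNLOCK TABLES;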
As per the documentation link given below:
When a lock wait timeout occurs, the current statement is not executed. The current transaction is not rolled back. (Until MySQL 5.0.13, InnoDB rolled back the entire transaction if a lock wait timeout happened. You can restore this behavior by starting the server with the --innodb_rollback_on_timeout option, available as of MySQL 5.0.32.)
http://dev.mysql.com/doc/refman/5.0/en/innodb-parameters.html#sysvar_innodb_lock_wait_timeout
Does this mean that when a lock wait timeout occurs, it compromises transactional integrity?
"Rollback on timeout" was the default behaviour until 5.0.13, and I guess that was the correct way to handle such situations. Does anyone think this should be the default behaviour, and that the user should not have to add a parameter for functionality that is taken for granted?
It does not compromise transactional integrity; it just gives you a chance to either retry, or do something else such as commit the work completed so far, or roll back.
For small transactions, and for simplicity, you might as well switch on the rollback-on-timeout option. However, if you are running transactions over many hours, you might appreciate the chance to react to a timeout.
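A two-session sketch of the non-rollback behaviour, assuming a hypothetical InnoDB table t (innodb_lock_wait_timeout is settable per session in MySQL 5.5 and later):

    -- Session 1: hold a row lock by leaving the transaction open.
    START TRANSACTION;
    UPDATE t SET v = v + 1 WHERE id = 1;

    -- Session 2: time out quickly for the demonstration.
    SET SESSION innodb_lock_wait_timeout = 5;
    START TRANSACTION;
    UPDATE t SET v = v + 1 WHERE id = 1;
    -- ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting
    -- transaction

    -- Without --innodb_rollback_on_timeout only that statement failed;
    -- the transaction is still open, so you may retry the statement,
    -- COMMIT the work done so far, or ROLLBACK.
    ROLLBACK;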
I am using TopLink with Struts 2 for a high-usage app; the app always accesses a single table with multiple reads and writes per second. This causes a lock_wait_timeout error and the transaction rolls back, causing the data just entered to disappear from the front end (MySQL's autocommit has been set to 1). The exception has been caught and sent to an error page in the app, but a rollback still occurs (it has to be a TopLink exception, as MySQL does not have the rollback-on-timeout feature turned on). The raw data file, ibdata1, shows the entry when opened in an editor. As this happens infrequently, I have not been able to replicate it in test conditions.
Can anyone be kind enough to provide some sort of way out of this dilemma? What sort of approach suits such a high-access pattern (constant reads and writes from the same table all the time)? Any help would be greatly appreciated.
What is the nature of your concurrent reads/updates? Are you updating the same rows constantly from different sessions? What do you expect to happen when two sessions update the same row at the same time?
If it is just reads conflicting with updates, consider reducing your transaction isolation on your database.
If you have multiple writes conflicting, then you may consider using pessimistic locking to ensure each transaction succeeds. But either way you will have a lot of contention, so you may want to reconsider your data model or your application's usage of the data.
See http://en.wikibooks.org/wiki/Java_Persistence/Locking
lock_wait_timeouts are a fact of life for transactional databases. The normal response should usually be to trap the error and attempt to re-run the transaction. Not many developers seem to understand this, so it bears repeating: if you get a lock_wait_timeout error and you still want to commit the transaction, then run it again.
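For completeness, here is a hedged sketch of that retry pattern as a MySQL stored procedure; the accounts table, the transfer logic, and the three-attempt limit are illustrative assumptions (most applications would run this loop in client code instead):

    DELIMITER //
    CREATE PROCEDURE transfer_with_retry(IN from_id INT, IN to_id INT,
                                         IN amt DECIMAL(10,2))
    BEGIN
      DECLARE attempts  INT     DEFAULT 0;
      DECLARE done      BOOLEAN DEFAULT FALSE;
      DECLARE timed_out BOOLEAN DEFAULT FALSE;
      -- 1205 = ER_LOCK_WAIT_TIMEOUT; note the error and keep going.
      DECLARE CONTINUE HANDLER FOR 1205 SET timed_out = TRUE;

      WHILE NOT done AND attempts < 3 DO
        SET timed_out = FALSE;
        START TRANSACTION;
        UPDATE accounts SET balance = balance - amt WHERE id = from_id;
        UPDATE accounts SET balance = balance + amt WHERE id = to_id;
        IF timed_out THEN
          ROLLBACK;                    -- undo the partial work ...
          SET attempts = attempts + 1; -- ... and re-run the whole unit
        ELSE
          COMMIT;
          SET done = TRUE;
        END IF;
      END WHILE;
    END //
    DELIMITER ;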
Other things to look out for:
Persistent connections combined with not explicitly COMMITting your transactions lead to long-running transactions that hold unnecessary locks.
Since you have auto-commit off, if you log in from the mysql CLI (or any other interactive query tool) and start running queries, you stand a significant chance of locking rows and not releasing them in a timely manner, as the sketch below shows.
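A short illustration of the second point, with a hypothetical table t:

    -- With auto-commit off, the first statement silently opens a
    -- transaction whose locks persist until you end it explicitly.
    SET autocommit = 0;
    UPDATE t SET v = v + 1 WHERE id = 1;
    -- (no COMMIT yet) ... every other session touching id = 1 is now
    -- blocked, so always finish interactive work explicitly:
    COMMIT;   -- or ROLLBACK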