What kind of locking/transaction isolation level is appropriate for this situation? - mysql

Let's say I have a Student and a School table. One operation that I am performing is this:
1. Delete all Students that belong to a School
2. Modify the School itself (maybe change the name or some other field)
3. Add back a bunch of Students
I am not concerned about this situation: Two people edit the School/Students at the same time. One submits their changes. Shortly after, someone else submits their changes. This won't be a problem because, in the second user's case, the application will notice that they are attempting to overwrite a new revision.
I am concerned about this: Someone opens the editor for the Schools/Students (which involves reading from the tables) while at the same time a transaction that is modifying them is running.
So basically, a read should not be able to run while a transaction is modifying the tables. Additionally, a write shouldn't be able to occur at the same time either.

Only in the SERIALIZABLE isolation level will MySQL prevent you from reading rows that are being modified by another transaction. At any lower isolation level, you will see the rows in the state they were in before the modifying transaction started. In READ UNCOMMITTED, of course, the rows will appear deleted/modified even though the transaction hasn't been committed.
Alternatively, if you use SELECT ... FOR UPDATE, the read will block until the transaction holding the row locks commits or rolls back, regardless of isolation level.
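A minimal sketch of both options, assuming InnoDB and hypothetical `School`/`Student` tables keyed on `school_id`:

```sql
-- Writer: wrap the delete/modify/insert in a single transaction
START TRANSACTION;
DELETE FROM Student WHERE school_id = 42;
UPDATE School SET name = 'New Name' WHERE school_id = 42;
INSERT INTO Student (school_id, name) VALUES (42, 'Alice'), (42, 'Bob');
COMMIT;

-- Reader, option 1: raise the isolation level for this session.
-- Under SERIALIZABLE, plain SELECTs take shared locks and therefore
-- block while the writer's transaction is in progress.
SET SESSION TRANSACTION ISOLATION LEVEL SERIALIZABLE;
START TRANSACTION;
SELECT * FROM School  WHERE school_id = 42;
SELECT * FROM Student WHERE school_id = 42;
COMMIT;

-- Reader, option 2: lock the rows explicitly at any isolation level
START TRANSACTION;
SELECT * FROM School WHERE school_id = 42 FOR UPDATE;
COMMIT;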

You can use table-level locking to prevent this. See the MySQL manual page on LOCK TABLES for more info.
EDIT
Have a look at how rows can be locked so that they can't be selected in another transaction (SELECT ... FOR UPDATE). A similar approach can be applied to whole tables with LOCK TABLES.
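A sketch of the table-locking approach, with the same hypothetical table names (note that LOCK TABLES implicitly commits any active transaction — it is a session-level lock, not a transactional one):

```sql
-- Take write locks on both tables; other sessions can neither read nor
-- write them until UNLOCK TABLES
LOCK TABLES School WRITE, Student WRITE;
DELETE FROM Student WHERE school_id = 42;
UPDATE School SET name = 'New Name' WHERE school_id = 42;
INSERT INTO Student (school_id, name) VALUES (42, 'Alice');
UNLOCK TABLES;
```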

Related

Locking Mysql Transaction

We have a table (say, child) that has a relation to another table (say, parent). In a perfect world it will always have a parent row and sometimes a child row. It should never have more than one child row today, but it may in the future (so a unique index is not suitable long-term).
Right now we use transactions and lock the rows. However, because MySQL locks rows as of the point in time at which the transaction starts, each transaction (if one starts before the other commits) can insert its own row: the uncommitted row is invisible to the other thread, so there is nothing for it to lock against. Both inserts take effect and we end up with two rows. Basically, the transactions have a chicken-and-egg type of problem.
How can we enforce a policy of at most one row? We could add a unique index, so the second insert fails when its transaction commits. But then we lose the ability to allow multiple rows in the future (when one parent may have two children), which is problematic.
This has to be solved somehow. I just don't know how personally.
Edit 1: Updated Schema information (I'm using a job schema to represent the problem)
Table: job (the "Parent")
job_id
job_title
job_payment
Table: job_assignment (the "Child")
job_id
user_id (assigned worker)
est_hours
opt_insurance
Our application is a SaaS product that helps manage workflows. We check beforehand whether everything necessary is okay (whether the job is still in the right status, whether the person trying to accept the job was given access, and so on). If that checks out, we assign the worker (insert or update the row in the job_assignment table).
Our problem is that the rest of the assignment takes our system 2 to 3 seconds (place payment holds, insert the actual row, email the worker that they are assigned, move the status to assigned, and so on). During this window, another user also tries to accept the job; their thread validates everything (where we check if it's still available), and it passes, because each thread is its own transaction and the first one's changes haven't been committed yet. We then start the process for them too.
Then we get two assignment rows. For us that's bad, since we only pay one worker.
We would use application locking with temp files or something, but we're on a load-balanced (HA) environment and cannot guarantee that both users hit the same server.
This seems really rudimentary, but I can't figure out how to solve it. Other than a unique index, the only way I see is to invest heavily in hardware for the DB and make that window as small as we can.
Does this clarify anything?
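One way to close this window (a sketch, not taken from the question, assuming InnoDB and a hypothetical `status` column on `job`): lock the parent row before validating, so the second accepter's transaction blocks until the first assignment commits, and re-runs its checks against committed data.

```sql
START TRANSACTION;

-- Serialize on the parent: a second worker's identical statement blocks
-- here until this transaction commits or rolls back
SELECT status FROM job WHERE job_id = 7 FOR UPDATE;

-- Re-check availability *after* acquiring the lock; if status is no
-- longer 'open', ROLLBACK and tell the user the job is taken

INSERT INTO job_assignment (job_id, user_id, est_hours, opt_insurance)
VALUES (7, 1234, 8.0, 1);

UPDATE job SET status = 'assigned' WHERE job_id = 7;

COMMIT;
```

Because the existence check happens while holding the lock on the parent, the "at most one child" policy is enforced without a unique index, and it works across load-balanced application servers since the lock lives in the database.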

Tree editing by multiple users simultaneously?

I have a tree-like hierarchical category structure which I need to save in a DB. I used MPTT (nested sets) to store this data.
The problem is that this category needs to be editable by multiple users, sometimes simultaneously.
How to preserve the integrity of the structure, without putting too much constraints on the users?
This matters because of the nature of MPTT: changing one element in the structure affects other elements too (their left/right values change).
For example, User A deletes Node1 while User B adds Leaf1 under Node1. This should give User B an error that Node1 no longer exists, but I believe it would just create confusion for User B...
Are there any practical solutions for this problem?
What you are looking for is optimistic concurrency. It means you allow the user to begin editing the record, but before you apply the changes you check whether the record is still in the same state it was in when the user started editing.
The other approach is pessimistic locking: lock all the records that might be affected by the edit. That guarantees integrity, but it blocks other users from making any changes in the meantime.
How would this differ from any other multi-user transactional system?
Enclose the whole user operation in a transaction, so that all participating rows are locked. Then validate the inputs against the current data and perform the update.
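A minimal optimistic-concurrency sketch, assuming a hypothetical `category` table to which a `version` column has been added:

```sql
-- At read time: remember the version the user started editing from
SELECT category_id, name, lft, rgt, version
FROM category WHERE category_id = 5;

-- At save time: only apply the change if nothing moved in the meantime
UPDATE category
SET name = 'Renamed', version = version + 1
WHERE category_id = 5
  AND version = 3;   -- the version value read earlier

-- If ROW_COUNT() = 0, another user changed the tree first:
-- reload the structure and ask the user to retry
```

For structural MPTT moves (which touch many rows' left/right values), the version check would go on the tree's root or on a separate per-tree row, so any concurrent restructure invalidates the edit.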

How to synchronize table updates

I have a single database table containing some financial information. Multiple users may be viewing and updating it at the same time from a web form on their computers.
What I want is that anyone who does an update must be doing so based on the latest table contents. Two people may click update at almost the same time: say the first person's update succeeds; the second person's update is then based on stale information, since they never got a chance to see the first person's change.
How to avoid such situation?
You have to set the isolation level of your database server to at least REPEATABLE READ. When it is used, dirty reads and nonrepeatable reads cannot occur within a transaction. Note, however, that InnoDB implements REPEATABLE READ with consistent snapshots rather than read locks, so on its own it does not prevent a lost update: to guarantee that each update is based on the latest committed data, you also need either SELECT ... FOR UPDATE (pessimistic) or a version/timestamp check at write time (optimistic).
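A sketch of the pessimistic read-modify-write pattern, assuming a hypothetical `account` table:

```sql
START TRANSACTION;

-- Lock the row: a second user's identical SELECT ... FOR UPDATE blocks
-- here until this transaction commits, so they always see the latest data
SELECT balance FROM account WHERE account_id = 1 FOR UPDATE;

-- Compute the new value from the balance just read, then write it back
UPDATE account SET balance = balance - 100 WHERE account_id = 1;

COMMIT;
```

For a web form, where the user may hold the page open for minutes, the optimistic variant (compare a version or hash at write time) is usually preferable to holding a row lock across the whole edit.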

Question for Conflict in insertion of data in DB by user and admin, see below for description

I have a case where, at one end, an Admin is editing the details of user "A" in the "users" table while, at the same time, user "A" edits their own details in the same table. Whose change will be reflected? And what can be done to give one of them priority?
Thanks and Regards...
As Michael J.V. says, the last one wins - unless you have a locking mechanism, or build application logic to deal with this case.
Locking mechanisms tend to dramatically reduce the performance of your database.
http://dev.mysql.com/doc/refman/5.5/en/internal-locking.html gives an overview of the options in MySQL. However, the scenario you describe - Admin accesses record, has a lock on that record until they modify the record - will cause all kinds of performance issues.
The alternative is to check for a "dirty" record prior to writing the record back. Pseudocode:
1. User finds record
2. Application stores (a hash of) the record in memory
3. User modifies a copy of the record
4. User instructs application to write the record to the database
5. Application retrieves current database state and compares it to the original
   - If identical: write the change to the database
   - If not identical: notify the user
In this model, the admin's change would trigger the "notify user" flow; your application may decide to stop the write, or force the user to refresh the record from the database prior to modifying it and trying again.
More code, but far less likely to cause performance/scalability issues.
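The compare-and-write step can even be pushed into the UPDATE itself. A sketch assuming hypothetical `users` columns, with the fingerprint captured at read time:

```sql
-- At read time: fetch the record plus a fingerprint of its current state
SELECT user_id, name, email,
       MD5(CONCAT_WS('|', name, email)) AS fingerprint
FROM users WHERE user_id = 42;

-- At write time: only update if the fingerprint still matches
UPDATE users
SET name = 'New Name', email = 'new@example.com'
WHERE user_id = 42
  AND MD5(CONCAT_WS('|', name, email)) = '<fingerprint read earlier>';

-- ROW_COUNT() = 0 -> the record was modified in between: notify the user
```

This makes the check-and-write atomic in a single statement, so no race window remains between steps 5 and 6 of the pseudocode.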

How to LOCK a row in MySQL?

I'm working on a CRM desktop application which is going to be used by more than one agent at a time, and all agents work through the same list of customers. What I need to do here is avoid conflicts between agents: once an agent selects a customer from the list, the others shouldn't be able to see that row anymore, or in other words they shouldn't be able to select that customer until the first agent is done! The simplest way that comes to mind, which may sound stupid, is to add two fields, LOCK (BIT) and LOCK_EXPIRY (DATETIME), and manage them myself. But I think there should be another way to lock a row for a particular session. I searched on Google and found two InnoDB locking methods, but I'm not sure they can help in this case.
I suggest you add the two fields you described, except replace LOCK(BIT) with LOCKED_BY(AGENT_ID). Otherwise if the agent that has locked the list of customers refreshes his/her page, the locked rows may disappear until the lock expires.
I think you can use the GET_LOCK() function in MySQL:
http://dev.mysql.com/doc/refman/5.0/en/miscellaneous-functions.html#function_get-lock
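A sketch of the GET_LOCK() approach, with a lock name built from a hypothetical customer id:

```sql
-- Try to take a named advisory lock for this customer, waiting up to
-- 0 seconds. Returns 1 if acquired, 0 if another session holds it
-- (skip or grey out the row in that case).
SELECT GET_LOCK('customer-1234', 0);

-- ... agent works on customer 1234 ...

-- Release when done
SELECT RELEASE_LOCK('customer-1234');
```

Unlike the LOCK/LOCK_EXPIRY columns, these locks are advisory (every client must cooperate by calling GET_LOCK first), but they are released automatically if the agent's session disconnects, so no expiry bookkeeping is needed.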