Locking MySQL Transaction

We have a table (say, child) that has a relation to another table (say, parent). In a perfect world, the parent row will always exist, and it will sometimes have a child row. A parent should never have more than one child row right now, but it may in the future (so a unique index is not suitable long-term).
Right now, we use transactions and lock the rows. However, because MySQL locks rows based on the point in time at which each transaction starts, each transaction (if one starts before the other commits) is able to create its own row. Both inserts then take effect and we end up with two rows. I think each transaction is locking only its own new row, and until it commits, that row is hidden from the other thread. Basically, the transactions have a chicken-and-egg type of problem.
How can we enforce a policy of at most one row? We could add a unique index, so the second insert fails when its transaction commits. But then we lose the ability to add multiple rows in the future (when one parent may have two children), which is problematic.
This has to be solved somehow. I just don't know how personally.
Edit 1: Updated Schema information (I'm using a job schema to represent the problem)
Table: job (the "Parent")
job_id
job_title
job_payment
Table: job_assignment (the "Child")
job_id
user_id (assigned worker)
est_hours
opt_insurance
Our application is a SaaS-based product that helps manage workflows. We check whether everything necessary is okay beforehand (whether the job is still in the right status, whether the person trying to accept the job was given access, and so on). Then, if all of that is true, we assign that worker (insert or update the row in the job_assignment table).
Our problem is that the rest of the assignment takes our system 2 to 3 seconds (place payment holds, insert the actual row, email the worker that they are assigned, move the status to assigned, and so on). During this time, another user also tries to accept the job; their thread validates everything up front (where we check whether it's still available), and it is. We then start the process for them too, since each thread is its own transaction and the first one's changes haven't been committed yet.
Then we get two assignment rows. For us, that's bad right now, since we only pay one worker.
We would use application-level locking with temp files or something, but we're in a load-balanced (HA) environment and cannot guarantee that both users hit the same server.
This seems really rudimentary, but I can't figure out how to solve it. Other than a unique index, the only other way I see is to invest heavily in database hardware and shrink that window as much as we can.
Does this clarify anything?
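One pattern that addresses this kind of race (a sketch only, not from the original post; it assumes InnoDB and uses the example schema above with placeholder ids and values) is to lock the parent job row with SELECT ... FOR UPDATE, so the second accepter blocks until the first assignment commits and then sees it when re-checking:
START TRANSACTION;
-- Lock the parent row; a second transaction issuing the same SELECT ... FOR UPDATE
-- for this job_id blocks here until we COMMIT or ROLLBACK.
SELECT job_id FROM job WHERE job_id = 123 FOR UPDATE;
-- Still holding the lock, re-check the job status and whether an assignment already exists.
SELECT COUNT(*) FROM job_assignment WHERE job_id = 123;
-- Only if that count is 0 (checked in application code): insert the assignment and do the rest of the work.
INSERT INTO job_assignment (job_id, user_id, est_hours, opt_insurance)
VALUES (123, 456, 8, 1);
COMMIT;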

Related

What's the correct way to protect against multiple sessions getting the same data?

Let's say I have a table called tickets which has 4 rows, each representing a ticket to a show (in this scenario these are the last 4 tickets available to this show).
3 users attempt a purchase simultaneously: each wants to buy 2 tickets, and they all press their "purchase" button at the same time.
Is it enough to handle the assignment of each set of 2 via a TRANSACTION, or do I need to explicitly call LOCK TABLE on each assignment to protect against the possibility that 2 of the tickets will be assigned to two different users?
The desire is for one of them to get nothing and be told that the system was mistaken in thinking there were available tickets.
I'm confused by the documentation which says that the LOCK will be implicitly released when I start a TRANSACTION, and was hoping to get some clarity on the correct way to handle this.
If you use a transaction, MySQL takes care of locking automatically. That's the whole point of transactions -- they totally prevent any kind of interference due to overlapping requests.
You could use "optimistic locking": When updating the ticket as sold, make sure you include the condition that the ticket is still available. Then check if the update failed (you get a count of rows updated, can be 1 or 0).
For example, instead of
UPDATE tickets SET sold_to = ? WHERE id = ?
do
UPDATE tickets SET sold_to = ? WHERE id = ? AND sold_to IS NULL
This way, the database will ensure that you don't get conflicting updates. No need for explicit locking (the normal transaction isolation level is sufficient).
If you buy two tickets, you still need to wrap the two UPDATE calls in a single transaction (and roll back if either of them fails).
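A minimal sketch of that two-ticket flow (ticket ids 101/102 and buyer id 42 are placeholders; the decision to COMMIT or ROLLBACK would normally live in application code):
START TRANSACTION;
UPDATE tickets SET sold_to = 42 WHERE id = 101 AND sold_to IS NULL;
SELECT ROW_COUNT() INTO @first_ok;   -- 1 if the ticket was still free, 0 if someone beat us to it
UPDATE tickets SET sold_to = 42 WHERE id = 102 AND sold_to IS NULL;
SELECT ROW_COUNT() INTO @second_ok;
-- COMMIT only when @first_ok = 1 AND @second_ok = 1; otherwise ROLLBACK
-- and tell the user the tickets are no longer available.
COMMIT;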

How to retrieve the new rows of a table every minute

I have a table to which rows are only appended (not updated or deleted) within transactions (I'll explain why this is important), and I need to fetch the new, previously unfetched rows of this table every minute with a cron job.
How am I going to do this? In any programming language (I use Perl but that's irrelevant.)
I'll list the ways I thought of to solve this problem, and ask you to show me the correct one (there HAS to be one...).
The first way that popped into my head was to save (in a file) the largest auto_incrementing id of the rows fetched, so in the next minute I can fetch with: WHERE id > $last_id. But that can miss rows. Because new rows are inserted in transactions, it's possible that the transaction that saves the row with id = 5 commits before the transaction that saves the row with id = 4. It's therefore possible that the cron script retrieves row 5 but not row 4, and when row 4 gets committed a split second later, it will never get fetched (because 4 is not > 5, which is the $last_id).
Then I thought I could make the cron job fetch all rows whose date field is within the last TWO minutes, check which of these rows were already retrieved in the previous run of the cron job (to do this I would need to save somewhere which row ids were retrieved), compare, and process only the new ones. Unfortunately this is complicated, and it also doesn't solve the problem that occurs if an inserting transaction takes TWO AND A HALF minutes to commit for some weird database reason, which would make the date too old for the next iteration of the cron job to pick up.
Then I thought of installing a message queue (MQ) like RabbitMQ or any other. The same process that does the inserting transaction, would notify RabbitMQ of the new row, and RabbitMQ would then notify an always-running process that processes new rows. So instead of getting a batch of rows inserted in the last minute, that process would get the new rows one-by-one as they are written. This sounds good, but has too many points of failure - RabbitMQ might be down for a second (in a restart for example) and in that case the insert transaction will have committed without the receiving process having ever received the new row. So the new row will be missed. Not good.
I just thought of one more solution: the receiving processes (there are 30 of them, doing the exact same job on exactly the same data, so the same rows get processed 30 times, once by each receiving process) could write to another table that they have processed row X when they process it; then, when the time comes, they can ask for all rows in the main table that don't exist in the "have_processed" table with an OUTER JOIN query. But I believe (correct me if I'm wrong) that such a query would consume a lot of CPU and disk on the DB server, since it has to compare the entire lists of ids of the two tables to find the new entries (and the table is huge and getting bigger every minute). It would have been fast if there were only one receiving process - then I could add an indexed field named "have_read" to the main table, which would make looking for new rows extremely fast and easy on the DB server.
What is the right way to do it? What do you suggest? The question is simple, but a solution seems hard (for me) to find.
Thank you.
I believe the 'best' way to do this would be to use one process that checks for new rows and delegates them to the thirty consumer processes. Then your problem becomes simpler to manage from a database perspective and a delegating process is not that difficult to write.
If you are stuck with communicating to the thirty consumer processes through the database, the best option I could come up with is to create a trigger on the table, which copies each row to a secondary table. Copy each row to the secondary table thirty times (once for each consumer process). Add a column to this secondary table indicating the 'target' consumer process (for example a number from 1 to 30). Each consumer process checks for new rows with its unique number and then deletes those. If you are worried that some rows are deleted before they are processed (because the consumer crashes in the middle of processing), you can fetch, process and delete them one by one.
Since the secondary table is kept small by continuously deleting processed rows, INSERTs, SELECTs and DELETEs would be very fast. All operations on this secondary table would also be indexed by the primary key (if you place the consumer ID as first field of the primary key).
In MySQL statements, this would look like this:
CREATE TABLE `consumer`(
`id` INTEGER NOT NULL,
PRIMARY KEY (`id`)
);
INSERT INTO `consumer`(`id`) VALUES
(1),
(2),
(3)
-- all the way to 30
;
CREATE TABLE `secondaryTable` LIKE `primaryTable`;
ALTER TABLE `secondaryTable` ADD COLUMN `targetConsumerId` INTEGER NOT NULL FIRST;
-- alter the secondary table further to allow several rows with the same primary key (by adding targetConsumerId to the primary key)
DELIMITER //
CREATE TRIGGER `mark_to_process` AFTER INSERT ON `primaryTable`
FOR EACH ROW
BEGIN
-- by doing a cross join with the consumer table, this automatically inserts the correct amount of rows and adding or deleting consumers is just a matter of adding or deleting rows in the consumer table
INSERT INTO `secondaryTable`(`targetConsumerId`, `id`, `field1`, `field2`) SELECT `consumer`.`id`, `primaryTable`.`id`, `primaryTable`.`field1`, `primaryTable`.`field2` FROM `consumer`, `primaryTable` WHERE `primaryTable`.`id` = NEW.`id`;
END//
DELIMITER ;
-- loop over the following statements in each consumer until the SELECT doesn't return any more rows
START TRANSACTION;
SELECT * FROM secondaryTable WHERE targetConsumerId = MY_UNIQUE_CONSUMER_ID LIMIT 1;
-- here, do the processing (so before the COMMIT so that crashes won't let you miss rows)
DELETE FROM secondaryTable WHERE targetConsumerId = MY_UNIQUE_CONSUMER_ID AND id = PRIMARY_TABLE_ID_OF_ROW_JUST_SELECTED;
COMMIT;
I've been thinking about this for a while. So, let me see if I got it right. You have a HUGE table to which N processes (an amount which may vary over time) write; let's call them producers. Now, there are M other processes (an amount which may also vary over time) that need to process each of those added records at least once; let's call them consumers.
The main issues detected are:
Making sure the solution will work with dynamic N and M
The unprocessed records need to be tracked for each consumer
The solution has to scale as well as possible, given the huge number of records
In order to tackle those issues I thought of this. Create this table (the two columns together form the PK):
PENDING_RECORDS(ConsumerID, HugeTableID)
Modify the producers so that each time they add a record to the HUGE_TABLE they also add M records to the PENDING_RECORDS table: one for each ConsumerID that exists at that time, together with the new HugeTableID. Each time a consumer runs, it will query the PENDING_RECORDS table and find a small number of matches for itself. It will then join against the HUGE_TABLE (note it will be an inner join, not a left join) and fetch the actual data it needs to process. Once the data is processed, the consumer deletes the fetched records from the PENDING_RECORDS table, keeping it decently small.
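A rough SQL sketch of that flow (a sketch under assumptions: HUGE_TABLE has an auto-increment id, a consumer table like the one in the trigger-based answer above lists the active consumer ids, and MY_CONSUMER_ID / ID_JUST_PROCESSED are placeholders):
CREATE TABLE PENDING_RECORDS (
  ConsumerID  INTEGER NOT NULL,
  HugeTableID INTEGER NOT NULL,
  PRIMARY KEY (ConsumerID, HugeTableID)
);
-- Producer, in the same transaction as its insert into HUGE_TABLE
-- (one pending row per registered consumer):
INSERT INTO PENDING_RECORDS (ConsumerID, HugeTableID)
SELECT id, LAST_INSERT_ID() FROM consumer;
-- Consumer, every minute (an inner join, as noted above):
SELECT h.* FROM PENDING_RECORDS p
JOIN HUGE_TABLE h ON h.id = p.HugeTableID
WHERE p.ConsumerID = MY_CONSUMER_ID;
-- After processing each row, remove its pending marker:
DELETE FROM PENDING_RECORDS WHERE ConsumerID = MY_CONSUMER_ID AND HugeTableID = ID_JUST_PROCESSED;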
Interesting, I must say :)
1) First of all - is it possible to add a field to the table that only has rows added (let's call it 'transactional_table')? I mean, is it a design paradigm and you have a reason not to do any sort of updates on this table, or is it "structurally" blocked (i.e. the user connecting to the db has no privileges to perform updates on this table)?
Because then the simplest way to do it is to add a "have_read" column to this table with a default of 0, and update that column to 1 on fetched rows (even if 30 processes do this simultaneously, you should be fine, as it is very fast and it won't corrupt your data). Even if 30 processes mark the same 1000 rows as fetched - nothing is corrupted. Although if you are not running on InnoDB, this might not be the best way as far as performance is concerned (MyISAM locks whole tables on updates, InnoDB only the rows that are updated).
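A sketch of that variant (assuming InnoDB and an auto-increment id column; the batch size and the id list are placeholders):
ALTER TABLE transactional_table
  ADD COLUMN have_read TINYINT NOT NULL DEFAULT 0,
  ADD INDEX idx_have_read (have_read);
-- Each run: fetch a batch of unread rows, process them, then mark them as read.
SELECT * FROM transactional_table WHERE have_read = 0 ORDER BY id LIMIT 1000;
UPDATE transactional_table SET have_read = 1 WHERE id IN (IDS_OF_THE_PROCESSED_BATCH);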
2) If this is not something you can use - I would surely check out the solution you gave as your last one, with a little modification. Create a table (let's say: fetched_ids), and save the fetched rows' ids in that table. Then you could use something like:
SELECT tt.* FROM transactional_table tt
LEFT JOIN fetched_ids fi ON tt.id = fi.row_id
WHERE fi.row_id IS NULL
This will return the rows from your transactional table that have not been recorded as already fetched. As long as both (tt.id) and (fi.row_id) have (ideally unique) indexes, this should work just fine even on large data sets. MySQL handles JOINs on indexed fields pretty well. Don't be afraid to try it out - create a new table, copy ids to it, delete some of them, and run your query. You'll see the results and you'll know whether they are satisfactory :)
P.S. Of course, adding rows to this 'fetched_ids' table should be done carefully so as not to create unnecessary duplicates (30 simultaneous processes could write 30 times the data you need - if you care about performance, you should watch out for this case).
How about a second table with a structure like this:
source_fk - this would hold an ID of the data rows you want to read.
process_id - This would be a unique id for one of the 30 processes.
Then do a LEFT JOIN and exclude items from your source that already have an entry matching the specified process_id.
Once you get your results, just go back and add a source_fk / process_id row for each result you processed.
One plus of this approach is that you can add more processes later on with no problem.
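A sketch of that query, using placeholder names (processed_log for the second table, source for the data table, 7 for the current process's id):
SELECT s.*
FROM source s
LEFT JOIN processed_log pl ON pl.source_fk = s.id AND pl.process_id = 7
WHERE pl.source_fk IS NULL;
-- After handling each row, record it so the next run skips it:
INSERT INTO processed_log (source_fk, process_id) VALUES (ID_JUST_PROCESSED, 7);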
I would try adding a timestamp column and use it as a reference when retrieving new rows.

How to synchronize table updates

I have a single database table containing some financial information. Multiple users may be viewing and updating it at the same time from a web form on their computers.
What I want is that anyone who does an update must be doing so based on the latest table contents. I mean, two people may click update at the same time. Say the first person's update is successful. Now the second person's update is based on stale information, because they never got the chance to see the latest update from the first person.
How can I avoid such a situation?
You have to set the isolation level of your database server to at least REPEATABLE READ. When it's used, dirty reads and non-repeatable reads cannot occur. It means that locks will be placed on all data that is used in a query, and other transactions cannot update that data.
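Another way to express "updates must be based on the latest contents" is the optimistic check described in the tickets answer earlier on this page: carry the value (or a version/timestamp column) that the user last saw into the WHERE clause and look at the affected-row count. A sketch with placeholder table, columns and values:
UPDATE account_balance
SET amount = 150.00, last_modified = NOW()
WHERE id = 9 AND last_modified = '2013-05-01 10:15:00';
-- If ROW_COUNT() is 0, someone else updated the row first;
-- re-read the row, show the user the fresh data, and retry.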

How to avoid duplicate entries on INSERT in MySQL?

My application generates the ID numbers when registering a new customer and then inserts them into the customer table.
The method for generating the ID is to read the last ID number and increment it by one, then insert it into the table.
The application will be used in a network environment with more than 30 users, so there is a possibility (probability?) that at least two users will read the same last ID number at the saving stage, which means both will get the same ID number.
Also, I'm using transactions. I need a logical solution that I couldn't find on other sites.
Please reply with a description so I can understand it very well.
Use an auto-increment column; you can get the last id issued with mysql_insert_id.
If for some reason that's not doable, you can create another table to hold the last id used, then increment that in a transaction, and then use it as the key for your insert into the main table. It has to be two transactions though, otherwise you'll have the same issue you have now. That can get messy and is an extra level of maintenance, though (reset your next-id table to zero while there are still rows in the related table and things go nipples up quick).
Short of putting an exclusive lock on the table during the insert operation (not even slightly recommended), your current solution just can't work.
Okay, expanded answer based on leaving the schema as it is.
Option 1 in pseudo code
StartTransaction
try
    NextId = GetNextId(...)
    AddRecord(NextId, ...)
    commit transaction
catch Primary Key Violation
    rollback transaction
    Do the entire thing again
end
Obviously you could end up in an infinite loop here - unlikely, but possible; you'd probably run out of stack space first.
You could somehow queue the requests and then attempt to process them, removing each one from the queue if it succeeds.
BUT make customerid an auto-increment column and the entire problem disappears.
It will still be the primary key, you just don't have to work out what it needs to be any more, in fact you don't supply it in the insert statement, mysql will just take care of it for you.
The only thing you have to remember is that if you need the id that has been automatically created, you have to request it within the same transaction.
So your insert query needs to be in the form
INSERT INTO SomeTable (SomeColumns) VALUES (SomeValues);
SELECT LAST_INSERT_ID();
or, if issuing multiple statements gets in the way, wrap the two statements in a START TRANSACTION ... COMMIT pair.
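A minimal sketch of the auto-increment version (the customer table and its columns are placeholders based on the question):
CREATE TABLE customer (
  customer_id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  name        VARCHAR(100) NOT NULL
);
-- No id is supplied; MySQL hands out the next number atomically,
-- so two of the 30 users can never receive the same one.
INSERT INTO customer (name) VALUES ('Alice');
SELECT LAST_INSERT_ID();   -- the id generated by this connection's last insert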

What kind of locking/transaction isolation level is appropriate for this situation?

Let's say I have a Student and a School table. One operation that I am performing is this:
Delete all Students that belong to a School
Modify the School itself (maybe change the name or some other field)
Add back a bunch of students
I am not concerned about this situation: Two people edit the School/Students at the same time. One submits their changes. Shortly after, someone else submits their changes. This won't be a problem because, in the second user's case, the application will notice that they are attempting to overwrite a new revision.
I am concerned about this: Someone opens the editor for the Schools/Students (which involves reading from the tables) while at the same time a transaction that is modifying them is running.
So basically, a read should not be able to run while a transaction is modifying the tables. Additionally, a write shouldn't be able to occur at the same time either.
Only in the SERIALIZABLE isolation level will MySQL prevent you from reading rows that are being modified by another transaction. In any lower isolation level, you will see the rows in the state they were in before the transaction that modifies them was started. Of course, in READ UNCOMMITTED, the rows will be seen as deleted/modified even though that transaction hasn't completed.
If you use SELECT ... FOR UPDATE, the rows you read are locked until your transaction ends, so a concurrent transaction that tries to lock or modify them has to wait.
You can use table locking to prevent this. See the LOCK TABLES documentation for more info on locking tables.
EDIT
Have a look at this question about how to lock rows so that they can't be selected in another transaction. I think a similar method can be applied to tables as well.
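As a sketch of that row-locking idea applied to this schema (assuming InnoDB; school id 5 and the column names are placeholders), the editing transaction could lock the School row first, so any other session that takes the same lock waits until the edit commits. Note that plain SELECTs below SERIALIZABLE will still read the old snapshot, as described above:
START TRANSACTION;
-- Lock the parent School row; another session doing SELECT ... FOR UPDATE on it waits here.
SELECT * FROM School WHERE id = 5 FOR UPDATE;
DELETE FROM Student WHERE school_id = 5;
UPDATE School SET name = 'New Name' WHERE id = 5;
INSERT INTO Student (school_id, name) VALUES (5, 'First Student'), (5, 'Second Student');
COMMIT;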