Database-safe concurrency in Ruby on Rails with MySQL

Trying to figure out how concurrency is dealt with in Ruby on Rails.
How do I get a segment of code to lock on rows in the database and force rollbacks when needed?
More specifically, is there a way to force a certain segment of code to complete entirely, and if not, roll back? I'm looking to add history to transactions in my project, and I don't want a transaction to commit without its history being saved. If the server fails between the two actions (saving the transaction and saving the history), the database could end up in an illegal state.

You want to look at ActiveRecord Transactions and Pessimistic Locking.
Account.transaction do
  account = Account.find(account_id)
  account.lock!
  if account.balance >= 100
    account.balance -= 100
    account.save
  end
end
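Under the hood, `lock!` reloads the row with `SELECT ... FOR UPDATE`, so concurrent withdrawals serialize on that row. The same check-then-deduct pattern can be sketched in plain Ruby, with a `Mutex` standing in for the row lock (the `Account` class below is a hypothetical stand-in, not the ActiveRecord model):

```ruby
# Plain-Ruby sketch: a Mutex plays the role of the row lock that
# ActiveRecord's lock! (SELECT ... FOR UPDATE) takes in the database.
class Account
  attr_reader :balance

  def initialize(balance)
    @balance = balance
    @lock = Mutex.new
  end

  # Withdraw only if funds are sufficient; the whole check-then-deduct
  # runs under the lock, so two threads can never both pass the check.
  def withdraw(amount)
    @lock.synchronize do
      return false if @balance < amount
      @balance -= amount
      true
    end
  end
end

account = Account.new(100)
threads = 5.times.map { Thread.new { account.withdraw(100) } }
successes = threads.map(&:value).count(true)
# Exactly one withdrawal succeeds; the balance never goes negative.
```

The point is that the balance check and the deduction happen under one lock; without it, two threads could both see a sufficient balance and both deduct.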

Yes, there is a way to implement a transaction in Rails. An example:
YourModel.transaction do
  rec1.save!
  rec2.save!
end
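The all-or-nothing behaviour can be illustrated in plain Ruby: snapshot the state, and restore it if anything inside the block raises (a toy `with_transaction` helper, not the ActiveRecord API):

```ruby
# Toy illustration of transactional all-or-nothing semantics:
# snapshot the state, apply changes, and restore the snapshot
# if anything inside the block raises (a stand-in for ROLLBACK).
def with_transaction(store)
  snapshot = store.dup
  yield store
  store                      # commit: keep the mutated store
rescue => e
  store.replace(snapshot)    # rollback: restore the snapshot
  raise e
end

store = { transaction: nil, history: nil }

begin
  with_transaction(store) do |s|
    s[:transaction] = "saved"
    raise "server fell over before history was written"
    s[:history] = "saved"    # never reached
  end
rescue RuntimeError
end
# After the failed "transaction", neither change is visible.
```

This is exactly the guarantee the question asks for: if saving the history fails, saving the transaction record is undone too.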

Related

Can a single SQL connection run concurrent transactions?

I'm trying to get my head around the issue of how multiple transactions interact with the underlying connections that an SQL client library has at its disposal. Some libraries abstract this so you don't have to think about how transactions are handled. However, I want to know what happens in this scenario:
Suppose we only have a single connection
Suppose that we want to run 2 transactions concurrently.
Is this possible over a single connection? Or, if we want concurrency, do we need at least as many connections as concurrent transactions?
To make myself clearer, I will write some pseudo-SQL-code, remember that this is running in a single connection:
BEGIN TRANSACTION 1
BEGIN TRANSACTION 2
WITH TRANSACTION 2 DO SOMETHING
WITH TRANSACTION 1 DO SOMETHING ELSE
COMMIT TRANSACTION 1
COMMIT TRANSACTION 2
Further questions
How is this handled by different vendors?
How is this handled by MySQL, specifically?
Is there another way of handling this problem that I'm not aware of?
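For MySQL specifically, `START TRANSACTION` is one of the statements that causes an implicit commit: issuing it while a transaction is already open silently commits the open one first. So the interleaving sketched above cannot happen over a single connection; each concurrent transaction needs its own connection. A toy connection model illustrating that implicit-commit rule (a hypothetical class, not a real driver):

```ruby
# Toy model of MySQL's implicit-commit rule: issuing START TRANSACTION
# while a transaction is already open commits the open one first.
class ToyMysqlConnection
  attr_reader :log

  def initialize
    @in_transaction = false
    @log = []
  end

  def begin_transaction
    if @in_transaction
      @log << "IMPLICIT COMMIT"   # MySQL commits the pending transaction
    end
    @in_transaction = true
    @log << "BEGIN"
  end

  def commit
    @log << "COMMIT"
    @in_transaction = false
  end
end

conn = ToyMysqlConnection.new
conn.begin_transaction   # transaction 1
conn.begin_transaction   # transaction 2: transaction 1 is silently committed
conn.commit
# conn.log == ["BEGIN", "IMPLICIT COMMIT", "BEGIN", "COMMIT"]
```

So "COMMIT TRANSACTION 1" in the pseudo-code would never get the chance to run as written; transaction 1 is already gone by then.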

What is the best way to lock records for processing in Rails?

It's easy with simple transactions, but with complex work it's not that clear.
Let's assume we have models with some relations; for example, take User and Account. In the controller we validate that the User has enough balance and that his Account is also valid. At that step we could query the record with pessimistic locking, but it wouldn't help, because we release the lock as soon as we receive the data.
Those validations take some time, and then we actually start doing the work: querying remote APIs, doing complex computations, whatever. While everything is in process we still need to hold the lock on both the User and Account models.
And here goes the question: what is the best way to lock the records? Should one add a mutex in the controller to cover validation and work? Or do another validation in an open transaction when the models are fetched again with pessimistic locking? Or maybe there's another way I missed?
Pseudo Code
index_controller.rb
def bet
  bet = Bet.new bet_params
  if bet.valid?
    BetService.make_bet(bet) #do actual work, and
  end
end
bet.rb
class Bet
  # some code
  validate :balance # self.user.lock!.balance > X
end
bet_service.rb
class BetService
  def self.make_bet(bet)
    # some long work
    ActiveRecord::Base.transaction do
      bet.user.lock!.balance -= X
    end
  end
end
You can use ActiveRecord transactions if I get your question right:
ActiveRecord::Base.transaction do
  # all queries in here are in a single SQL transaction
end
If there are long delays, I would say to use an optimistic lock. If there are no delays, then a pessimistic lock is better.
If you need to update information in a remote API with some data you validated locally, I would say to use a pessimistic lock.
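The optimistic variant (which Rails implements with a `lock_version` column) can be sketched as a compare-and-set with a retry loop; the class and method names below are illustrative, not a real API:

```ruby
# Sketch of optimistic locking: each record carries a version number,
# and an update only applies if the version is unchanged since the read,
# mirroring Rails' lock_version column.
class VersionedRecord
  attr_reader :balance, :version

  def initialize(balance)
    @balance = balance
    @version = 0
    @mutex = Mutex.new  # stands in for the atomicity of a single UPDATE
  end

  # Compare-and-set: succeeds only if nobody updated in the meantime.
  def update_balance(expected_version, new_balance)
    @mutex.synchronize do
      return false unless @version == expected_version
      @balance = new_balance
      @version += 1
      true
    end
  end
end

# Deduct with retries: re-read, recompute, attempt, repeat on conflict.
def deduct(record, amount)
  loop do
    v = record.version
    b = record.balance
    break if record.update_balance(v, b - amount)
  end
end

record = VersionedRecord.new(100)
threads = 10.times.map { Thread.new { deduct(record, 10) } }
threads.each(&:join)
# All ten deductions land, each conflict resolved by retrying.
```

Nothing is held while the "long work" runs; only the final write checks that the record is still in the state it was read in, which is why this suits the long-delay case.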

Updating account balances with mysql

I have a field on a User table that holds the account balance for the user. Users can perform a lot of actions with my service that will result in rapid changes to their balance.
I'm trying to use MySQL's SERIALIZABLE isolation level to make sure that multiple user actions will not update the value incorrectly. (Action A and action B simultaneously want to deduct 1 dollar from the balance.) However, I'm getting a lot of deadlock errors.
How do I do this correctly without getting all these deadlocks, and still keeping the balance field up to date?
Simple schema: a user has an id and a balance.
I'm using Doctrine, so I'm doing something like the following:
$con->beginTransaction();
$tx = $con->transaction;
$tx->setIsolation('SERIALIZABLE');
$user = UserTable::getInstance()->find($userId);
$user->setBalance($user->getBalance() + $change);
$user->save();
$con->commit();
First, trying to use the serializable isolation level on your transaction is a good idea. It means you know at least roughly what a transaction is, and that the isolation level is one of the biggest problems.
Note that SERIALIZABLE is not really true serializability. More on that in this previous answer, when you have some time to read it :-).
But the most important part is that you should consider automatic rollbacks of your transactions because of failed serializability a normal fact, and that the right thing to do is to build your application so that transactions can fail and be replayed.
One simple solution, and for accounting things I like this simple solution as we can predict all the facts, no surprises: perform table locks. This is not a fine and elegant solution, no row-level locks, just simple big table locks (always taken in the same order). After that you can do your operation as a single player and then release the locks. No multi-user concurrency on the rows of the tables, no magical next-key lock failures (see the previous link). This will certainly slow down your write operations, but if everybody takes the table locks in the same order you'll only get lock timeout problems, no deadlocks and no 'unserializable auto-rollback'.
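The "always in the same order" rule is the standard deadlock-avoidance discipline. A plain-Ruby sketch, with mutexes standing in for table locks and a helper that sorts the requested locks into a canonical order (all names here are illustrative):

```ruby
# Deadlock avoidance by global lock ordering: every writer takes the
# locks in the same fixed order, so no cycle of waiters can form.
LOCKS = { accounts: Mutex.new, history: Mutex.new }

# Acquire every named lock in a canonical (sorted) order,
# run the block, then release in reverse order.
def with_table_locks(*names)
  ordered = names.sort
  ordered.each { |n| LOCKS[n].lock }
  yield
ensure
  ordered.reverse_each { |n| LOCKS[n].unlock }
end

balance = 0
threads = 4.times.map do
  Thread.new do
    25.times do
      # Callers may name the tables in any order; the helper sorts
      # them, so these threads can never deadlock against each other.
      with_table_locks(:history, :accounts) { balance += 1 }
    end
  end
end
threads.each(&:join)
# 4 threads x 25 increments, all serialized by the lock pair.
```

If one writer took `accounts` then `history` while another took `history` then `accounts`, each could end up waiting on the other forever; sorting the lock names makes that cycle impossible.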
Edit
From your code sample, I'm not sure you can set the transaction isolation level after the begin. You should activate query logs on MySQL and see what is actually sent, then check that other transactions run by the CMS are not still at the serializable level.

MySQL table locking for a multi-user JSP/Servlets site

Hi, I am developing a site with JSP/Servlets running on Tomcat for the front end, with a MySQL db for the back end, accessed through JDBC.
Many users of the site can access and write to the database at the same time. My question is:
Do I need to explicitly take locks before each write/read access to the db in my code?
Or does Tomcat handle this for me?
Also, do you have any suggestions on how best to implement this? I have written a significant amount of JDBC code already without taking locks :/
I think you are thinking about transactions when you say "locks". At the lowest level, your database server already ensures that parallel reads and writes won't corrupt your tables.
But if you want to ensure consistency across tables, you need to employ transactions. Simply put, what transactions give you is an all-or-nothing guarantee. That is, if you want to insert an Order in one table and related OrderItems in another table, what you need is an assurance that if the insertion of OrderItems fails (in step 2), the changes made to the Order table (step 1) will also get rolled back. This way you'll never end up in a situation where a row in the Order table has no associated rows in OrderItems.
This, of course, is a very simplified description of what a transaction is. You should read more about it if you are serious about database programming.
In Java, you usually do transactions roughly with the following steps:
Set autocommit to false on your JDBC connection
Do several inserts and/or updates using the same connection
Call conn.commit() when all the inserts/updates that go together are done
If there is a problem somewhere during step 2, call conn.rollback()
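Those four steps can be mirrored with a stub connection object (a hypothetical `FakeConnection` in plain Ruby, not a real JDBC driver):

```ruby
# Stub connection mirroring the JDBC transaction pattern:
# autocommit off, do the work, commit on success, roll back on failure.
class FakeConnection
  attr_reader :committed, :rolled_back, :statements

  def initialize
    @autocommit = true
    @statements = []
    @committed = false
    @rolled_back = false
  end

  def autocommit=(flag)
    @autocommit = flag
  end

  def execute(sql)
    @statements << sql
  end

  def commit
    @committed = true
  end

  def rollback
    @rolled_back = true
  end
end

def run_in_transaction(conn)
  conn.autocommit = false   # step 1: take manual control of commits
  yield conn                # step 2: several inserts/updates
  conn.commit               # step 3: all succeeded, make them permanent
rescue
  conn.rollback             # step 4: something failed, undo everything
end

good = FakeConnection.new
run_in_transaction(good) do |c|
  c.execute("INSERT INTO orders ...")
  c.execute("INSERT INTO order_items ...")
end

bad = FakeConnection.new
run_in_transaction(bad) do |c|
  c.execute("INSERT INTO orders ...")
  raise "order_items insert failed"
end
# good commits both inserts; bad rolls back instead of committing.
```

The real JDBC calls are `conn.setAutoCommit(false)`, `conn.commit()` and `conn.rollback()`; the control flow is the same.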

How can I find out if there is a transaction open in MySQL?

How can I find out if there is a transaction open in MySQL? I need to start a new one if there is no transaction open, but I don't want to start a new one if there is one running, because that would commit the running transaction.
UPDATE:
I need to query the database in one method of my application, but that query could be called as part of a bigger transaction, or it should be a transaction on its own. Changing the application to track whether it has opened a transaction would be more difficult, as the query could be started from many pieces of code. Although it would be possible, I'm looking for a solution that would be faster to implement. A simple if statement in SQL would be effortless.
Thank you
I am assuming you are doing this as a one-off and not trying to establish something that can be done programmatically. You can get the list of currently active processes using: SHOW PROCESSLIST
http://dev.mysql.com/doc/refman/5.1/en/show-processlist.html
If you want something programmatic, then I would suggest taking an explicit lock on a table at the beginning of your transaction and releasing it at the end.