I am not sure if this is a behavior of the DB or of the application. If a transaction fails at a point where some records have already been added to a table with auto-generated IDs, say ID 22, and the transaction is rolled back, is there a way to re-use ID 22? That way the table is restored to what it was before the failed transaction, even with regard to the next auto ID.
Here is the code I have been running with the point of failure indicated:
transaction
{
    grC = ORMExecuteQuery( "FROM Relation1 WHERE somid=#form.someid#" );
    if( ArrayLen( grC ) GT 0 )
    {
        oGR.id = [];
        for( i = 1; i LTE ArrayLen( grC ); i = i + 1 )
        {
            grCNew = EntityNew( "Relation2" );
            grCNew.setFirstName( grC[i].getFirstname() );
            .........
            EntitySave( grCNew );
            ORMFlush();
            oGR.id[i] = grCNew.getID();
            EntityDelete( grC ); // <<--- POINT OF FAILURE: grC is the whole array; EntityDelete expects a single entity such as grC[i]
        }
    }
}
How can I prevent losing IDs in Relation2 whenever there's a transaction rollback due to a failure to delete the corresponding entity/record in Relation1? Is it the database I should change, or my application?
From the MySQL documentation: there will still be gaps. I'm not sure what your use case is, but the gaps shouldn't be a problem. If you need consecutive numbers then you should generate them another way, e.g. outside MySQL in PHP or another language. Good luck.
http://dev.mysql.com/doc/refman/5.0/en/innodb-auto-increment-handling.html
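A quick way to see this behavior (a sketch; the table name and values are made up):

CREATE TABLE t (id INT AUTO_INCREMENT PRIMARY KEY, val INT) ENGINE=InnoDB;
START TRANSACTION;
INSERT INTO t (val) VALUES (1);  -- InnoDB allocates id 1
ROLLBACK;                        -- the rollback does not give id 1 back
INSERT INTO t (val) VALUES (2);  -- this row gets id 2, leaving a permanent gap

So the counter never rolls back; if gapless IDs matter, assign them in the application.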
I have a classic balance update concurrency situation:
using (var context = GenerateContext())
{
    var account = context.Accounts.First(x => x.id == accountId);
    var balanceBefore = account.balance;
    // ... do some other stuff ...
    account.balance = balanceBefore + depositAmount;
    context.SubmitChanges();
}
I would like to lock the whole table for reads/writes during this update.
Can it be done with Linq2SQL?
Edit:
So I tried the following, which does lock the table during the transaction, but it doesn't perform the update - the balance never changes.
using (var context = GenerateContext())
{
    context.Connection.Open();
    context.Transaction = context.Connection.BeginTransaction(IsolationLevel.Serializable);
    var account = context.ExecuteQuery<Account>("SELECT TOP 1 * FROM [Account] WHERE [Id] = 10 WITH (TABLOCKX)").First();
    var balanceBefore = account.balance;
    // ... do some other stuff ...
    account.balance = balanceBefore + depositAmount;
    context.SubmitChanges();
}
What am I doing wrong?
To start with, balance updates are best handled on the back end:
UPDATE Account
SET balance += @deposit
OUTPUT inserted.*
WHERE ID = @id;
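If you want to issue that statement from the client, Linq2SQL's ExecuteQuery can run it directly and map the OUTPUT row back to the entity. A sketch (ExecuteQuery substitutes the parameters for the {0}/{1} placeholders; the Account type is the one from the question):

// Atomic read-modify-write on the server; no window for a lost update.
var updated = context.ExecuteQuery<Account>(
    "UPDATE Account SET balance += {0} OUTPUT inserted.* WHERE ID = {1}",
    depositAmount, accountId).First();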
But let's say you can only code logic on the client. Even then, you should use optimistic concurrency; its benefits under load are too big to dismiss. While pessimistic concurrency is easier to code (there is no update failure due to concurrency to handle), it performs poorly under load.
Linq2SQL supports optimistic concurrency, see Optimistic Concurrency: Overview. The key is the UpdateCheck attribute. However, handling concurrency violations in the app is trickier than just detecting them, and the topic depends entirely on the actual app logic.
For the extremely rare case when optimistic concurrency is not appropriate, the first solution to try is ... still optimistic concurrency, but augmented with application locks.
Even if, say, the world would come to an end if you used optimistic concurrency and you decide that you absolutely must take the pessimistic route, use (UPDLOCK, ROWLOCK) hints and have an appropriate index on ID.
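To make the optimistic route concrete, here is a minimal sketch of detecting and retrying a conflict (the retry-once policy is an assumption; ConflictMode, RefreshMode and ChangeConflictException are standard System.Data.Linq types):

using (var context = GenerateContext())
{
    var account = context.Accounts.First(x => x.id == accountId);
    account.balance += depositAmount;
    try
    {
        context.SubmitChanges(ConflictMode.FailOnFirstConflict);
    }
    catch (ChangeConflictException)
    {
        // Another writer changed the row after we read it: re-read the
        // current values, re-apply the deposit, and retry once.
        context.Refresh(RefreshMode.OverwriteCurrentValues, account);
        account.balance += depositAmount;
        context.SubmitChanges();
    }
}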
Okay, I managed to make it work.
I had to encapsulate everything in a TransactionScope and apply TransactionOptions with the isolation level to the transaction scope.
It now looks like this:
using (var context = GenerateContext())
{
    var txnOptions = new TransactionOptions();
    txnOptions.IsolationLevel = System.Transactions.IsolationLevel.Serializable;

    using (var txnScope = new TransactionScope(TransactionScopeOption.Required, txnOptions))
    {
        var account = context.ExecuteQuery<Account>("SELECT TOP 1 * FROM [Account] WITH (TABLOCKX) WHERE [Id] = 10").First();
        var balanceBefore = account.balance;
        // ... do some other stuff ...
        account.balance = balanceBefore + depositAmount;
        context.SubmitChanges();
        txnScope.Complete();
    }
}
I've spent three days trying to figure this out; any help would be appreciated. I'm trying to open a few tables (stored in InnoDB format) within a UDF in MySQL. I'm able to open them if I create a new THD instance and set it to be the "current thd". However, I'm having trouble closing these tables properly. The code I use to open the tables looks like this:
THD *thd = new THD;
thd->thread_stack = (char*) &thd;
thd->set_db(db_name, strlen(db_name));
my_pthread_setspecific_ptr(THR_THD, thd);

const unsigned int NUMBER_OF_TABLES = 5;
char* tableNames[NUMBER_OF_TABLES];
... set table names here ...

TABLE_LIST tables[NUMBER_OF_TABLES];
for (unsigned int i = 0; i < NUMBER_OF_TABLES; i++)
{
    tables[i].init_one_table((char*)db_name, strlen(db_name), tableNames[i],
                             strlen(tableNames[i]), tableNames[i], TL_WRITE_ALLOW_WRITE);
    // chain the entries so open_and_lock_tables sees one list
    if (i != NUMBER_OF_TABLES-1)
        tables[i].next_local = tables[i].next_global = tables + 1 + i;
}

if (open_and_lock_tables(thd, tables, false, MYSQL_LOCK_IGNORE_TIMEOUT))
{
    goto end;
}
I was able to open all the tables using the above code block. However, when I finished using them, I could not close them because an assertion fails. I would appreciate it if anyone could help with this. I use the following code block to close the tables:
thd->cleanup_after_query();
close_thread_tables(thd);
delete thd;
The failed assertion has to do with thd->transaction.stmt.is_empty().
The is_empty() method simply checks whether ha_list == NULL.
How can I properly close these tables?
I'm developing on Windows 7 with MySQL 5.6.
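For what it's worth, the assertion is checking that the statement transaction (thd->transaction.stmt) has been ended before the tables are closed. Here is a hedged sketch of a close sequence, assuming MySQL 5.6 internals (trans_commit_stmt(), close_thread_tables() and the MDL release all exist in the 5.6 source, but the required ordering may differ between versions):

trans_commit_stmt(thd);                         // end the statement transaction so ha_list becomes NULL
close_thread_tables(thd);                       // now the is_empty() assertion should hold
thd->mdl_context.release_transactional_locks(); // release metadata locks taken by open_and_lock_tables
my_pthread_setspecific_ptr(THR_THD, NULL);      // detach the THD this code installed
delete thd;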
The LockService documentation (https://developers.google.com/apps-script/service_lock) states: "getPublicLock() - Gets a lock that prevents concurrent access to a section of code by simultaneous executions for the current user".
So my question is about the phrase "section of code". If I have multiple sections of code that use LockService.getPublicLock(), are they essentially independent locks?
For example:
function test1() {
  var lock = LockService.getPublicLock();
  if (lock.tryLock(10000)) {
    // Do some critical stuff
    lock.releaseLock();
  }
}

function test2() {
  var lock = LockService.getPublicLock();
  if (lock.tryLock(10000)) {
    // Do some critical stuff
    lock.releaseLock();
  }
}
If I have two invocations of my script executing concurrently, with one user running test1() and another user running test2(), will they both succeed? Or, as alluded to in this post: http://googleappsdeveloper.blogspot.co.uk/2011/10/concurrency-and-google-apps-script.html, are the locks simply at the script level, so that for this scenario only one of test1() or test2() would succeed, but not both?
If it is truly as the documentation states, and both will succeed, what denotes a 'section of code'? Is it the lines on which LockService.getPublicLock() appears, or the surrounding function?
There is only one public lock and only one private lock.
If you wish to have several locks you'll need to implement some sort of named lock service yourself. An example below, using the script database functionality:
var validTime = 60*1000;      // maximum number of milliseconds for which a lock may be held
var lockType = "Named Locks"; // just a type in the database to identify these entries

function getNamedLock( name ) {
  return {
    locked: false,
    db: ScriptDb.getMyDb(),
    key: { type: lockType, name: name },
    lock: function( timeout ) {
      if ( this.locked ) return true;
      if ( timeout === undefined ) timeout = 10000;
      var endTime = Date.now() + timeout;
      while ( (this.key.time = Date.now()) < endTime ) {
        this.key = this.db.save( this.key );
        if ( this.db.query(
               { type: lockType,
                 name: this.key.name,
                 time: this.db.between( this.key.time - validTime, this.key.time + 1 ) }
             ).getSize() == 1 )
          return this.locked = true;          // no other or earlier key in the last valid time, so we have it
        this.db.remove( this.key );           // someone else has, or might be trying to get, this lock, so try again
        Utilities.sleep( Math.random()*200 ); // sleep randomly to avoid another collision
      }
      return false;
    },
    unlock: function () {
      if ( this.locked ) this.db.remove( this.key );
      this.locked = false;
    }
  };
}
To use this service, we'd do something like:
var l = getNamedLock( someObject );
if ( l.lock() ) {
// critical code, can use some fields of l for convenience, such as
// l.db - the database object
// l.key.time - the time at which the lock was acquired
// l.key.getId() - database ID of the lock, could be a convenient unique ID
} else {
// recover somehow
}
l.unlock();
Notes:
This assumes that the database operation db.save() is essentially indivisible - I think it must be, because otherwise there would be BIG trouble in normal use.
Because the time is in milliseconds we have to assume that it is possible for more than one task to try the lock with the same time stamp, otherwise the function could be simplified.
We assume that locks are never held for more than a minute (but you can change this) since the execution time limit will stop your script anyway.
Periodically you should remove all locks from the database that are more than a minute old, to save it getting cluttered with old locks from crashed scripts.
This question is old, and getPublicLock() is no longer available.
According to https://developers.google.com/apps-script/reference/lock, LockService now provides these three scopes:
getDocumentLock(): Gets a lock that prevents any user of the current document from concurrently running a section of code.
getScriptLock(): Gets a lock that prevents any user from concurrently running a section of code.
getUserLock(): Gets a lock that prevents the current user from concurrently running a section of code.
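For example, the script-scope replacement for the old getPublicLock() pattern looks like this (a sketch; the 10-second timeout is arbitrary):

function withScriptLock() {
  var lock = LockService.getScriptLock();
  lock.waitLock(10000); // throws an exception if the lock is not obtained within 10 seconds
  try {
    // Do some critical stuff
  } finally {
    lock.releaseLock();
  }
}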
I have a need to select a set of records which contain an IsLocked field. I then need to immediately update the IsLocked value from false to true, within a transaction, such that other programs do not fetch those already-fetched records for processing, but can fetch other, unlocked records.
Here's the code I have so far. Is this correct? And how do I do the update - visit each record in a foreach, update the value, and then SubmitChanges()? It seems that when I run the code below, I lose the collection associated with emails, and thus cannot do the processing I need to do. Does closing the transaction early result in losing the records loaded?
To focus the question: how does one load and update records in a transaction, close the transaction so as not to hold locks any longer than necessary, process the records, and then save subsequent changes back to the database?
using (ForcuraDaemonDataContext ctx = new ForcuraDaemonDataContext(props.EmailLogConnectionString))
{
    System.Data.Common.DbTransaction trans = null;
    IQueryable<Email> emails = null;
    try
    {
        // get unlocked & unsent emails, then immediately lock the set for processing
        ctx.Connection.Open();
        trans = ctx.Connection.BeginTransaction(IsolationLevel.ReadCommitted);
        ctx.Transaction = trans;
        emails = ctx.Emails.Where(e => !(e.IsLocked || e.IsSent));
        /// ???
        ctx.SubmitChanges();
        trans.Commit();
    }
    catch (Exception ex)
    {
        if (trans != null)
            trans.Rollback();
        eventLog.WriteEntry("Error. Could not lock and load emails.", EventLogEntryType.Information);
    }
    finally
    {
        if (ctx.Connection.State == ConnectionState.Open)
            ctx.Connection.Close();
    }
    // more stuff on the emails here
}
Please see this question for an answer to a similar, simpler form of the problem.
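For what it's worth, the reason the emails collection seems to disappear is deferred execution: the Where query does not run until emails is enumerated, which in the code above happens only after the transaction is committed and the connection closed. One way to avoid holding a transaction open at all is to claim the rows in a single atomic UPDATE and materialize the results immediately. A hedged sketch (not the linked answer; the [Email] table name and bit columns are assumptions based on the question):

using (var ctx = new ForcuraDaemonDataContext(props.EmailLogConnectionString))
{
    // One atomic statement: flag the unlocked, unsent rows and return them.
    // READPAST lets a concurrent worker skip rows another worker just locked.
    var emails = ctx.ExecuteQuery<Email>(
        @"UPDATE [Email] WITH (READPAST)
          SET IsLocked = 1
          OUTPUT inserted.*
          WHERE IsLocked = 0 AND IsSent = 0").ToList(); // ToList() forces execution now

    // ... process the emails here, with no transaction held open ...

    ctx.SubmitChanges(); // persist any later changes (e.g. marking IsSent)
}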
I've got two processes which sometimes run concurrently.
The first one is a bulk insert:
using (var connection = new SqlConnection(connectionString))
{
    var bulkCopy = new SqlBulkCopy(connectionString, SqlBulkCopyOptions.TableLock)
        { DestinationTableName = "CTRL.CTRL_DATA_ERROR_DETAIL" };
    connection.Open();
    try
    {
        bulkCopy.WriteToServer(errorDetailsDt);
    }
    catch (Exception e)
    {
        throw new Exception("Error Bulk writing Error Details for Data Stream ID: " + dataStreamId +
                            " Details of Error : " + e.Message);
    }
    connection.Close();
}
The second one is a bulk update from a stored procedure:
--Part of code from Stored Procedure--
UPDATE [CTL].[CTRL].[CTRL_DATA_ERROR_DETAIL]
SET [MODIFIED_CONTAINER_SEQUENCE_NUMBER] = @containerSequenceNumber
   ,[MODIFIED_DATE] = GETDATE()
   ,[CURRENT_FLAG] = 'N'
WHERE [DATA_ERROR_KEY] = @DataErrorKey
AND [CURRENT_FLAG] = 'Y'
The first process runs for a while (depending on the incoming record load), and the second process always ends up as the deadlock victim.
Should I set SqlBulkCopyOptions.TableLock so that the second process waits until the resources are released?
By default SqlBulkCopy doesn't take exclusive locks, so while it's doing its thing and inserting data, your update process kicks off and causes a deadlock. To get around this you could instruct SqlBulkCopy to take an exclusive table lock, as you already suggested, or you can set the batch size of the bulk insert to a reasonable number.
If you can get away with it, I think the table lock idea is the best option.
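If you go the batch-size route instead, the change is a single property. A sketch (1000 is an arbitrary value to tune for your load; by default each batch is committed as it arrives, so locks are not held for the entire load):

using (var bulkCopy = new SqlBulkCopy(connectionString))
{
    bulkCopy.DestinationTableName = "CTRL.CTRL_DATA_ERROR_DETAIL";
    bulkCopy.BatchSize = 1000; // send and commit 1000 rows at a time instead of one giant batch
    bulkCopy.WriteToServer(errorDetailsDt);
}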