How can we use lock APIs in Infinispan without using transactions - configuration

We can configure locking in the infinispan_config.xml file. So, in code, can we use the lock for the get and put methods on the cache without using a transaction?
OR
Can I use locking in a non-transactional cache in Infinispan?
I.e., I would like to know if there is any method to unlock a key locked by cache.getAdvancedCache().lock(key) without using a transaction manager commit.

You cannot use the locking API without transactions in Infinispan.
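For reference, this is roughly what the transactional usage looks like; a minimal sketch, assuming a cache named "txCache" configured as transactional with pessimistic locking in infinispan_config.xml (the cache name and key are made up for illustration). The lock taken by lock(key) is only released by the transaction's commit or rollback; there is no standalone unlock method:

import javax.transaction.TransactionManager;
import org.infinispan.Cache;
import org.infinispan.manager.DefaultCacheManager;

public class InfinispanLockSketch {
    public static void main(String[] args) throws Exception {
        DefaultCacheManager cm = new DefaultCacheManager("infinispan_config.xml");
        try {
            Cache<String, String> cache = cm.getCache("txCache"); // assumed transactional cache
            TransactionManager tm = cache.getAdvancedCache().getTransactionManager();
            tm.begin();
            try {
                cache.getAdvancedCache().lock("key1"); // key lock lives inside the tx
                cache.put("key1", "value1");
                tm.commit();   // commit releases the lock
            } catch (Exception e) {
                tm.rollback(); // rollback also releases the lock
                throw e;
            }
        } finally {
            cm.stop();
        }
    }
}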

Related

Long-running transactions in Slick

I'm working on an akka-http/slick web service, and I need to do the following in a transaction:
Insert a row in a table
Call some external web service
Commit the transaction
The web service I need to call is sometimes really slow to respond (let's say ~2 seconds).
I'm worried that this might keep the SQL connection open for too long, and that this will exhaust Slick's connection pool and affect other, independent requests.
Is this a possibility? Or does Slick do something to make sure this "idle" mid-transaction connection does not starve the pool?
If it is something I should be worried about - is there anything I can do to remedy this?
If it matters, I'm using MySQL with TokuDB.
The Slick documentation seems to say that this will be a problem.
The use of a transaction always implies a pinned session.
And
You can use withPinnedSession to force the use of a single session, keeping the existing session open even when waiting for non-database computations.
From: http://slick.lightbend.com/doc/3.2.0/dbio.html#transactions-and-pinned-sessions

Ehcache non-transactional context with transactional cache

I have an Ehcache cache instance configured as transactionalMode="local".
Now, when I try to put an element in said cache outside of a transaction, I get
net.sf.ehcache.transaction.TransactionException: transaction not started.
Does this mean that every call on transactional cache instance needs to be in a transaction context?
I'm doing some custom cache pre-loading on startup, and I don't want Ehcache transaction (and copyOnRead/Write) overhead. Also, since I'll be dealing with logically immutable objects, I'd like to be able to read them from cache without transaction scope, if possible.
Do you really need to use local transactions in the first place? I.e., do you need to put multiple cache entries atomically in a single operation?
In any case, if you use transactionalMode="local", you're kind of stuck having to perform all your cache operations within a transaction boundary (even reads).
But if you need more granularity, I'd recommend you look at Ehcache explicit locking, which can be used as a custom alternative to XA transactions or local transactions (without having to specify transactionalMode in your ehcache config). More at http://ehcache.org/documentation/apis/explicitlocking
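As a minimal sketch, assuming Ehcache 2.x and a cache named "myCache" declared without transactionalMode (the cache name and key are made up for illustration), the explicit read/write locks look like this:

import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;
import net.sf.ehcache.Element;

public class ExplicitLockSketch {
    public static void main(String[] args) {
        CacheManager manager = CacheManager.getInstance();
        Cache cache = manager.getCache("myCache"); // assumed non-transactional cache

        String key = "user:42";
        cache.acquireWriteLockOnKey(key);          // blocks until the write lock is free
        try {
            cache.put(new Element(key, "some value"));
        } finally {
            cache.releaseWriteLockOnKey(key);      // always release, even on failure
        }

        cache.acquireReadLockOnKey(key);           // many readers may hold this at once
        try {
            Element e = cache.get(key);
        } finally {
            cache.releaseReadLockOnKey(key);
        }

        manager.shutdown();
    }
}

This gives you per-key mutual exclusion without the copyOnRead/copyOnWrite and transaction overhead mentioned above.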
Hope that helps.

Implementing locking in MySQL

I want to use MySQL row-level locks. I can't lock the complete table. I want to avoid two processes processing two different messages for the same server at the same time.
What I thought is that I could have a table called server_lock, and when a process starts working on a server it inserts a row into that table.
The problem with this approach is that if the application crashes, we need to remove the lock manually.
Is there a way I can take a row-level lock such that the lock is released if the application crashes?
Edit
I am using C++ as the language.
My application is similar to a message queue, but the difference is that there are two queues, each populated by its own process. If two actions belong to the same object and both processes handle that object at the same time, it may result in wrong data. So I want a locking mechanism between these two queues so that the two processors don't modify the same object at the same time.
I can think of two ways:
Implement some error handler in your program that removes the lock. Without knowing anything about your program it is hard to say how to do this, but most languages have some way to run cleanup work before exiting after a crash. This is dangerous, because a crash happens when something is not right; if you continue to do any work, it is possible that you corrupt the database or something like that.
Periodically update the lock. Add a thread to your program that periodically refreshes the lock, or refresh the lock in some loop you are already running. Then, when a lock has not been updated in a while, you know that it belonged to a process that crashed (see the sketch below).
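You mentioned C++, but the second idea is language-agnostic, so here is a rough sketch of such a heartbeat/lease lock in Java/JDBC (the server_lock schema, the 'server' row name, and the 30-second timeout are all assumptions for illustration):

// Assumed schema:
//   CREATE TABLE server_lock (name VARCHAR(64) PRIMARY KEY,
//                             holder VARCHAR(64) NULL,
//                             heartbeat TIMESTAMP NULL);
import java.sql.*;

public class LeaseLockSketch {
    // Try to claim the lock row; a heartbeat older than 30 seconds is
    // treated as a lock abandoned by a crashed process.
    static boolean tryAcquire(Connection conn, String holder) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(
                "UPDATE server_lock SET holder = ?, heartbeat = NOW() " +
                "WHERE name = 'server' AND (holder IS NULL " +
                "OR heartbeat < NOW() - INTERVAL 30 SECOND)")) {
            ps.setString(1, holder);
            return ps.executeUpdate() == 1; // one row updated means we hold the lock
        }
    }

    // Call this every few seconds from the holder to keep the lease alive.
    static void heartbeat(Connection conn, String holder) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(
                "UPDATE server_lock SET heartbeat = NOW() " +
                "WHERE name = 'server' AND holder = ?")) {
            ps.setString(1, holder);
            ps.executeUpdate();
        }
    }

    // Call this when finished, to hand the lock back without waiting for the timeout.
    static void release(Connection conn, String holder) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(
                "UPDATE server_lock SET holder = NULL " +
                "WHERE name = 'server' AND holder = ?")) {
            ps.setString(1, holder);
            ps.executeUpdate();
        }
    }
}

Because each UPDATE is atomic, two processes cannot both see the lock as free and claim it at the same time.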

Multiple processes accessing Django db backend; records not showing up until manually calling _commit

I have a Django project in which multiple processes are accessing the backend MySQL db. One process is creating records, while a second process is trying to read those records. I am having an issue where the second process can't actually find the records until I manually call connection._commit().
This question has been asked before:
caching issues in MySQL response with MySQLdb in Django
The OP stated that he solved the problem, but didn't quite explain how. Can anyone shed some light on this? I'd like to be able to access the records without manually calling _commit().
Thanks,
Asif
He said:
Django's autocommit isn't an actual autocommit in the db.
So, you have to ensure that autocommit is set at the db level. Otherwise, because of transaction isolation, processes will not see changes made by a different process (different connection), until a commit is done. AFAIK this is not especially a Django issue, other than the lack of clarity in the docs about Django autocommit != db autocommit.
Update: Paraphrasing slightly from the MySQL docs:
REPEATABLE READ is the default isolation level for InnoDB. For consistent reads, there is an important difference from the READ COMMITTED isolation level: all consistent reads within the same transaction read the snapshot established by the first read. (My emphasis.)
So, with REPEATABLE READ, subsequent reads only see the snapshot established by the first read. With READ COMMITTED, each read creates and reads its own fresh snapshot, so you do see subsequent committed updates from other transactions. So - in answer to your comment - your change to the transaction isolation level is correct.
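The question is Django-specific, but this behavior is a MySQL/InnoDB property and can be reproduced from any client. A hedged JDBC sketch (the connection URL, credentials, and records table are placeholders) of what changing the level does:

import java.sql.*;

public class IsolationSketch {
    public static void main(String[] args) throws Exception {
        Connection reader = DriverManager.getConnection(
                "jdbc:mysql://localhost/test", "user", "pass"); // placeholders
        reader.setAutoCommit(false);

        // InnoDB's default is REPEATABLE READ: every consistent read in this
        // transaction would reuse the snapshot taken by the first read.
        // READ COMMITTED takes a fresh snapshot for each statement instead:
        reader.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);

        try (Statement st = reader.createStatement()) {
            long before;
            try (ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM records")) {
                rs.next();
                before = rs.getLong(1);
            }
            // ...another process inserts a record and commits here...
            try (ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM records")) {
                rs.next();
                long after = rs.getLong(1);
                // Under READ COMMITTED, 'after' sees the new record;
                // under REPEATABLE READ, it would still equal 'before'.
            }
        } finally {
            reader.rollback();
            reader.close();
        }
    }
}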
Are you running the processes as views? If so, they're probably committing when the request is finished processing, but it sounds like you're running these processes concurrently. If you run the process outside of a view, they should commit on each save.

How To Mutex Across a Network?

I have a desktop application that runs on a network and every instance connects to the same database.
So, in this situation, how can I implement a mutex that works across all running instances that are connected to the same database?
In other words, I don't want two or more instances to run the same function at the same time. If one is already running the function, the other instances shouldn't have access to it.
PS: A database transaction won't solve it, because the function I want to mutex doesn't use the database. I've mentioned the database just because it can be used to exchange information across the running instances.
PS2: The function takes about ~30 minutes to complete, so if a second instance tries to run the same function I would like to display a nice message that it can't be performed right now because computer 'X' is already running that function.
PS3: The function has to be processed on the client machine, so I can't use stored procedures.
I think you're looking for a database transaction. A transaction will isolate your changes from all other clients.
Update:
You mentioned that the function doesn't currently write to the database. If you want to mutex this function, there will have to be some central location to store the current mutex holder. The database can work for this -- just add a new table that includes the computername of the current holder. Check that table before starting your function.
I think there may be some confusion in your question, though. Mutexes should be about protecting resources. If your function is not accessing the database, then what shared resource are you protecting?
Put the code inside a transaction, either in the app or, better, inside a stored procedure, and call the stored procedure.
The transaction mechanism will isolate the code between the callers.
Conversely, consider a message queue. As mentioned, the DB should manage all of this for you, either with transactions or with serial access to tables (à la MyISAM).
In the past I have done the following:
Create a table that basically has two fields, function_name and is_running.
I don't know what RDBMS you are using, but most have a way to lock individual records for update. Here is some pseudocode based on Oracle:
BEGIN;
SELECT is_running FROM function_table WHERE function_name = 'foo' FOR UPDATE;
-- Check here to see if it is running; if not, you can set is_running to 'Y'
UPDATE function_table SET is_running = 'Y' WHERE function_name = 'foo';
COMMIT;
Now I don't have the Oracle PL/SQL docs with me, but you get the idea. The FOR UPDATE clause locks the record from the read until the commit, so other processes will block on that SELECT statement until the current process commits.
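In Java/JDBC terms (the table and column names follow the pseudocode above; the exact locking SQL may vary by RDBMS), a hedged version might look like:

import java.sql.*;

public class DbMutexSketch {
    // Returns true if we claimed the "mutex" row, false if another
    // machine is already running the function.
    static boolean tryStart(Connection conn, String function) throws SQLException {
        conn.setAutoCommit(false);
        try (PreparedStatement sel = conn.prepareStatement(
                "SELECT is_running FROM function_table " +
                "WHERE function_name = ? FOR UPDATE")) {
            sel.setString(1, function);
            try (ResultSet rs = sel.executeQuery()) {
                if (!rs.next() || "Y".equals(rs.getString(1))) {
                    conn.rollback(); // row missing, or someone else is running
                    return false;
                }
            }
        }
        try (PreparedStatement upd = conn.prepareStatement(
                "UPDATE function_table SET is_running = 'Y' " +
                "WHERE function_name = ?")) {
            upd.setString(1, function);
            upd.executeUpdate();
        }
        conn.commit(); // releases the row lock; the flag now marks us as running
        return true;
    }

    // Call when the function finishes, to clear the flag for the next machine.
    static void finish(Connection conn, String function) throws SQLException {
        try (PreparedStatement upd = conn.prepareStatement(
                "UPDATE function_table SET is_running = 'N' " +
                "WHERE function_name = ?")) {
            upd.setString(1, function);
            upd.executeUpdate();
            conn.commit();
        }
    }
}

Note that the SELECT ... FOR UPDATE row lock only guards the check-and-set itself; the is_running flag is what excludes other machines for the full run, so a crashed client will leave it set unless you add some timeout or heartbeat handling on top.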
You can use Terracotta to implement such functionality, if you've got a Java stack.
Even if your function does not currently use the database, you could still solve the problem with a specific table for the purpose of synchronizing this function. The specifics would depend on your DB and how it handles isolation levels and locking. For example, with SQL Server you would set the transaction isolation to repeatable read, read a value from your locking row and update it inside a transaction. Don't commit the transaction until your function is done. You can also use explicit table locks in a transaction on most databases which might be simpler. This is probably the simplest solution given you are already using a database.
If you do not want to rely on the database for whatever reason, you could write a simple service that accepts TCP connections from your clients. Each client would request permission to run and would report back when done. The server would ensure that only one client gets permission to run at a time. Dead clients would eventually drop the TCP connection and be detected, as long as you have the correct keep-alive setting.
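A rough sketch of such a permission service in Java (the port number and the GRANTED/BUSY/DONE wire protocol are made up for illustration); the key point is that a client that dies simply drops its socket, which releases the mutex:

import java.io.*;
import java.net.*;
import java.util.concurrent.Semaphore;

public class MutexServerSketch {
    private static final Semaphore mutex = new Semaphore(1);

    public static void main(String[] args) throws IOException {
        try (ServerSocket server = new ServerSocket(9099)) { // arbitrary port
            while (true) {
                Socket client = server.accept();
                new Thread(() -> serve(client)).start();
            }
        }
    }

    private static void serve(Socket client) {
        try (Socket c = client;
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(c.getInputStream()));
             PrintWriter out = new PrintWriter(c.getOutputStream(), true)) {
            if (!mutex.tryAcquire()) {
                out.println("BUSY");    // caller can show the "already running" message
                return;
            }
            try {
                out.println("GRANTED"); // caller may now run its ~30 minute function
                in.readLine();          // returns "DONE", or null if the client died
            } finally {
                mutex.release();        // reached on DONE, crash, or disconnect
            }
        } catch (IOException e) {
            // a dropped connection still releases the mutex via the finally block
        }
    }
}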
The message queue solution suggested by Xepoch would also work. You could use something like MSMQ or Java Message Queue and have a single message that would act as a run token. All your clients would request the message and then repost it when done. You risk a deadlock if a client dies before reposting, so you would need to devise some logic to detect this, and it might get complicated.