How can I implement transaction rollback in WP7? Currently, after an insert or delete I call SubmitChanges; if I tombstone the app at that moment, it exits. How can I handle this situation? I am planning to use try/catch, and if any exception is caught I need to roll back the changes. Can anyone please help me implement this in WP7?
Why do you need to roll back when the application becomes tombstoned? Technically your application is not aware of when it is tombstoned; you are only aware of when it becomes de-activated. See the following lifecycle diagram:
(The image above is from the blog post http://www.scottlogic.co.uk/blog/colin/2011/10/a-windows-phone-7-1-mango-mvvm-tombstoning-example/ which describes the lifecycle in detail)
Whenever your application is de-activated, you can handle the Deactivated event. From MSDN:
Applications are given 10 seconds to complete the Deactivated handler
This gives you the opportunity to clean up, save state, and perform other activities before your application becomes de-activated.
I presume you are committing your transaction when your application state changes? Does the commit run on the UI thread, i.e. is it blocking? If so, you do not need to do anything else (other than ensure it does not take more than 10 seconds). If your commit runs on a background thread, you will have to ensure that your Deactivated event handler blocks until the commit is complete.
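As a minimal sketch of the blocking approach, a Deactivated handler in App.xaml.cs could flush pending changes synchronously. The `dataContext` field and `MyDataContext` type are placeholders for whatever LINQ to SQL context your app uses:

```csharp
// App.xaml.cs -- a sketch, assuming a LINQ to SQL context field named
// dataContext of a hypothetical type MyDataContext.
private void Application_Deactivated(object sender, DeactivatedEventArgs e)
{
    // Runs on the UI thread and blocks; it must finish within the
    // 10-second limit quoted above.
    try
    {
        // Flush any pending inserts/deletes before tombstoning.
        dataContext.SubmitChanges();
    }
    catch (Exception)
    {
        // SubmitChanges wraps its work in an implicit transaction, so a
        // failed call leaves the database untouched. Discarding the
        // context effectively "rolls back" the unsaved in-memory changes.
        dataContext.Dispose();
        dataContext = new MyDataContext(connectionString);
    }
}
```

Because SubmitChanges already uses its own database transaction, an explicit rollback is usually unnecessary; the recovery step is simply to discard the stale in-memory change set.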
I'm trying to understand how partitions execute events when a retry policy is in place for the Event Hub. I can't find an answer to what happens to new events when one event fails and is retrying in the same partition of the Event Hub.
I'm guessing that the failed event shouldn't block new ones from executing, and that when it retries it should be put at the end of the partition, so any other events that arrived in the partition after the failed one should be executed in order without any blockage.
Can someone explain what is actually happening in a scenario like that?
Thanks.
It's difficult to answer precisely without some understanding of the application context. The below assumes the current generation of the Azure SDK for .NET, though conceptually the answer will be similar for others.
Retries during publishing are performed within the client, which treats each publishing operation as independent and isolated. When your application calls SendAsync, the client will attempt to publish the events and will apply its retry policy in the scope of that call. When the SendAsync call completes, you'll have a deterministic answer as to whether the call succeeded or failed.
If the SendAsync call throws, the retry policy has already been applied and either the exception was fatal or all retries were exhausted. The operation is complete and the client is no longer trying to publish those events.
If your application makes a single SendAsync call then, in the majority of cases, it will understand the outcome of the publishing operation and the order of events is preserved. If your application is calling SendAsync concurrently, then it is possible that events will arrive out of order - either due to network latency or retries.
While the majority of the time, the outcome of a call is fully deterministic, some corner cases do exist. For example, if the SendAsync call encounters a timeout, it is ambiguous whether or not the service received the events. The client will retry, which may produce duplicates. If your application sees a TimeoutException surface, then it cannot be sure whether or not the events were successfully published.
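The behavior described above can be sketched with the current Azure.Messaging.EventHubs client; the connection string and hub name are placeholders:

```csharp
// Sketch only: retries happen *inside* SendAsync, governed by the
// client's retry policy. When the call returns or throws, the outcome
// of this publishing operation is settled.
await using var producer = new EventHubProducerClient(
    "<< CONNECTION STRING >>", "<< EVENT HUB NAME >>");

var events = new[] { new EventData(new BinaryData("event-1")) };

try
{
    await producer.SendAsync(events);
    // Success: the service accepted the events.
}
catch (EventHubsException ex) when (!ex.IsTransient)
{
    // Fatal error: the exception was not retriable, or retries
    // were exhausted; the client is no longer trying to publish.
}
catch (TimeoutException)
{
    // Ambiguous corner case: the service may or may not have
    // received the events. Resending here can introduce duplicates.
}
```

Note how the TimeoutException branch matches the ambiguity described above: the only way to preserve strict ordering in that case is to stop publishing until the uncertainty is resolved.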
We have often run into problems with custom Transaction Processors: when the TP crashes or is unable to connect to the Sawtooth nodes, we get a QUEUE_FULL error, and from there on all transactions go into the PENDING state, including intkey / settings.
Is there a way to remove PENDING transactions and clean up the queue, or any CLI that can clean up the batches/transactions that are in the queue?
The Hyperledger Sawtooth validator attempts to execute transactions in the order they arrive, when there is a call from the consensus engine. The question discusses 2 distinct features; happy to help further.
Feature 1: The solution for a Transaction Processor crash. The Transaction Processor is expected to execute a transaction from the queue when the consensus engine asks the validator to build a block. If for some reason the Transaction Processor is unable to process the message, its result is still unknown to the validator, so the validator keeps the transaction in the pending state for as long as it can be scheduled for execution. The right way to de-queue it is to execute it: either put it in a block if it is valid, or remove it from the queue if it is invalid.
Solution: Check why the Transaction Processor is crashing; this is code you own. The validator expects one of the following responses: the transaction is valid, the transaction is invalid, or the transaction couldn't be evaluated and needs a retry.
Feature 2: Removing pending batches from the queue deliberately, without telling Hyperledger Sawtooth about it. The pending queue is held in memory; it is not saved to disk. The crude workaround is therefore to restart that particular validator node instance.
Note: This may not be possible in certain cases because of the deployment model chosen. Ensure your network and deployment can handle node restart scenarios before doing it. There could be bad consequences if the TP crashed on one of the nodes instead of all of them: that particular validator would send a wrong result when reaching consensus, and depending on the consensus algorithm and the network size this error may be handled differently. The clean solution, however, is to restart the Transaction Processor.
Hope this answer helps! Happy blockchaining..
The System.Transactions.Transaction class has a TransactionCompleted event to which I can subscribe.
Is it possible to schedule a continuation after IDbContextTransaction.Commit is called?
Sure, I could call code explicitly after committing a transaction, but I have an overloaded version of SaveChanges where I want to detect an externally started transaction and add some actions to run after it is committed.
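For reference, the System.Transactions pattern mentioned above looks like the following; note that IDbContextTransaction itself exposes no equivalent event, so wrapping it is one option:

```csharp
using System;
using System.Transactions;

// Sketch: subscribe to the ambient transaction's completion event.
using (var scope = new TransactionScope())
{
    // The handler fires after the transaction finishes, whether it
    // committed or rolled back; inspect the status to tell which.
    Transaction.Current.TransactionCompleted += (sender, e) =>
    {
        if (e.Transaction.TransactionInformation.Status
                == TransactionStatus.Committed)
        {
            // continuation logic here
        }
    };

    // ... work enlisted in the transaction ...

    scope.Complete();
}
```

If the externally started transaction is a TransactionScope/ambient transaction, detecting it is a matter of checking whether Transaction.Current is non-null inside your SaveChanges overload and attaching the handler there.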
It's advised to call Abort() to let the foreground app know that the background agent was canceled intentionally. But how does the foreground app know that?
What's the actual difference between Abort() and NotifyComplete()? Does anyone know?
Your BackgroundAgent should always call NotifyComplete or Abort. It informs the OS that it can free the resources and allocate them to other processes.
NotifyComplete means that the task has completed successfully and the agent will fire in the future. Abort means that there was an error, and the agent won't be fired in the future unless you handle this in the foreground app. You can find more information at MSDN. There is also a good example:
The code for the agent is implemented by the application in a class that inherits from BackgroundAgent. When the agent is launched, the operating system calls OnInvoke(ScheduledTask). In this method, the application can determine which type of ScheduledTask it is being run as, and perform the appropriate actions. When the agent has completed its task, it should call NotifyComplete() or Abort() to let the operating system know that it has completed. NotifyComplete should be used if the task was successful. If the agent is unable to perform its task – such as a needed server being unavailable - the agent should call Abort, which causes the IsScheduled property to be set to false. The foreground application can check this property when it is running to determine whether Abort was called.
As it is said, in the foreground app you can check ScheduledAction.IsScheduled to see whether future invocations of the action are scheduled to occur (i.e. whether it completed successfully or failed).
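Both sides of that contract can be sketched briefly; the task name "MyTaskName" and the DoWork method are placeholders:

```csharp
// Agent side (sketch): inside a ScheduledTaskAgent subclass.
protected override void OnInvoke(ScheduledTask task)
{
    try
    {
        DoWork();           // hypothetical work for this agent
        NotifyComplete();   // success: the agent stays scheduled
    }
    catch (Exception)
    {
        Abort();            // failure: sets IsScheduled to false
    }
}
```

```csharp
// Foreground app side (sketch): detect whether the agent aborted.
var agent = ScheduledActionService.Find("MyTaskName");
if (agent != null && !agent.IsScheduled)
{
    // Abort() was called (or the schedule expired);
    // reschedule the task or inform the user here.
}
```

This is exactly the mechanism the MSDN excerpt describes: Abort flips IsScheduled to false, and the foreground app polls that property the next time it runs.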
We have a main orchestration with multiple sub-orchestrations. The root orchestration has transaction type None, so all the subs are of the same nature. Any exception is caught in a parent scope of the main orchestration, where we have steps such as logging. The orchestration is activated by a message from an application SQL database. So every time an exception occurs, say due to something intermittent like being unable to connect to a web service, we later re-trigger manually.
I'm looking at modifying the orchestration to be self-healing: from the exception catch block, it would reinitialize the messages based on conditions that indicate the issue was intermittent. For something like an application bug (a null reference), we would not want to resend the message, because the orchestration is never going to work.
There is a concept called compensation, but that is for transaction-based orchestrations: do n steps, and if any one fails, do m other steps (which would take an alternate action or clean up).
The only idea I have is to do a look-up based on keywords in the exception and decide whether to resend messages. But I want someone to challenge this or suggest a better approach.
I have always thought that it's better to handle failures offline. So if the orchestration fails, terminate it. But before you terminate, send a message out. This message will contain all the information necessary to recover the message processing if it turns out that there was a temporary problem which caused the failure. The message can be consumed by a "caretaker" process which is responsible for recovery.
This is similar to how the Erlang OTP framework approaches high availability. Processes fail quickly and caretaker processes make sure recovery happens.