I'm using a custom Jersey ExceptionMapper to map my unchecked exceptions into error responses (as described in the documentation). My problem is that the transaction is not rolled back: every DB modification made before the exception was thrown is persisted.
The same thing happens if, instead of using the ExceptionMapper, I throw a WebApplicationException.
How can I send an error response to the client preserving the normal behavior (rollback the transaction)?
I found a similar question here, but I don't use spring.
What you can do is use a RequestEventListener to manage the transaction throughout the lifetime of the request. You can listen for RequestEvent.Type values, which include events such as RESOURCE_METHOD_START, ON_EXCEPTION, RESOURCE_METHOD_FINISH, etc. You can begin the transaction at the start of request processing, then commit or roll back depending on whether processing succeeded or an exception was thrown.
This is pretty much what Dropwizard does with its @UnitOfWork annotation. You can see how it is all implemented in this package. Look at the UnitOfWorkApplicationEventListener; you'll see how they implement what I was talking about above.
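A minimal, framework-free sketch of that listener pattern (all class and method names here are stand-ins, not Jersey's actual API; in real code you would implement org.glassfish.jersey.server.monitoring.RequestEventListener and dispatch on the RequestEvent.Type in onEvent):

```java
import java.util.function.Supplier;

// Hypothetical sketch: begin a transaction when the resource method starts,
// roll back on exception, commit when the method finishes normally -- the
// same shape a Jersey RequestEventListener.onEvent implementation would have.
public class TxLifecycle {
    public enum Event { RESOURCE_METHOD_START, ON_EXCEPTION, RESOURCE_METHOD_FINISH }

    public static final class Tx {
        public String state = "NONE";
        void begin()    { state = "ACTIVE"; }
        void commit()   { if (state.equals("ACTIVE")) state = "COMMITTED"; }
        void rollback() { state = "ROLLED_BACK"; }
    }

    // Dispatch on the event type, as onEvent(RequestEvent) would.
    public static void onEvent(Tx tx, Event e) {
        switch (e) {
            case RESOURCE_METHOD_START:  tx.begin();    break;
            case ON_EXCEPTION:           tx.rollback(); break;
            case RESOURCE_METHOD_FINISH: tx.commit();   break; // no-op if already rolled back
        }
    }

    // Simulate one request whose resource method may throw.
    public static String processRequest(Tx tx, Supplier<String> resourceMethod) {
        onEvent(tx, Event.RESOURCE_METHOD_START);
        try {
            String result = resourceMethod.get();
            onEvent(tx, Event.RESOURCE_METHOD_FINISH);
            return result;
        } catch (RuntimeException ex) {
            onEvent(tx, Event.ON_EXCEPTION);
            onEvent(tx, Event.RESOURCE_METHOD_FINISH);
            return "error-response"; // what an ExceptionMapper would produce
        }
    }
}
```

The point is that the rollback decision lives in the listener, outside both the resource method and the ExceptionMapper, so the mapper can keep producing the error response.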
Related
I have a problem with a caught RuntimeException marked as @ApplicationException(rollback=true) rolling back my database transaction and killing the transaction for the following actions.
@ApplicationException(rollback = true)
public class XyzValidationException extends RuntimeException {
...
}
The process is a batch import process, which imports mass data in chunks. When the transaction is rolled back, the whole chunk is rolled back and is then selected for import again, so the whole thing repeats in an endless loop.
The application server is JBoss 7.1 and the database is Oracle 11.2.
I want to catch the exception, mark the import source entity as faulty, log something, and carry on with the rest of the data.
But catching the exception doesn't prevent the transaction from being rolled back. I have read about this and understand that this behavior is normal.
But the thing is, how do you do it then? How do you configure the exception so that it doesn't roll back the transaction when it's caught, but still rolls it back when it's uncaught?
I could set the exception's annotation to @ApplicationException(rollback=false), but then I would prevent a rollback in a situation where the exception is thrown and not caught, right?
In other processes, a rollback could be sensible when this exception is thrown. There, I would simply not catch it.
Does anyone have an idea, how I could achieve this?
I have already tried changing the exception to a checked exception (... extends Exception), leaving the annotation at rollback=true.
It didn't change the behavior (I was thinking/hoping that rollback=true would perhaps only take effect for an uncaught exception and do the trick in my case... but no).
Then I tried the checked exception with rollback=false, and as expected, it did the trick. But as described, I don't want to deactivate the rollback completely, only when the exception is caught.
And, if possible, I'd like to stick with RuntimeException, as we have a policy to use RuntimeExceptions wherever possible, and the necessary throws declarations would otherwise spread across the application...
Thanks in advance...
Frank
There are different ways to manage this problem.
Use application exceptions. By default, all checked exceptions are application exceptions (except RemoteException). In the CMT model, this kind of exception doesn't cause an automatic rollback. Thus, you can handle an exception that occurs while processing a chunk and, for example, log something without triggering a rollback. For the remaining cases, use unchecked exceptions, which do cause an automatic rollback.
If you have a "policy" of sticking to unchecked exceptions in your code, you can declare a runtime exception like XyzValidationException and annotate it with @ApplicationException(rollback = false), so the transaction won't be rolled back in the cases where it is thrown and handled. For all other code where a rollback is necessary, you can use a plain RuntimeException, which is a system exception and causes an automatic rollback.
Have a look at CDI if it's possible in your project. It provides @Transactional, which includes properties such as rollbackOn and dontRollbackOn.
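The first two options boil down to catching a non-rollback exception per record so one bad record doesn't abort the chunk. A plain-Java sketch of that loop (Record handling, names like importRecord, and the empty-string validation rule are illustrative, not from the question's code; in the EJB, ValidationException being checked makes it an application exception with no automatic rollback):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the chunked-import idea: a checked validation exception is caught
// per record, the record is marked faulty, and the loop carries on, so the
// surrounding transaction is never rolled back by a single bad record.
public class ChunkImport {
    public static class ValidationException extends Exception {
        public ValidationException(String msg) { super(msg); }
    }

    // Throws only for records failing validation (empty = invalid, for the demo).
    static void importRecord(String record) throws ValidationException {
        if (record.isEmpty()) throw new ValidationException("empty record");
        // ... persist the record ...
    }

    // Returns the records marked as faulty; the rest are imported.
    public static List<String> importChunk(List<String> chunk) {
        List<String> faulty = new ArrayList<>();
        for (String record : chunk) {
            try {
                importRecord(record);
            } catch (ValidationException e) {
                faulty.add(record); // mark as faulty, log, and carry on
            }
        }
        return faulty;
    }
}
```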
I have a strange issue which is causing a serious double-booking problem for us.
We have MQ.NET code written in C# running on a Windows box that has MQ Client v7.5. The code places messages on an MQ queue. Once in a while the "put" operation works and the message is placed on the queue, but an MQException is still thrown with reason code 2009.
In this case, the program assumes that the put operation failed and places the same message on the queue again, which is not a desirable scenario. The assumption is that if the "put" resulted in an MQException, the operation failed. Any idea how to avoid this issue? See the client code below.
queue = queueManager.AccessQueue(queueName, MQC.MQOO_OUTPUT + MQC.MQOO_FAIL_IF_QUIESCING);
queueMessage = new MQMessage();
queueMessage.CharacterSet = 1208; // CCSID 1208 = UTF-8
queueMessage.WriteBytes(strInputMsg); // the UTF-8 encode/decode round-trip was redundant
queuePutMessageOptions = new MQPutMessageOptions();
queue.Put(queueMessage, queuePutMessageOptions);
Exception:
MQ Reason code: 2009, Exception: Error in the application.
StackTrace: at IBM.WMQ.MQBase.throwNewMQException()
at IBM.WMQ.MQDestination.Open(MQObjectDescriptor od)
at IBM.WMQ.MQQueue..ctor(MQQueueManager qMgr, String queueName, Int32 openOptions, String queueManagerName, String dynamicQueueName, String alternateUserId)
at IBM.WMQ.MQQueueManager.AccessQueue(String queueName, Int32 openOptions, String queueManagerName, String dynamicQueueName, String alternateUserId)
at IBM.WMQ.MQQueueManager.AccessQueue(String queueName, Int32 openOptions)
There is always an ambiguity of outcomes when using any async messaging over the network. Consider the following steps in the API call:
The client sends the API call to the server.
The server executes the API call.
The result is returned to the client.
Let's say the connection is lost prior to or during #1 above. The application gets the 2009 (MQRC_CONNECTION_BROKEN) and the message is never sent.
But what if the connection is lost after #1? The outcome of #2 cannot possibly be returned to the calling application: whether the PUT succeeded or failed, it always gets back a 2009. Maybe the message was sent and maybe it wasn't. The application should probably take the conservative option, assume it wasn't sent, and resend it. This results in duplicate messages.
Worse is when the application is getting a message. If the channel agent successfully gets the message but can't return it to the client, that message is irretrievably lost. Since the application didn't specify syncpoint, it wasn't MQ that lost the message but rather the application.
This is intrinsic to all types of async messaging. So much so that the JMS 1.1 specification specifically addresses it in section 4.4.13, Duplicate Production of Messages, which states:
If a failure occurs between the time a client commits its work on a Session and the commit method returns, the client cannot determine if the transaction was committed or rolled back. The same ambiguity exists when a failure occurs between the non-transactional send of a PERSISTENT message and the return from the sending method.
It is up to a JMS application to deal with this ambiguity. In some cases, this may cause a client to produce functionally duplicate messages.
A message that is redelivered due to session recovery is not considered a duplicate message.
This can be addressed in part by using syncpoint. Any PUT or GET under syncpoint will be rolled back if the call fails. The application can safely assume that it needs to PUT or GET the message again and no dupes or lost messages will result.
However, there is still the possibility that a 2009 will be returned on the COMMIT. At this point you do not know whether the transaction completed or not. If it is a two-phase commit (XA), the transaction manager will reconcile the outcome correctly. But if it is a one-phase commit, then you are back to not knowing whether the call succeeded or failed.
In the case that the app got a message under syncpoint, it will at least have either been processed or rolled back. This completely eliminates the possibility of losing persistent messages due to ambiguous outcomes. However if the app received a message and gets 2009 on the COMMIT then it may receive the same message again, depending on whether the connection failure occurred in #1 or #3 in the steps above. Similarly, a 2009 when committing a PUT can only be dealt with by retrying the PUT. This also potentially results in dupe messages.
So, short of using XA, any async messaging faces the possibility of duplicate messages due to connection exceptions and recovery. TCP/IP has become so reliable since MQ was invented that most applications ignore this architectural constraint without detrimental effects. Although that increased network reliability makes it less risky to design apps that don't gracefully handle dupes, it doesn't actually address the underlying architectural constraint. That can only be done in code, XA being one example. Many apps are written to gracefully handle dupe messages and do not need XA to address this problem.
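"Gracefully handling dupes" usually means an idempotent consumer: the producer resends on an ambiguous 2009 (the conservative option above), and the consumer drops any message whose id it has already processed. A minimal sketch, assuming the application can key on a unique message id (the MQ message id or an application-assigned one; names here are illustrative):

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Sketch of duplicate handling without XA: deduplicate by message id on the
// consuming side, so a resend after an ambiguous failure is harmless.
public class IdempotentConsumer {
    private final Set<String> seenIds = new HashSet<>();
    private final List<String> processed = new ArrayList<>();

    // Process a message only if its id has not been seen before.
    // Returns true if the message was processed, false if it was a duplicate.
    public boolean onMessage(String messageId, String body) {
        if (!seenIds.add(messageId)) {
            return false; // duplicate from a resend after a 2009: drop it
        }
        processed.add(body); // real code would do the business processing here
        return true;
    }

    public List<String> processed() { return processed; }
}
```

In production the seen-id set would live in a database shared with the business update, so the check and the processing commit atomically.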
Note: Paul Clarke (the guy who wrote much of the MQ channel code) is quick to point out that the ambiguity exists even when using bindings-mode connections. In 20 years of using WMQ I have yet to see a 2009 on a bindings-mode connection, but he says the shorter path to the QMgr doesn't eliminate the underlying architectural constraint any more than the reliable network does.
We have a main orchestration that has multiple sub-orchestrations. The root orchestration has transaction type None, hence all the subs are of the same nature. Any exception is caught in a parent scope of the main orchestration, where we have some steps like logging. The orchestration is activated by a message from App SQL. So every time an exception occurs, say due to something intermittent like being unable to connect to a web service, we later go and manually re-trigger it.
I'm looking at modifying the orchestration to be self-healing: from the exception catch block, reinitialize the messages based on conditions that indicate the issue was intermittent. For something like an application issue (a null reference), we would not want to resend the message, because the orchestration is never going to work.
There is a concept called compensation, but that is for transaction-based orchestrations: do n steps, and if any one fails, do m other steps (which take an alternative action or clean up).
The only idea I have is to do a look-up based on keywords in the exception and decide whether to resend the messages. But I want someone to challenge this or suggest a better approach.
I have always thought it's better to handle failures offline. So if the orchestration fails, terminate it. But before you terminate it, send a message out containing all the information necessary to recover the message processing if it turns out there was a temporary problem behind the failure. The message can be consumed by a "caretaker" process which is responsible for recovery.
This is similar to how the Erlang OTP framework approaches high availability. Processes fail quickly and caretaker processes make sure recovery happens.
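The caretaker's core decision is classifying a recorded failure as transient (resubmit) or permanent (park for manual review). A sketch of that decision, preferring exception types over the keyword lookup from the question, with keywords only as a fallback for failures that arrive as text (all names and the specific keyword list are illustrative assumptions):

```java
// Sketch of the caretaker's resubmit decision: transient failures get
// resubmitted, permanent ones (application bugs) get parked.
public class FailureClassifier {
    public enum Action { RESUBMIT, PARK }

    public static Action classify(Throwable t) {
        if (t instanceof java.net.ConnectException
                || t instanceof java.util.concurrent.TimeoutException) {
            return Action.RESUBMIT; // intermittent: a retry may succeed
        }
        if (t instanceof NullPointerException) {
            return Action.PARK;     // application bug: retrying cannot help
        }
        return classifyByKeyword(String.valueOf(t.getMessage()));
    }

    // Fallback for failures only available as text (e.g. logged messages).
    static Action classifyByKeyword(String message) {
        String m = message.toLowerCase();
        if (m.contains("timeout") || m.contains("connection refused")
                || m.contains("unavailable")) {
            return Action.RESUBMIT;
        }
        return Action.PARK; // unknown failures default to manual review
    }
}
```

Defaulting unknown failures to PARK rather than RESUBMIT is what prevents the endless retry loop the question describes.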
I have built several Mule processes that consume messages from JMS queues (ActiveMQ). Whenever a Mule component throws an exception, the transaction that consumed the message rolls back and the message gets redelivered to the original queue. After a few tries, it is sent to a dead letter queue (DLQ.queueName).
We have this working OK, but we are missing the exception that was thrown, either the first one or the last one, we don't care (it'll probably be the same). This is something that can be done on other brokers (like WebLogic JMS), but I've been struggling with it for a while to no avail.
Does anybody know if this is something that can be configured, or do I need to build a specific Mule exception handler or policy for ActiveMQ?
TIA,
Martin
That exception is lost in ActiveMQ at the moment (I don't know about Mule), but it is reported to the log as an error.
It would make a good enhancement: remember the string form of the exception in the ActiveMQConsumer and pass it back to the broker with the poison ack that forces the message to the DLQ. That way, it could be remembered as a message property in the resulting DLQ message.
How would you like to handle the exception, have it reported to a connection exception listener or have it recorded in the DLQ message?
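Until something like that enhancement exists, one interim workaround (not an ActiveMQ or Mule API, purely an application-side sketch) is to record the exception text keyed by the JMS message id before letting the transaction roll back, so whoever drains the DLQ can look up why the message failed:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch: side-channel registry correlating a message id with the last
// exception seen for it. In production this would be a shared store
// (database/cache), not an in-memory map.
public class FailureRegistry {
    private final Map<String, String> lastError = new ConcurrentHashMap<>();

    // Called from the component's catch block before rethrowing.
    public void record(String messageId, Exception e) {
        lastError.put(messageId, e.toString());
    }

    // Called by the DLQ consumer to recover the reason for the failure.
    public String reasonFor(String messageId) {
        return lastError.getOrDefault(messageId, "unknown");
    }
}
```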
What are the best practices for exceptions over remote methods?
I'm sure that you need to handle all exceptions at the level of a remote method implementation, because you need to log it on the server side. But what should you do afterwards?
Should you wrap the exception in a RemoteException (Java) and throw it to the client? This would mean that the client would have to import all exceptions that could be thrown. Would it be better to throw a new custom exception with fewer details? After all, the client won't need to know every detail of what went wrong. What should you log on the client? I've even heard of using return codes (for efficiency, maybe?) to tell the caller what happened.
The important thing to keep in mind, is that the client must be informed of what went wrong. A generic answer of "Something failed" or no notification at all is unacceptable. And what about runtime (unchecked) exceptions?
It seems like you want to be able to differentiate whether the failure was a system failure (e.g. a service or machine is down) or a business logic failure (e.g. the user does not exist).
I'd recommend wrapping all system exceptions from the RMI call with your own custom exception. You can still preserve the information in the original exception by passing it to your custom exception as the cause (this is possible in Java; I'm not sure about other languages). That way the client only needs to know how to handle one exception in the case of a system failure. Whether this custom exception is checked or runtime is up for debate (it probably depends on your project standards). I would definitely log this type of failure.
Business type failures can be represented as either a separate exception or some type of default (or null) response object. I would attempt to recover (i.e. take some alternative action) from this type of failure and log only if the recovery fails.
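A small sketch of that wrapping idea (ServiceException, findUser, and the validation rule are illustrative names, not from the question): one custom exception type crosses the boundary, with the low-level failure preserved as the cause.

```java
// Sketch: wrap a low-level failure in one custom exception while keeping the
// original as the cause, so the client handles a single type but the full
// detail survives for server-side logging.
public class ServiceException extends Exception {
    public ServiceException(String clientSafeMessage, Throwable cause) {
        super(clientSafeMessage, cause);
    }

    // Server-side call site: catch everything low-level, log, wrap, rethrow.
    public static String findUser(String id) throws ServiceException {
        try {
            if (id.isEmpty()) throw new IllegalArgumentException("id must not be empty");
            return "user-" + id; // stand-in for the real lookup
        } catch (RuntimeException e) {
            // log e here with full detail, then hand the client one type
            throw new ServiceException("lookup failed for id: " + id, e);
        }
    }
}
```

The client catches only ServiceException; getCause() is still there if a support engineer needs the detail.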
In past projects we'd catch all service-layer (tier) exceptions at the very top of the layer, passing application-specific error codes/information to the UI via DTOs/VOs. It's a simple approach in that there's an established pattern: all error handling happens in the same place for each service instead of being scattered across the service and UI layers.
Then all the UI has to do is inspect the DTO/VO for a flag (hasError?) and display the error message(s), it doesn't have to know nor care what the actual exception was.
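A sketch of such a DTO (class and field names are illustrative): the service returns a result object with an error flag and message instead of letting exceptions cross the tier boundary.

```java
// Sketch of the DTO-based approach: success carries a value, failure carries
// a flag plus a display-ready message; the UI never sees an exception.
public class ResultDto<T> {
    public final T value;
    public final boolean hasError;
    public final String errorMessage;

    private ResultDto(T value, boolean hasError, String errorMessage) {
        this.value = value;
        this.hasError = hasError;
        this.errorMessage = errorMessage;
    }

    public static <T> ResultDto<T> ok(T value) {
        return new ResultDto<>(value, false, null);
    }

    public static <T> ResultDto<T> error(String message) {
        return new ResultDto<>(null, true, message);
    }
}
```

The UI then checks hasError and, if set, renders errorMessage, without knowing or caring which exception occurred in the service.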
I would always log the exception within my application (at the server side as defined in your question).
I would then throw an exception, to be caught by the client. If the caller could take corrective action to prevent the exception then I would ensure that the exception contained this information (e.g. DateTime argName must not be in the past). If the error was caused by some outage of a third party system then I might pass this information up the call stack to the caller.
If, however, the exception was essentially caused by a bug in my system then I would structure my exception handling such that a non-informative exception message (e.g. General failure) was used.
Here's what I did. Every remote method implementation catches all exceptions on the server side and logs them. Then they are wrapped in a custom exception, which contains a description of the problem. This description must be useful to the client, so it won't contain all the details of the caught exception; the client doesn't need them, and they have already been logged on the server side. On the client, these exceptions can then be handled however the user wishes.
The reason I chose exceptions over return codes is one very important drawback of return codes: you can't propagate them to higher levels without some effort. That means you have to check for an error right after the call and handle it there, which may not be what you want.