While working on a BPMN model, we encountered an issue concerning exception flows.
Whenever an exception flow was needed, we gave it its own end event, although BPMN recommends using gateways to merge the normal flow with the exception flow. I can see no advantage in doing so, only some additional complications.
However, consider the case in which we have a subprocess containing an activity X with a non-interrupting event attached to its boundary. When the event is triggered, a parallel token is created and placed on the branch of the exception flow. After running through the exception flow, an end event consumes this token.
I assume this consumption doesn't trigger the upper-level flow (the one containing this particular subprocess) to continue, as there is still a token left in activity X. When this activity ends and the normal flow is executed, this token is consumed as well, and the subprocess no longer contains any token. This will trigger the upper-level flow to continue.
As this is the case, I can't think of any case where merging the exception flow and the normal flow should be necessary. (Except for the one where activities after activity X have to be run in the exception flow as well, causing them to be executed multiple times.)
I assume this consumption doesn't trigger the upper-level flow (the one containing this particular subprocess) to continue, as there is still a token left in activity X.
This is an accurate statement; the following models have identical semantics (the non-interrupting multiple event trigger is used as a placeholder):
The governing section in the BPMN specification is Section 10.5.3, which requires that "all the tokens that were generated within the process MUST be consumed by an End Event before the Process has been completed."
As this is the case, I can't think of any case where merging the exception flow and the normal flow should be necessary.
One case where the merger may be necessary occurs when the two flows must be merged before a task later in the process can commence. As a simple example, take the following models:
In the model at top, Activity Y may commence as soon as Activity X completes, regardless of whether instances of the Exceptional activity are running in parallel. In the model at bottom, Activity Y cannot start until all instances (if any) of the Exceptional activity have completed. If the semantics of the second example are wanted, then a merger of the normal and exception flows is needed.
The last diagram is not only inaccurate but, effectively, invalid.
By definition, the exception flow cannot form part of the normal flow; by definition, the exception flow cannot be evaluated using IF.
The final diagram (erroneously) introduces a gateway which will eventually receive two tokens and (erroneously) trigger Activity Y twice.
I would suggest that the language is wrong: IF the exceptional flow must be complete before Y triggers, the author has described an INTERRUPTING BOUNDARY EVENT. Depicting that event solves the modelling problem introduced by the poor grammar. Only one token is produced, and Activity Y is triggered only once.
Is there any way I can limit the apply method to being called just once in the Transaction Processor? By default it is called twice.
I guess your question is based on the log traces you see.
Short answer: the apply method, that is, the core business logic in your transaction family, is executed once for an input transaction. There is a different reason why you see the logs appear twice. In reality, transaction execution happens and state transitions are defined with respect to a context. Read the long answer for a detailed understanding.
Long answer: If you're observing logs, then you may need to go a little deeper into the way Hyperledger Sawtooth works. Let's get started.
Flow of events (at very high level):
The client sends the transaction, embedded in a batch.
The validator adds all the transactions to its pending queue.
At the consensus engine's request, the validator starts creating a block.
For block creation, the current state context information is passed along with the transaction request to the Transaction Processor, which eventually dispatches it to the right transaction family's apply method.
The apply method's result, success or failure, is recorded. The transaction is removed from the pending queue if it is invalid; otherwise it is added to the block.
If the apply method's response is an internal error, the transaction is resubmitted.
Once the transaction is added to the block then, depending on the consensus algorithm, the created block is broadcast to all the nodes.
Every node executes the transactions in the arriving block; the node that created the block executes them as well. This is probably what you're seeing.
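The two executions can be sketched with a toy model. This is not the real Sawtooth SDK, and every name here is illustrative only; it just shows why one transaction produces two log traces on the publishing node:

```python
# Toy model -- NOT the real Sawtooth SDK; all names are illustrative.
# The publishing validator runs apply() while creating the block, and
# every node (the publisher included) runs it again while validating
# the broadcast block.
calls = []

def apply(txn, context):
    calls.append(txn)  # your transaction family's business logic runs here

def create_block(txn):
    apply(txn, context={})        # execution during block creation
    return {"txns": [txn]}

def validate_block(block):
    for txn in block["txns"]:
        apply(txn, context={})    # execution again during block validation

block = create_block("txn-1")
validate_block(block)
# calls == ["txn-1", "txn-1"]: one transaction, two apply() invocations
```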
Propagating an exception to the caller/front-end is very straightforward for HTTP calls between microservices.
But how do you propagate an exception from an event-driven/message-queue (e.g. RabbitMQ) microservice to the caller/front-end?
I would recommend Cadence Workflow, which is a much more powerful solution for microservice orchestration and provides exception propagation across long-running operations out of the box.
It offers a lot of other advantages over using queues for your use case.
Built-in exponential retries with an unlimited expiration interval
Support for long-running heartbeating operations
Ability to implement complex task dependencies, for example chaining of calls or compensation logic in case of unrecoverable failures (SAGA)
Complete visibility into the current state of the update. For example, when using queues, all you know is whether there are messages in a queue, and you need an additional DB to track the overall progress. With Cadence, every event is recorded.
Ability to cancel an update in flight.
See the presentation that goes over Cadence programming model.
I'm working with the RISC-V specification and have a problem with pending interrupts/exceptions. I'm reading version 1.10 of Volume II, published on May 7, 2017.
In Section 3.1.14, which describes the registers mip and mie, it is said that:
Multiple simultaneous interrupts and traps at the same privilege level are handled in the following decreasing priority order: external interrupts, software interrupts, timer interrupts, then finally any synchronous traps.
Up until that point I thought that exceptions, e.g. a misaligned instruction fetch exception on a JAL/JALR instruction, would be handled immediately by a trap because
a) there is no way to continue executing your stream of instructions and
b) there is no description of how an exception could be pending, i.e. there are no concepts described by the specification that could manage state for exceptions (for example registers like mip but for exceptions).
However, the paragraph cited above indicates something different.
My questions are:
Are there pending exceptions in RISC-V?
If yes, how is it possible that the exception can still be handled after an interrupt has been handled, rather than being forgotten?
In my opinion there are pending exceptions in RISC-V, exactly for the reason you stated. It is a matter of semantics: if two events occur simultaneously and one is deferred, it must be pending. One must cater for the possibility of an asynchronous event (an interrupt) occurring simultaneously with a trap, and (by Section 3.1.14) the asynchronous event has priority. Depending on the implementation, one does not necessarily need to save any state in this case: after the interrupt is handled, the instruction that triggers the trap is re-fetched and duly leads to an exception. In my view, Section 3.1.14 describes the serialization of asynchronous events.
What are the applications and advantages of explicitly raising exceptions in a program? For example, the Ada language, considered specifically here, provides an interface to raise exceptions in the program. Example:
raise <Exception>;
But what are the advantages and application areas where we would need to raise exceptions explicitly?
For example, in a function which accepts a string as one of its parameters:
function Fixed_Str_To_Chr_Ptr (Source_String : String) return C.Strings.Chars_Ptr is
...
begin
...
-- Check whether source string is of acceptable length
if Source_String'Length <= 100 then
...
else
...
raise Constraint_Error;
end if;
return Ptr;
exception
when Constraint_Error =>
... -- Do something
end Fixed_Str_To_Chr_Ptr;
Is there any advantage or good practice in raising an exception in the above function and handling it when the passed string's length exceeds the tolerable limit? Or should simple if/else logic do the job?
I'll make my 2 cents an answer in order to bundle the various aspects. Let's start with the general question:
But what are the advantages and application areas where we would need to raise exceptions explicitly?
There are a few typical reasons for raising exceptions. Most of them are not Ada-specific.
First of all there may be a general design decision to use or not use exceptions. Some general criteria:
Exception handlers may incur a run time cost even if an exception is actually never thrown (cf. e.g. https://gcc.gnu.org/onlinedocs/gnat_ugn/Exception-Handling-Control.html). That may be unacceptable.
Issues of inter-operability with other languages may preclude the use of exceptions, or at least require that none leave the part programmed in Ada.
To a degree the decision is also a matter of taste. A programmer coming from a language without exceptions may feel more confident with a design which just relies on checking return values.
Some programs will benefit from exceptions more than others. If traditional error handling obscures the actual program structure it may be time for exceptions. If, on the other hand, potential errors are few, easily detected and can be handled locally, exceptions may obscure potential execution paths more than handling errors traditionally would.
Once the general decision to use exceptions has been made, the question arises of when it is and is not appropriate to raise them in your code. I mentioned one general criterion in my comment. What comes to mind:
Exceptions should not be part of normal, expected program flow (they are called exceptions, not expectations ;-) ). This is partly because the control flow is harder to see and partly because of the potential run time cost.
Errors which can be handled locally don't need exceptions. (It can still be useful to raise one in order to have a uniform error handling though. I'll discuss that below when I get to your code snippet.)
On the other hand, exceptions are great if a function has no idea how to handle errors. This is particularly true for utility and library functions which can be called from a variety of contexts (GUI, console program, embedded, server ...). Exceptions allow the error to propagate up the call chain until somebody can handle it, without any error handling code in the intervening layers.
Some people say that a library should only expose custom exceptions, at least for any anticipated errors. E.g. when an I/O exception occurs, wrap it in a custom exception and explicitly raise that custom exception instead.
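That wrapping pattern might look roughly like this in Python (StorageError and load_config are invented names, not a real library API):

```python
# Hedged sketch: wrap a low-level I/O exception in a custom library-level
# exception, so callers only ever see the library's own exception type.
class StorageError(Exception):
    """Custom exception for anticipated I/O failures."""

def load_config(path):
    try:
        with open(path) as f:
            return f.read()
    except OSError as exc:
        # Keep the original exception attached as the cause for debugging.
        raise StorageError(f"cannot load config from {path}") from exc
```

A caller then handles StorageError without knowing or caring which underlying I/O problem occurred.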
Now to your specific code question:
Is there any advantage or good practice in raising an exception in the above function and handling it when the passed string's length exceeds the tolerable limit? Or should simple if/else logic do the job?
I don't particularly like that (although I don't find it terrible), because my general argument above ("if you can handle it locally, don't raise") would indicate that a simple if/else is clearer.1 For example, if the function is long, the exception handler will be far away from the error location, so one may wonder where exactly the exception could occur (and finding one raise location is no guarantee that one has found them all, so the reviewer must scrutinize the whole function!).
It depends a bit on the specific circumstances though. Raising an exception may be elegant if an error can happen in several places. For example, if several strings can be too short it may be nice to have a centralized error handling through the exception handler, instead of scattering if/then/elses (nested??) across the function body. The situation is so common that a legitimate case can be made for using goto constructs in languages without exceptions. An exception is then clearly superior.
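A minimal Python sketch of that centralized style (all names are illustrative): several strings can each fail a length check, each check is a one-line raise, and a single handler deals with all of them.

```python
# One raise site per check, one centralized handler -- instead of nested
# if/then/else chains scattered across the function body.
def build_record(first, last, email):
    try:
        for field, value, min_len in (("first", first, 1),
                                      ("last", last, 1),
                                      ("email", email, 3)):
            if len(value) < min_len:
                raise ValueError(f"{field} too short")
        return {"first": first, "last": last, "email": email}
    except ValueError as exc:
        # Centralized error handling for all the checks above.
        return {"error": str(exc)}
```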
1 But in all reality, how do you handle that error there? Do you have a guaranteed logging facility? What do you return? Does the caller know the result can be invalid? Maybe you should throw and not catch.
There are two problems with the given example:
It's simple enough that control flow doesn't need the exception. That won't always be the case, however, and I'll come back to that in a moment.
Constraint_Error is a spectacularly bad exception to raise, to detect a string length error. The standard exceptions Program_Error, Constraint_Error, Storage_Error ought to be reserved for programming error conditions, and in most circumstances ought to bring down the executable before it can do any damage, with enough debugging information (a stack traceback at the very least) to let you find the mistake and guarantee it never happens again.
It's remarkably satisfying to get a Constraint_Error pointing spookily close to your mistake, instead of whatever undefined behaviour happens much later on... (It's useful to learn how to turn on stack tracebacks, which aren't generally on by default).
Instead, you probably want to define your own String_Size_Error exception, raise that and handle it. Then, anything else in your unshown code that raises Constraint_Error will be properly debugged instead of silently generating a faulty Chars_Ptr.
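The same idea, sketched in Python rather than Ada (StringSizeError and to_char_ptr are hypothetical stand-ins for String_Size_Error and Fixed_Str_To_Chr_Ptr):

```python
# Define an application-specific exception for the anticipated error,
# leaving the built-in exceptions reserved for genuine programming bugs.
class StringSizeError(Exception):
    """Raised when an input string exceeds the tolerable length."""

def to_char_ptr(source):
    if len(source) > 100:
        raise StringSizeError(f"string too long: {len(source)} > 100")
    return source + "\0"  # illustrative conversion result
```

A caller can now catch StringSizeError specifically, while an unexpected error from a real bug still escapes and draws attention to itself.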
For a valid use case for raising exceptions, consider a circuit simulator such as SPICE (or a CFD simulator for gas flow, etc). These tools, even when working properly, are prone to failures thanks to numerical problems that happen in matrix computations. (Two terms cancel, producing zero +/- rounding error, which causes infeasibly large numbers or divide-by-zero later on). It's often an iterative approximation, where the error should reduce in each step until it's an acceptably low value. But if a failure occurs, the error term will start growing...
Typically the simulation happens step by step, where each step is a sufficiently small time step, maybe 1 us or 1 ns. The main loop requests a step, and this request is passed to thousands of agents in the simulation representing components in a circuit, or triangles in a CFD mesh.
Any one of those agents may fail to compute a solution, and the cleanest way to handle a failure is to raise an exception, maybe Convergence_Error. There may be thousands of possible points where an exception can be raised.
Testing thousands of return codes would get ugly fast. But with exceptions, the main loop only needs one handler, which takes some corrective action such as reducing the simulation step size and running the step again.
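That loop structure might look roughly like this in Python (ConvergenceError, simulate and solve_step are invented names; step-halving is just one possible corrective action):

```python
class ConvergenceError(Exception):
    """Raised by any agent that fails to compute a solution for a step."""

def simulate(total_time, initial_step, solve_step, min_step=1e-12):
    """Run the simulation step by step; on a convergence failure anywhere
    inside solve_step, halve the step size and retry that step."""
    t, dt = 0.0, initial_step
    while t < total_time:
        try:
            solve_step(t, dt)      # may raise ConvergenceError deep inside
        except ConvergenceError:
            dt /= 2                # corrective action: smaller time step
            if dt < min_step:
                raise              # give up and let the caller decide
            continue               # retry the same step
        t += dt
    return t
```

The thousands of possible failure points inside solve_step all funnel into the one handler in the main loop.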
Sanitizing user text input in a browser may be another good use case, closer to the example code.
One word on the runtime cost of exceptions: the GNAT compiler and its RTS support a "Zero Cost Exception" (ZCX) model, at least for some targets. There's a larger penalty when an exception is raised, as a tradeoff against eliminating the penalty in the normal case. If the penalty matters to you, refer to the documentation to see if it's worthwhile (or even possible) in your case.
You raise an exception explicitly to control which exception is reported to the user of a subprogram, or in some cases just to control the message associated with the raised exception.
In very special cases you may also raise an exception as a program flow control.
Exceptions should stay true to their name, which is to represent exceptional situations.
This is more a general-purpose programming question than a language-specific one. I've seen several approaches to try/catch.
One is that you do whatever preprocessing on the data you need, call a function with the appropriate arguments, and wrap it in a try/catch block.
The other is to simply call a function, pass the data, and rely on try/catches within the function, with the function returning a true/false flag indicating whether errors occurred.
The third is a combination, with a try/catch both outside the function and inside. If the function's try/catch catches something, it throws another exception for the try/catch block outside the function to catch.
Any thoughts on the pros/cons of these methods for error control, or is there an accepted standard? My googling ninja skills have failed me in finding accurate data on this.
In general, an exception should only be caught if it can actually be handled.
It makes no sense to catch an exception for no purpose other than to log it. The exception to this rule is that exceptions should be caught at the "top level" so that they can be logged. All other code should allow exceptions to propagate to the code that will log them.
I think the best way to think about this is in terms of program state. You don't want a failed operation to damage program state. This paper describes the concept of "Exception Safety".
In general, you first need to decide what level of exception safety a function needs to guarantee. The levels are
Basic Guarantee
Strong Guarantee
NoThrow Guarantee
The basic guarantee simply means that, in the face of an exception or other error, no resources are leaked; the strong guarantee says that the program state is rolled back to what it was before the exception; and nothrow methods never throw exceptions.
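The strong guarantee can be sketched in Python with a "mutate a copy, publish on success" approach (add_items and its validation rule are invented for illustration):

```python
# Strong guarantee sketch: all work happens on a copy, and the real state
# is only replaced once every step has succeeded.
def add_items(target, items, validate):
    candidate = list(target)          # work on a copy of the state
    for item in items:
        if not validate(item):
            raise ValueError(f"invalid item: {item!r}")
        candidate.append(item)
    # Single commit step: a failure above leaves the original untouched.
    target[:] = candidate
```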
I personally use exceptions when an unexpected, runtime failure occurs. Unexpected means to me that such a failure should not occur in the normal course of operations. Runtime means that the error is due to the state of some external component outside of my control, as opposed to due to logic errors on my part. I use ASSERT()'s to catch logic errors, and I use boolean return values for expected errors.
Why? ASSERT isn't compiled into release code, so I don't burden my users with error checking for my own failures. That's what unit tests and ASSERTS are for. Booleans because throwing an exception can give the wrong message. Exceptions can be expensive, too. If I throw exceptions in the normal course of application execution, then I can't use the MS Visual Studio debugger's excellent "Catch on thrown" exception feature, where I can have the debugger break a program at the point that any exception is thrown, rather than the default of only stopping at unhandled (crashing) exceptions.
To see a C++ technique for the basic guarantee, google "RAII" (Resource Acquisition Is Initialization). It's a technique where you wrap a resource in an object whose constructor allocates the resource and whose destructor frees the resource. Since C++ exceptions unwind the stack, this guarantees that resources are freed in the face of exceptions. You can also use this technique to roll back program state in the face of an exception: just add a "Commit" method to an object, and if the object isn't committed before it is destroyed, run a "Rollback" operation in the destructor that restores program state.
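Python lacks deterministic destructors, but the commit/rollback idea can be approximated with a context manager playing the destructor's role (this Transaction class is an illustrative sketch, not a library API):

```python
class Transaction:
    """Guard that rolls back recorded undo actions unless committed."""
    def __init__(self):
        self._undo = []
        self._committed = False

    def record(self, undo_action):
        """Register a callable that reverses one completed step."""
        self._undo.append(undo_action)

    def commit(self):
        self._committed = True

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        if not self._committed:
            # "Destructor" path: roll back in reverse order of the steps.
            for action in reversed(self._undo):
                action()
        return False  # never swallow the exception itself
```

If an exception escapes the with-block before commit() is called, the recorded steps are undone and program state is restored.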
Every "module" in an application is responsible for handling its own input parameters. Normally, you should find issues as soon as possible and not hand garbage to another part of the application and rely on them to be correct. There are exceptions though. Sometimes validating an input parameter basically needs reimplementing what the callee is supposed to do (e.g. parsing an integer) in the caller. In this case, it's usually appropriate to try the operation and see if it works or not. Moreover, there are some operations that you can't predict their success without doing them. For example, you can't reliably check if you can write to a file before writing to it: another process might immediately lock the file after your check.
There are no real hard and fast rules around exception handling that I have encountered, however I have a number of general rules of thumb that I like to apply.
Even if some exceptions are handled at the lower layers of your system, make sure there is a catch-all exception handler at each entry point of your system (e.g. when you implement a new Thread (i.e. Runnable), Servlet, MessageDrivenBean, server socket, etc.). This is often the best place to make the final decision as to how your system should continue (log and retry, exit with an error, roll back the transaction).
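A minimal Python sketch of such a catch-all at a worker's entry point (worker is an invented name; here the "final decision" is simply to log and continue):

```python
import logging

def worker(job):
    """Entry point of a thread/consumer: the last place to decide how the
    system should continue when something unhandled escapes."""
    try:
        job()
    except Exception:
        # Final decision point: log anything unhandled so that one bad job
        # cannot kill the whole worker.
        logging.exception("unhandled error in worker")
```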
Never throw an exception within a finally block; you will lose the original exception and mask the real problem with an unimportant error.
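A short Python demonstration of that masking effect (the function name is invented; in Python the original exception survives only as __context__, while in some languages it is lost entirely):

```python
def cleanup_masks_error():
    try:
        raise ValueError("the real problem")
    finally:
        raise RuntimeError("unimportant cleanup error")

try:
    cleanup_masks_error()
except Exception as e:
    masked_by = type(e).__name__
# masked_by is "RuntimeError": the ValueError never reaches the caller,
# so the real problem is hidden behind the cleanup error.
```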
Apart from that it depends on the function that you are implementing. Are you in a loop, should the rest of the items be retried or the whole list aborted?
If you rethrow an exception, avoid logging it, as that will just add noise to your logs.
I generally consider whether, as the caller of the method, I can use an exception in any way (for example, to recover from it by taking a different approach), or whether it makes no difference and something simply went wrong if the exception occurs. In the former case I'll declare the method to throw the exception, while in the latter I'll catch it inside the method and not bother the caller with it.
The only question on catching exceptions is "are there multiple strategies for getting something done?"
Some functions can meaningfully catch certain exceptions and try alternative strategies in the event of those known exceptions.
All other exceptions will be thrown.
If there are no alternative strategies, the exception is simply thrown.
Rarely do you want a function to catch (and silence) exceptions. An exception means that something is wrong. The application -- as a whole -- should be aware of unhandled exceptions. It should at least log them, and perhaps do even more: shut down or perhaps restart.