Rollback exception strategy for Mule request-response VM

I am using a Mule request-response VM and need rolled-back messages to be reprocessed by the VM in case of certain exceptions, say connection issues. However, the rollback exception strategy does not appear to work when I use the request-response exchange pattern for the VM. The reason I used request-response is that I need a way to know when all my VM messages have been processed, so I can initiate another task after that. I think the behavior is that when there is an exception, the rollback strategy catches the exception and probably commits it; I do not see it trying to redeliver the message back to the VM. It works fine when the exchange pattern is one-way.
<flow name="vmtransactionrollbackFlow">
    <http:listener config-ref="HTTP_Listener_Configuration" path="/myvm" doc:name="HTTP"/>
    <set-payload value="Dummy list payload" doc:name="Set Payload"/>
    <foreach doc:name="For Each">
        <vm:outbound-endpoint exchange-pattern="request-response" path="myvm" connector-ref="VM" doc:name="VM">
            <vm:transaction action="ALWAYS_BEGIN"/>
        </vm:outbound-endpoint>
    </foreach>
    <logger message="DO SOMETHING ONLY AFTER ALL MESSAGES IN VM ARE PROCESSED" level="INFO" doc:name="Logger"/>
</flow>
<flow name="vmtransactionrollbackFlow1">
    <vm:inbound-endpoint exchange-pattern="request-response" path="myvm" connector-ref="VM" doc:name="VM">
        <vm:transaction action="BEGIN_OR_JOIN"/>
    </vm:inbound-endpoint>
    <scripting:component doc:name="Groovy">
        <scripting:script engine="Groovy"><![CDATA[throw new java.lang.Exception("Test exception");]]></scripting:script>
    </scripting:component>
    <rollback-exception-strategy maxRedeliveryAttempts="3" doc:name="Rollback Exception Strategy">
        <logger message="Rolling back #[payload]" level="INFO" doc:name="Logger"/>
        <on-redelivery-attempts-exceeded>
            <logger message="Redelivery exhausted: #[payload]" level="INFO" doc:name="Logger"/>
        </on-redelivery-attempts-exceeded>
    </rollback-exception-strategy>
</flow>

Yes, I ran into a similar problem: when the VM outbound endpoint uses the request-response exchange pattern, it behaves more like a flow-ref, with no queue involved per se, and hence no redelivery mechanism.
If the VM endpoints are configured as one-way and the inbound flow's processing strategy is synchronous, then redelivery does kick in.
To achieve what you want, you could use an until-successful scope within the vmtransactionrollbackFlow1 flow; for intermittent connection losses this is actually the recommended approach, and it does not require transactions at all.
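A minimal sketch of that suggestion, assuming Mule 3.5+ where until-successful supports synchronous mode (so the request-response caller still waits for the outcome); the retry counts and delay here are illustrative, not from the original flow:

```xml
<flow name="vmtransactionrollbackFlow1">
    <vm:inbound-endpoint exchange-pattern="request-response" path="myvm" connector-ref="VM" doc:name="VM"/>
    <!-- Synchronously retry the failing step up to 3 times, 5 s apart, instead of using transactions -->
    <until-successful synchronous="true" maxRetries="3" millisBetweenRetries="5000">
        <scripting:component doc:name="Groovy">
            <scripting:script engine="Groovy"><![CDATA[throw new java.lang.Exception("Test exception");]]></scripting:script>
        </scripting:component>
    </until-successful>
</flow>
```

In synchronous mode no object store is needed, and a failure after the last retry propagates back to the caller as an ordinary messaging exception.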
Do let us know how it goes, and if you found some other workaround.

Related

Mule Custom Exception Class not catching some exceptions

I have a Mule flow which obtains an oauth token from a service which may throw a fault. However the exception is not caught in the flow even though there is a catch exception strategy at the end. Can someone explain why the exception is not caught? When I post XML via SOAP UI using an invalid token to trigger an exception, the request gets to the flow, but the exception is not caught. Instead I get a stack trace indicating an invalid token. Here is the flow:
<flow name="order-query">
    <http:listener config-ref="HTTP_config" path="order/request" doc:name="HTTP" />
    <flow-ref name="oauth-token-service"/>
    <cxf:jaxws-service doc:name="SPOP SOAP" serviceClass="o.x.p.SpopWS">
        <cxf:inInterceptors>
            <spring:ref bean="HeaderInInterceptor" />
        </cxf:inInterceptors>
        <cxf:outInterceptors>
            <spring:ref bean="faultOutInterceptor" />
            <spring:ref bean="headerOutInterceptor" />
        </cxf:outInterceptors>
        <cxf:outFaultInterceptors>
            <spring:ref bean="OutSoapFaultInterceptor" />
        </cxf:outFaultInterceptors>
    </cxf:jaxws-service>
    <scripting:transformer>
        <scripting:script engine="python">
            ...
        </scripting:script>
    </scripting:transformer>
    <catch-exception-strategy>
        <logger level="INFO" message="Should be handled here #[payload]"/>
    </catch-exception-strategy>
</flow>
Are you sure the exception is not caught? The default behavior of catch-exception-strategy is to log the caught exception, which is why you see the stack trace in the logs.
For Mule 3.8 and above, you can disable or enable this behavior either with a checkbox (Log Exceptions) or in XML (logException="false"):
<catch-exception-strategy logException="false" doc:name="Catch Exception Strategy">
    <logger level="INFO" doc:name="Logger"/>
</catch-exception-strategy>
For Mule 3.7, take a look here: https://stackoverflow.com/a/42181054/804521

Mule always uses the default exception handler

I cannot catch a basic org.mule exception triggered by a poller component; Mule keeps using the default exception handling mechanism (I tried both global and local strategies).
When the exception below is thrown, I would like to log a custom message, just for testing purposes; further enhancements will come once this works properly.
Message : Failed to move file "C:\Users\Administrator\Desktop\shared_folder\12131551.XML" to "C:\Users\Administrator\Desktop\archive\backup\12131551.XML.backup". The file might already exist.
Code : MULE_ERROR-3
Exception stack is:
1. Failed to move file "C:\Users\Administrator\Desktop\shared_folder\12131551.XML" to "C:\Users\Administrator\Desktop\archive\backup\12131551.XML.backup". The file might already exist. (org.mule.api.DefaultMuleException)
org.mule.transport.file.FileMessageReceiver:553
Root Exception stack trace:
org.mule.api.DefaultMuleException: Failed to move file "C:\Users\Administrator\Desktop\shared_folder\12131551.XML" to "C:\Users\Administrator\Desktop\archive\backup\12131551.XML.backup". The file might already exist.
at org.mule.transport.file.FileMessageReceiver.moveAndDelete(FileMessageReceiver.java:553)
at org.mule.transport.file.FileMessageReceiver.access$400(FileMessageReceiver.java:62)
at org.mule.transport.file.FileMessageReceiver$2.process(FileMessageReceiver.java:414)
+ 3 more (set debug level logging or '-Dmule.verbose.exceptions=true' for everything)
This is my PoC
<file:connector name="XML_poller" autoDelete="false" streaming="false" validateConnections="true" pollingFrequency="5000" doc:name="File"/>
<file:connector name="output" doc:name="File" autoDelete="false" streaming="false" validateConnections="true"/>
<flow name="exceptionStrategyExample" doc:name="exceptionStrategyExample">
    <file:inbound-endpoint connector-ref="XML_poller" path="C:\Users\Administrator\Desktop\shared_folder"
            moveToDirectory="C:\Users\Administrator\Desktop\archive\backup"
            moveToPattern="#[header:originalFilename].backup" doc:name="Poller" responseTimeout="10000">
        <file:filename-wildcard-filter pattern="*.xml" caseSensitive="false"/>
    </file:inbound-endpoint>
    <http:outbound-endpoint exchange-pattern="request-response" host="localhost" port="8081" method="POST" doc:name="HTTP"/>
    <choice-exception-strategy>
        <rollback-exception-strategy when="exception.causedBy(java.lang.IllegalStateException)" maxRedeliveryAttempts="3">
            <logger message="Retrying shipping cost calc." level="WARN" />
            <on-redelivery-attempts-exceeded>
                <logger message="Too many retries shipping cost calc." level="WARN" />
                <set-payload value="Error: #[exception.summaryMessage]"/>
            </on-redelivery-attempts-exceeded>
        </rollback-exception-strategy>
        <catch-exception-strategy doc:name="Catch Exception Strategy" when="exception.causedBy(org.mule.*)">
            <logger message="************TEST***************" level="INFO" doc:name="Logger"/>
        </catch-exception-strategy>
    </choice-exception-strategy>
</flow>
It simply does nothing. Any hints?
I think this is a case of a system exception, where no message is created that could be caught by the exception handling components (see the Mule docs on system vs. messaging exceptions). You could try either writing a custom message receiver that overrides the processFile method (see this post for inspiration), or checking for duplicate files manually and using a separate file:outbound-endpoint to write the file.
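A rough sketch of the second option, reusing the question's connectors: archiving moves into the flow body, where a failure raises a messaging exception that the strategies can catch. This is illustrative only; you would still need autoDelete or a similar mechanism on the poller to avoid re-reading the same file.

```xml
<flow name="pollAndArchive">
    <!-- Poll without moveToDirectory, so nothing is moved at receive time -->
    <file:inbound-endpoint connector-ref="XML_poller" path="C:\Users\Administrator\Desktop\shared_folder">
        <file:filename-wildcard-filter pattern="*.xml" caseSensitive="false"/>
    </file:inbound-endpoint>
    <http:outbound-endpoint exchange-pattern="request-response" host="localhost" port="8081" method="POST"/>
    <!-- Archive inside the flow, where failures are messaging exceptions -->
    <file:outbound-endpoint connector-ref="output" path="C:\Users\Administrator\Desktop\archive\backup"
            outputPattern="#[header:originalFilename].backup"/>
    <catch-exception-strategy>
        <logger message="Archiving failed: #[exception.summaryMessage]" level="WARN"/>
    </catch-exception-strategy>
</flow>
```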
I have found a workaround: insert a processor-chain element before the file connector and put a dummy set-payload in it. That way a message is always created, and the DefaultExceptionStrategy will not be used for handling the errors.

logback - remapping a log level for a specific logger

I have a logback configuration that has an appender with a threshold filter:
<appender name="SYSLOG" class="ch.qos.logback.classic.net.SyslogAppender">
<filter class="ch.qos.logback.classic.filter.ThresholdFilter">
<level>INFO</level>
</filter>
...
</appender>
This ensures that only INFO and higher (WARN, ERROR) get logged to syslog. However, one of the third-party libraries we use logs a particular event at DEBUG, and I would like to log that event to syslog. The first approach I had in mind was to remap the log level in the logger, but I am not sure if that is possible. Something like:
<logger name="akka.some.Thing" level="DEBUG" logAs="INFO">
<appender-ref ref="SYSLOG" />
</logger>
Obviously, the "logAs" parameter doesn't exist, so I can't do that. What would be the best approach to logging akka.some.Thing to the SYSLOG appender while leaving the filter in place for other loggers?
The other approach would be to create a 2nd appender called SYSLOG2 that doesn't have the filter in place and set the specific logger to use that, but was wondering if there was a way to configure logback with just 1 SYSLOG appender...
Thanks,
I know this is an old question - but it is actually possible to do what the OP wants to do with a single SyslogAppender.
If others are searching for an example of how to remap you can take a look at the org.springframework.boot.logging.logback.LevelRemappingAppender class.
With that appender it is possible to both remap what appender is finally used for the log event, and it is also possible to remap the level that is used for the final log event - e.g. by changing a DEBUG level into an INFO level.
Usage example in logback config file (taken from https://github.com/spring-projects/spring-boot/blob/master/spring-boot/src/main/resources/org/springframework/boot/logging/logback/defaults.xml):
<appender name="DEBUG_LEVEL_REMAPPER" class="org.springframework.boot.logging.logback.LevelRemappingAppender">
    <!-- Optional: specify the destination logger the event ends up in -->
    <destinationLogger>org.springframework.boot</destinationLogger>
    <!-- Optional: specify log level remapping -->
    <remapLevels>INFO->DEBUG,ERROR->WARN</remapLevels>
</appender>
<logger name="org.thymeleaf" additivity="false">
    <appender-ref ref="DEBUG_LEVEL_REMAPPER"/>
</logger>
Note that remapping to a specific destination logger can make it harder to find the source code of the original log event - so use it with care.
What you can do is write a second logger and appender with the same output:
<appender name="SYSLOG-2" class="ch.qos.logback.classic.net.SyslogAppender">
    <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
        <level>DEBUG</level>
    </filter>
    ...
</appender>
<logger name="akka.some.Thing" level="DEBUG">
    <appender-ref ref="SYSLOG-2" />
</logger>
This will send your specific DEBUG events to the same output.

Mule Functional Tests - totally confused

We have a Mule application with six or seven flows and around five components per flow.
Here is the setup.
We send JMS requests to an ActiveMQ queue, which Mule listens to. Based on the content of the message, we forward it to the corresponding flow.
<flow name="MyAPPAutomationFlow" doc:name="MyAPPAutomationFlow">
    <composite-source>
        <jms:inbound-endpoint queue="MyAPPOrderQ" connector-ref="Active_MQ_1" doc:name="AMQ1 Inbound Endpoint"/>
        <jms:inbound-endpoint queue="MyAPPOrderQ" connector-ref="Active_MQ_2" doc:name="AMQ2 Inbound Endpoint"/>
    </composite-source>
    <choice doc:name="Choice">
        <when expression="payload.getProcessOrder().getOrderType().toString().equals(&quot;ANC&quot;)" evaluator="groovy">
            <processor-chain>
                <flow-ref name="ProcessOneFLow" doc:name="Go to ProcessOneFLow"/>
            </processor-chain>
        </when>
        <when....
        ...........
    </choice>
</flow>
<flow name="ProcessOneFLow" doc:name="ProcessOneFLow">
    <vm:inbound-endpoint exchange-pattern="one-way" path="ProcessOneFLow" responseTimeout="10000" mimeType="text/xml" doc:name="New Process Order"/>
    <component doc:name="Create A">
        <spring-object bean="createA"/>
    </component>
    <component doc:name="Create B">
        <spring-object bean="createB"/>
    </component>
    <component doc:name="Create C">
        <spring-object bean="createC"/>
    </component>
    <component doc:name="Create D">
        <spring-object bean="createD"/>
    </component>
</flow>
<spring:beans>
    <spring:import resource="classpath:spring/service.xml"/>
    <spring:bean id="createA" name="createA" class="my.app.components.CreateAService"/>
    <spring:bean id="createB" name="createB" class="my.app.components.CreateBService"/>
    <spring:bean id="createC" name="createC" class="my.app.components.CreateCService"/>
    <spring:bean id="createD" name="createD" class="my.app.components.CreateDService"/>
    ......
    ......
</spring:beans>
Now I am not sure how to write functional tests for them.
I went through the Functional Testing documentation on the Mule website, but the tests there are very simple.
Is functional testing not supposed to make actual backend updates through the DAO or service layers, or is it just an extension of unit testing where you mock the service layer?
My idea was that it could take a request and use the in-memory Mule server to pass the request and response from one component to another in a flow.
Also, kindly note there is no outbound endpoint in any of our flows, as they are mostly fire-and-forget; status updates are handled by the DB updates the components make.
Also, why do I need to create separate Mule config XML files for tests? If I am not testing the flow XML that will actually be deployed to production, what is the point? Creating separate XML configs just for tests somewhat defeats the purpose to me...
Can some expert kindly elaborate a bit and point to example tests similar to ours?
PS: the components inside Mule depend on external systems like web services and databases. For functional tests, do we need those running, or are we supposed to mock out those services and DB access?
Functional testing your Mule application is no different from testing any application that relies on external resources, like databases or JMS brokers, so you need to use the same techniques you would use with a standard application.
Usually this means stubbing the resources out with in-memory implementations, like HSQLDB for databases or a transient in-memory ActiveMQ broker for JMS. For a Mule application, this implies modularizing your configuration so that the "live" transports are defined in a separate file, which you replace at testing time with one containing the in-memory variants.
To validate that Mule had the correct interaction with the resource, you can either read the resource directly with its Java client (for example JDBC or JMS), which is good for ensuring that purely non-Mule clients have no issue reading what Mule has dispatched, or use the MuleClient to read from these resources, or create flows that consume these resources and pass messages to the <test:component>.
FYI: these different techniques are explained and demonstrated in chapter 12 of Mule in Action, second edition.
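For example, a test-only transport file might swap the ActiveMQ connectors for embedded in-memory equivalents. The connector names below match the question's flows; the broker URL is ActiveMQ's standard embedded-broker form, and the file name is just a suggestion:

```xml
<!-- test-transports.xml: replaces the live connector file at test time -->
<jms:activemq-connector name="Active_MQ_1"
        brokerURL="vm://localhost?broker.persistent=false"
        validateConnections="true"/>
<jms:activemq-connector name="Active_MQ_2"
        brokerURL="vm://localhost?broker.persistent=false"
        validateConnections="true"/>
```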
Please refer to these links:
https://blog.codecentric.de/en/2015/01/mule-esb-testing-part-13-unit-functional-testing/
https://developer.mulesoft.com/docs/display/current/Functional+Testing
As you can see, it's an ordinary JUnit test extending the FunctionalMunitSuite class.
There are two things we need to do in our test:
Prepare a MuleEvent object as input to our flow, using the provided testEvent(Object payload) method.
Execute the runFlow(String flowName, MuleEvent event) method, specifying the name of the flow to test against and the event created in the first step.

log4net logger configuration

Is it possible to set the logger from configuration? I have a web app using a framework. The framework is extensible and has the logger. Currently, when I log, the logger is set to the framework class.
Is it possible to configure my web app so that its logger is loggerForWebApp, while the logger for a console app (which uses the same framework) is loggerForConsoleApp?
In addition to the root logger (which must always be there) you can have named loggers with their own appender-refs and levels.
For instance, you could have something like this:
<root>
    ....
</root>
<logger name="loggerForWebApp">
    <level value="WARN" />
    <appender-ref ... />
</logger>
<logger name="loggerForConsoleApp">
    <level value="WARN" />
    <appender-ref ... />
</logger>
In code, you would summon these loggers by their name:
var log = LogManager.GetLogger("loggerForWebApp");
Most definitely, and this is one of the great things about log4net: it can log to a wide range of outputs.
For examples of the appenders, see here. Probably the most common one in use is the RollingFileAppender, but the ConsoleAppender can be very handy for console applications. Alternatively the TraceAppender can write out to the standard .NET trace listeners for further redirection (or display in the debug Output window in Visual Studio).
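As a rough sketch (the file path, size limits, and pattern are placeholders), a RollingFileAppender wired to one of the named loggers from the earlier answer might look like:

```xml
<appender name="RollingFile" type="log4net.Appender.RollingFileAppender">
    <file value="logs\app.log" />
    <appendToFile value="true" />
    <rollingStyle value="Size" />
    <maxSizeRollBackups value="5" />
    <maximumFileSize value="10MB" />
    <layout type="log4net.Layout.PatternLayout">
        <conversionPattern value="%date [%thread] %-5level %logger - %message%newline" />
    </layout>
</appender>
<logger name="loggerForWebApp">
    <level value="INFO" />
    <appender-ref ref="RollingFile" />
</logger>
```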
To create your own, implement IAppender.