Throttling Messages in Mule - MySQL

I have a polling process in Mule that queries a MySQL database every 30 seconds and sends an email to a recipient. How do I limit it to sending just one email, regardless of whether the polling cycle is 30 seconds or 15 seconds? I'm open to a counter in the MySQL database as well if that's an option.
Thank you.

Write a condition that only sends an email if emailSentFlag == false.
Use a choice router to create the condition and an object store to hold the emailSentFlag value.
<flow...>
....
    <objectstore:retrieve config-ref="ObjectStore__Configuration" key="emailSentFlag" defaultValue-ref="#[false]" targetProperty="flowVars.emailSentFlag" doc:name="retrieve emailSentFlag"/>
    <choice doc:name="IsEmailSent?">
        <when expression="#[flowVars.emailSentFlag == true]">
            <logger level="INFO" doc:name="Log Email Already Sent"/>
        </when>
        <otherwise>
            <smtp:outbound-endpoint host="" user="" password="" to="" from="" subject="test" cc="" bcc="" replyTo="" responseTimeout="10000" ref="Gmail" doc:name="SMTP" connector-ref="Gmail"/>
            <objectstore:store config-ref="ObjectStore__Configuration" key="emailSentFlag" value-ref="#[true]" doc:name="store emailSentFlag"/>
        </otherwise>
    </choice>
</flow>
Also explore the TTL and persistence features of the object store; they could be useful to you.
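For example, a minimal sketch of such a configuration, assuming the Mule 3 ObjectStore connector (the partition name is a placeholder, and the exact attribute names and millisecond units should be verified against your connector version):
<!-- Sketch only: a persistent object store whose entries expire after 24 hours.
     "emailFlags" is a hypothetical partition name; entryTtl and expirationInterval
     are assumed to be in milliseconds. -->
<objectstore:config name="ObjectStore__Configuration"
    partition="emailFlags"
    persistent="true"
    entryTtl="86400000"
    expirationInterval="60000"
    maxEntries="100"
    doc:name="ObjectStore: Configuration"/>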
Cheers

You could use queueing (JMS) for your use case: adding a delay property before sending the data to JMS would delay its consumption. You could do it like this:
<message-properties-transformer overwrite="true" doc:name="Add DELAY in sending the Response to Queue">
    <add-message-property key="Content_Type" value="application/json"/>
    <add-message-property key="AMQ_SCHEDULED_DELAY" value="${aupost.retry.timeout}"/>
</message-properties-transformer>
Then add a JMS consumer to consume the message and send the email accordingly.
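A minimal sketch of the consuming side (the flow name, queue name, connector references and SMTP attributes are placeholders, not values from the original post). Note that AMQ_SCHEDULED_DELAY is an ActiveMQ feature and requires scheduler support to be enabled on the broker.
<!-- Sketch only: consumes the (delayed) message and sends the email.
     Queue name, connector-ref and SMTP settings are placeholders. -->
<flow name="emailConsumerFlow">
    <jms:inbound-endpoint queue="email_queue" connector-ref="Amq" doc:name="JMS"/>
    <smtp:outbound-endpoint host="" user="" password="" to="" from="" subject="test" connector-ref="Gmail" doc:name="SMTP"/>
</flow>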

Do you have a sample flow to show? You may be able to use collection/message aggregators, but seeing the flow first would help in making a suggestion.

You can put a VM queue at the end of the flow where the poller picks up the data from the SQL database.
In another flow, invoke the VM queue using a Mule Requester inside a poll, then set whatever frequency you want for the mail using a cron expression or a fixed-frequency scheduler. Something like the code below:
<flow name="db_poll">
    <poll doc:name="Poll">
        <db:no-operation-selected config-ref="" doc:name="Database"/>
    </poll>
    <logger message="invoking the database in the poll.. every 30 secs" level="INFO" doc:name="Logger"/>
    <vm:outbound-endpoint exchange-pattern="one-way" path="email_queue" connector-ref="VMformail" doc:name="VM"/>
</flow>

<flow name="email_poll">
    <poll doc:name="Poll">
        <fixed-frequency-scheduler frequency="1" timeUnit="DAYS"/>
        <mulerequester:request-collection resource="VMformail" timeout="100000" doc:name="Mule Requester"/>
    </poll>
    <logger message="send an email" level="INFO" doc:name="Logger"/>
    <smtp:outbound-endpoint host="localhost" responseTimeout="10000" doc:name="SMTP"/>
</flow>

Related

Azure API Manager cache-remove-value policy not removing the cache item

I am caching certain values in my Azure API Manager policy, and in certain cases I remove the item to clean up the cache and retrieve the value again from the API.
In my experience, even after I remove the value using the cache-remove-value policy, my next API call still finds the value in the cache. Here is a sample:
<cache-store-value key="Key123" value="123" duration="300" />
<cache-lookup-value key="Key123" variable-name="CacheVariable" />
<cache-remove-value key="Key123" />
<cache-lookup-value key="Key123" default-value="empty" variable-name="CacheVariable2" />
<return-response>
    <set-status code="504" reason="" />
    <set-body>@(context.Variables.GetValueOrDefault<string>("CacheVariable2"))</set-body>
</return-response>
This code returns either "empty" or "123" in the body, depending on whether the cache item with key Key123 was still found after being removed. It always returns the value of the cached item, "123".
Has anyone experienced this issue or found a way to clean up the cache?
If I check continuously in a retry, I can see that the item is sometimes cleaned up after 2 seconds, sometimes after 1 minute. I think the delete call is asynchronous or queued in the background, so we can't really be sure whether the item has been cleaned up without checking continuously.
UPDATE:
As a workaround for now, instead of deleting the item, I update it with a 1-second duration and a dirty value.
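That workaround would look roughly like this (the key and the dirty value are just examples):
<!-- Sketch: overwrite the entry with a throwaway value that expires after 1 second,
     instead of relying on cache-remove-value. -->
<cache-store-value key="Key123" value="dirty" duration="1" />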
This happens because the cache removal request is asynchronous with respect to the request processing pipeline: APIM does not wait for the cache item to be removed before continuing with the request, so it is possible to still retrieve the item right after the removal request, since the removal has not been applied yet.
Updated based on your scenario: why not try something like this:
<policies>
    <inbound>
        <base />
    </inbound>
    <backend>
        <retry condition="@(context.Response.StatusCode == 200)" count="10" interval="1">
            <choose>
                <when condition="@(context.Variables.GetValueOrDefault("calledOnce", false))">
                    <send-request mode="new" response-variable-name="response">
                        <set-url>https://EXTERNAL-SERVICE-URL</set-url>
                        <set-method>GET</set-method>
                    </send-request>
                    <cache-store-value key="externalResponse" value="EXPRESSION-TO-EXTRACT-DATA" duration="300" />
                    <!-- ... or even store the whole response ... -->
                    <cache-store-value key="externalResponse" value="@((IResponse)context.Variables["response"])" duration="300" />
                </when>
                <otherwise>
                    <cache-lookup-value key="externalResponse" variable-name="externalResponse" />
                    <choose>
                        <when condition="@(context.Variables.ContainsKey("externalResponse"))">
                            <!-- Do something with the cached data -->
                        </when>
                        <otherwise>
                            <!-- Call the external service and store in cache again -->
                        </otherwise>
                    </choose>
                    <set-variable name="calledOnce" value="@(true)" />
                </otherwise>
            </choose>
            <forward-request />
        </retry>
    </backend>
    <outbound>
        <base />
    </outbound>
</policies>

Logger level setting for entire package hierarchy?

Trying this with logback suggests you can't set a level for an entire hierarchy. In other words, you can't specify something like:
<logger name="com.company.app.module.**" level="ERROR"/>
but instead you must specify:
<logger name="com.company.app.module.a" level="ERROR"/>
<logger name="com.company.app.module.a.b" level="ERROR"/>
<logger name="com.company.app.module.a.b.c" level="ERROR"/>
Is there no shorthand for an entire subpackage hierarchy?
I suggest that you read the manual and have a look at the example configurations. You specify a hierarchy without wildcard characters. Example:
<logger name="com.company.app.module" level="ERROR"/>
<logger name="com.company.app.module.a" level="DEBUG"/>
<logger name="com.company.app.module.a.b" level="INFO"/>
The most specific logger wins. The effective level for com.company.app.module.a.b and below will be INFO. For com.company.app.module.a and below it will be DEBUG, except for com.company.app.module.a.b and below. And so on.
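For context, a minimal, self-contained logback.xml illustrating this (the console appender and the root level are assumptions added just to make the example complete):
<configuration>
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>

    <!-- Levels are inherited downwards; the most specific logger wins. -->
    <logger name="com.company.app.module" level="ERROR"/>
    <logger name="com.company.app.module.a" level="DEBUG"/>
    <logger name="com.company.app.module.a.b" level="INFO"/>

    <root level="WARN">
        <appender-ref ref="STDOUT"/>
    </root>
</configuration>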

Same Appender log into 2 different files with Log4J2

I would like to define a single Appender in my log4j2.xml configuration file and, using the magic of Log4j2's property substitution, be able to log into two different files.
I imagine the Appender would look something like:
<RollingFile name="Rolling-${filename}" fileName="${filename}" filePattern="${filename}.%i.log.gz">
    <PatternLayout>
        <pattern>%d %p %c{1.} [%t] %m%n</pattern>
    </PatternLayout>
    <SizeBasedTriggeringPolicy size="500" />
</RollingFile>
Is there a way for a Logger to use this appender and to pass the filename property?
Or is there a way to pass it when we fetch the Logger with LogManager.getLogger?
Note that those loggers may or may not be in the same thread; it has to support both cases, so I don't think it's possible to use ThreadContext or system properties.
The closest thing I can think of is RoutingAppender. RoutingAppender allows the log file to be dynamically selected based on values in some lookup. A popular built-in lookup is the ThreadContext map (see the example on the FAQ page), but you can create a custom lookup. Example code:
ThreadContext.put("ROUTINGKEY", "foo");
logger.debug("This message gets sent to route foo");
// Do some work, including logging by various loggers.
// All logging done in this thread is sent to foo.
// Other threads can also log to foo at the same time by setting ROUTINGKEY=foo.
logger.debug("... and we are done");
ThreadContext.remove("ROUTINGKEY"); // this thread no longer logs to foo
Example config that creates log files on the fly:
<Routing name="Routing">
    <Routes pattern="$${ctx:ROUTINGKEY}">
        <!-- This route is chosen if ThreadContext has a value for ROUTINGKEY.
             The value dynamically determines the name of the log file. -->
        <Route>
            <RollingFile name="Rolling-${ctx:ROUTINGKEY}" fileName="logs/other-${ctx:ROUTINGKEY}.log"
                         filePattern="./logs/${date:yyyy-MM}/${ctx:ROUTINGKEY}-other-%d{yyyy-MM-dd}-%i.log.gz">
                <PatternLayout>
                    <pattern>%d{ISO8601} [%t] %p %c{3} - %m%n</pattern>
                </PatternLayout>
                <Policies>
                    <TimeBasedTriggeringPolicy interval="6" modulate="true" />
                    <SizeBasedTriggeringPolicy size="10 MB" />
                </Policies>
            </RollingFile>
        </Route>
        <!-- This route is chosen if ThreadContext has no value for key ROUTINGKEY. -->
        <Route key="$${ctx:ROUTINGKEY}">
            <RollingFile name="Rolling-default" fileName="logs/default.log"
                         filePattern="./logs/${date:yyyy-MM}/default-%d{yyyy-MM-dd}-%i.log.gz">
                <PatternLayout>
                    <pattern>%d{ISO8601} [%t] %p %c{3} - %m%n</pattern>
                </PatternLayout>
                <Policies>
                    <TimeBasedTriggeringPolicy interval="6" modulate="true" />
                    <SizeBasedTriggeringPolicy size="10 MB" />
                </Policies>
            </RollingFile>
        </Route>
    </Routes>
</Routing>
An alternative is to configure multiple loggers, each pointing to a separate appender (with additivity="false"). This allows your application to control the destination file by obtaining a logger by its name. However, in that case you would need to configure separate appenders, so it does not fulfil your requirement; I mention it for completeness.
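A minimal sketch of that alternative (the logger names and appender references are examples; the referenced file appenders would be defined separately in the Appenders section):
<Loggers>
    <!-- Each named logger writes to its own appender; additivity="false" stops
         the events from also reaching the root logger's appenders. -->
    <Logger name="com.company.reports" level="info" additivity="false">
        <AppenderRef ref="ReportsFile"/>
    </Logger>
    <Logger name="com.company.audit" level="info" additivity="false">
        <AppenderRef ref="AuditFile"/>
    </Logger>
    <Root level="warn">
        <AppenderRef ref="Console"/>
    </Root>
</Loggers>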
I am using the logger name to pass arguments to the appender.
It's hacky but it works:
LogManager.getLogger("com.company.test.Test.logto.xyz.log")
A custom StrLookup is necessary to extract the filename from the logger name.

Unable to get flow variable inside exception-strategy

I am unable to get a flow variable inside the exception strategy. In the code below I am trying to use centirofilename inside default-exception-strategy, and it throws an exception.
<set-variable value="#[xpath('//soap/filename/text()').text]"
    variableName="centirofilename" doc:name="Variable" />
<default-exception-strategy>
    <rollback-transaction exception-pattern="*" /> <!-- [1] -->
    <processor-chain>
        <logger level="INFO" category="ProTSP Logger"
            message="#[centirofilename]" doc:name="Logger" />
    </processor-chain>
</default-exception-strategy>
You should be using rollback-exception-strategy instead. The exception strategy you're using is legacy and its use is not recommended.
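A minimal sketch of what that could look like (maxRedeliveryAttempts is an arbitrary example value, and how redelivery behaves depends on your inbound endpoint and transaction setup); flow variables set before the failure remain accessible via flowVars:
<!-- Sketch only: log the flow variable from inside a rollback exception strategy. -->
<rollback-exception-strategy maxRedeliveryAttempts="3" doc:name="Rollback Exception Strategy">
    <logger level="INFO" category="ProTSP Logger"
        message="#[flowVars.centirofilename]" doc:name="Logger" />
</rollback-exception-strategy>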

How to split up the URL?

I use Mule Studio.
When I call, for example, localhost:8080/?first=value1&second=value2, I would like to get two variables and their values:
first: value1
second: value2
I use a splitter to delete the first '/' like this:
[regex('/(.*?)', message.payload)]
but now I get:
?first=value1&second=value2
You can extract the parameters by using message.inboundProperties['parameter'].
For example:
<logger level="WARN" message="#[message.inboundProperties['first']]" />
<logger level="WARN" message="#[message.inboundProperties['second']]" />
You may extract the parameters in three ways:
Directly from the message's inbound properties
As a Map, by accessing the inbound property keyed with http.query.params (see the sketch after the flow below)
Using http:body-to-parameter-map-transformer to have the map conveniently placed on the payload
Consider running the following flow:
<flow name="mule-configFlow1" doc:name="mule-configFlow1">
    <http:inbound-endpoint address="http://localhost:8082/app" />
    <http:body-to-parameter-map-transformer />
    <logger level="ERROR" />
    <logger level="ERROR" message="Payload is: #[payload]" />
    <json:object-to-json-transformer />
</flow>
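For the second option, the whole query string is also available as a single map; a sketch, assuming the Mule 3.x HTTP transport, which exposes the http.query.params inbound property:
<!-- Sketch: read individual query parameters from the http.query.params map. -->
<logger level="INFO" message="#[message.inboundProperties['http.query.params']['first']]" doc:name="Logger" />
<logger level="INFO" message="#[message.inboundProperties['http.query.params']['second']]" doc:name="Logger" />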