Trouble using two different databases in Mule JDBC flow - mysql

I have two JDBC flows in Mule 3.2.0, one using a MySQL database and the other a SQL Server database.
<mule ...
  <spring:bean id="MySQL-jdbcDataSource"
      class="org.enhydra.jdbc.standard.StandardDataSource" destroy-method="shutdown">
    <spring:property name="driverName" value="com.mysql.jdbc.Driver" />
    <spring:property name="url" value="jdbc:mysql://host:port/schema" />
  </spring:bean>

  <jdbc:connector name="MySQL-jdbcConnector"
      dataSource-ref="MySQL-jdbcDataSource" pollingFrequency="${MySQL.db.poll}"
      transactionPerMessage="false">
    <jdbc:query key="read" value="${MySQL.db.jdbc_query}" />
  </jdbc:connector>

  <flow name="MySQL-flow">
    <jdbc:inbound-endpoint queryKey="read" connector-ref="MySQL-jdbcConnector">
      <jdbc:transaction action="ALWAYS_BEGIN"/>
      <property key="receiveMessageInTransaction" value="true"/>
    </jdbc:inbound-endpoint>
    <vm:outbound-endpoint path="path" connector-ref="first-level">
      <message-properties-transformer scope="outbound">
        <add-message-property key="identifier" value="MySQL"/>
      </message-properties-transformer>
      <vm:transaction action="NONE"/>
    </vm:outbound-endpoint>
  </flow>
</mule>
And
<mule ...
  <spring:bean id="SQLServer-jdbcDataSource"
      class="org.enhydra.jdbc.standard.StandardDataSource" destroy-method="shutdown">
    <spring:property name="driverName" value="com.microsoft.sqlserver.jdbc.SQLServerDriver" />
    <spring:property name="url" value="jdbc:sqlserver://host:port;databaseName=schema" />
  </spring:bean>

  <jdbc:connector name="SQLServer-jdbcConnector"
      dataSource-ref="SQLServer-jdbcDataSource" pollingFrequency="${SQLServer.db.poll}"
      transactionPerMessage="false">
    <jdbc:query key="read" value="${SQLServer.db.jdbc_query}" />
  </jdbc:connector>

  <flow name="SQLServer-flow">
    <jdbc:inbound-endpoint queryKey="read" connector-ref="SQLServer-jdbcConnector">
      <jdbc:transaction action="ALWAYS_BEGIN"/>
      <property key="receiveMessageInTransaction" value="true"/>
    </jdbc:inbound-endpoint>
    <vm:outbound-endpoint path="${sv.vm.queue.name}" connector-ref="first-level-xform">
      <message-properties-transformer scope="outbound">
        <add-message-property key="${sv.vm.msg.identifier}" value="SQLServer"/>
      </message-properties-transformer>
      <vm:transaction action="NONE"/>
    </vm:outbound-endpoint>
  </flow>
</mule>
When I deploy either one of these flows via mule-deploy.properties, it runs fine. But when I deploy both at the same time, neither of them works. I do not get any error or exception; it simply seems that neither flow runs.
Any ideas what might be wrong? Perhaps something related to JDBC transactions?

If you have two different flows in the same application, then remove the following:
<spring:bean id="SQLServer-jdbcDataSource"
class="org.enhydra.jdbc.standard.StandardDataSource" destroy-method="shutdown">
<spring:property name="driverName"
value="com.microsoft.sqlserver.jdbc.SQLServerDriver" />
<spring:property name="url"
value="jdbc:sqlserver://host:port;databaseName=schema" />
</spring:bean>
<jdbc:connector name="SQLServer-jdbcConnector"
dataSource-ref="SQLServer-jdbcDataSource" pollingFrequency="${SQLServer.db.poll}"
transactionPerMessage="false">
<jdbc:query key="read" value="${SQLServer.db.jdbc_query}" />
</jdbc:connector>
from one of the flow files. Since the data source and connector are declared as global elements, they can be used from any flow in the application. You have declared them in both configuration files, so one copy is a duplicate and is not required in each file. Keep a single global declaration and simply reference the connector, e.g. connector-ref="SQLServer-jdbcConnector", in whichever flow needs it:
<jdbc:inbound-endpoint queryKey="read"
connector-ref="SQLServer-jdbcConnector">

Related

ESB WSO2 - Connecting to a local MySQL database ["Registry entry defined with key: com.mysql.jdbc.Driver not found"]

First of all, I want to say that I'm a beginner with WSO2 ESB.
I want to connect to a MySQL database and I get this error:
"Error DB Mediator datasource: null.Registry entry defined with key: com.mysql.jdbc.Driver not found."
"DataSource: null was not initialized for given JNDI properties"
This is my code:
<?xml version="1.0" encoding="UTF-8"?>
<api context="/api/dbtask" name="api.dbtask" xmlns="http://ws.apache.org/ns/synapse">
<resource methods="GET">
<inSequence>
<dblookup>
<connection>
<pool>
<driver key="com.mysql.jdbc.Driver"/>
<url key="jdbc:mysql://localhost:3306/utilizatori"/>
<user key="root"/>
<password key="1234"/>
</pool>
</connection>
<statement>
<sql><![CDATA[SELECT * FROM people WHERE id=1;]]></sql>
<parameter expression="//m0:getQuote/m0:request/m0:symbol" type="VARCHAR" xmlns:m0="https://services.samples"/>
<result column="nume" name="nume"/>
</statement>
</dblookup>
<!--
<log level="custom">
<property name="ID" expression="get-property('id')" />
<property name="NAME" expression="get-property('nume')" />
<property name="AGE" expression="get-property('varsta')" />
</log>
-->
<respond/>
</inSequence>
<outSequence/>
<faultSequence/>
</resource>
</api>
There is an issue with the connection pool you have added. The format above is used to read the configuration values from the registry [1], [2]. If you want to define the connection pool inline, you need to use the following format [3]:
<dblookup xmlns="http://ws.apache.org/ns/synapse">
  <connection>
    <pool>
      <driver>org.apache.derby.jdbc.ClientDriver</driver>
      <url>jdbc:derby://localhost:1527/esbdb;create=false</url>
      <user>esb</user>
      <password>esb</password>
    </pool>
  </connection>
  <statement>
    <sql><![CDATA[select * from company where name =?]]></sql>
    <parameter expression="//m0:getQuote/m0:request/m0:symbol" type="VARCHAR" xmlns:m0="http://services.samples/xsd"/>
    <result column="id" name="company_id"/>
  </statement>
</dblookup>
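Applied to the configuration in the question, the dblookup would look something like this (a sketch using the inline format, with the driver, URL, and credentials taken from the question and assumed correct; the parameter element is dropped because the SELECT has no placeholders):

<dblookup xmlns="http://ws.apache.org/ns/synapse">
  <connection>
    <pool>
      <!-- inline values instead of registry key references -->
      <driver>com.mysql.jdbc.Driver</driver>
      <url>jdbc:mysql://localhost:3306/utilizatori</url>
      <user>root</user>
      <password>1234</password>
    </pool>
  </connection>
  <statement>
    <sql><![CDATA[SELECT * FROM people WHERE id=1]]></sql>
    <result column="nume" name="nume"/>
  </statement>
</dblookup>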
[1] https://docs.wso2.com/display/EI640/Managing+the+Registry
[2] https://ei.docs.wso2.com/en/7.2.0/micro-integrator/references/mediators/dBLookup-Mediator/#connection-pool-configurations
[3] https://ei.docs.wso2.com/en/7.2.0/micro-integrator/references/mediators/dBLookup-Mediator/#example

MULE ESB results from database as JSON array

I am using Mule ESB and have a flow designed to pull all the results out of a MySQL database and place them in one JSON file. However, I am getting the results as separate JSON files, not one JSON file (which is the desired outcome).
Here is my config file:
<context:property-placeholder location="classpath:mysql.properties,classpath:smtp.properties" />

<smtp:connector name="emailConnector" fromAddress="${smtp.from}" subject="${smtp.subject}" doc:name="SMTP" validateConnections="true"/>

<jdbc-ee:connector name="jdbcConnector" dataSource-ref="MySQL_Data_Source" validateConnections="false" queryTimeout="10" pollingFrequency="10000" doc:name="JDBC">
  <jdbc-ee:query key="Users" value="SELECT * FROM test ORDER BY id ASC"/>
</jdbc-ee:connector>

<jdbc-ee:mysql-data-source name="MySQL_Data_Source" user="${mysql.user}" password="${mysql.password}" url="${mysql.url}" transactionIsolation="UNSPECIFIED" doc:name="MySQL Data Source"/>

<flow name="flows1Flow1">
  <jdbc-ee:inbound-endpoint queryKey="Users" connector-ref="jdbcConnector" doc:name="JDBC"/>
  <json:object-to-json-transformer doc:name="Object to JSON"/>
  <file:outbound-endpoint path="C:\Users\IEUser\Desktop\New folder" doc:name="File" responseTimeout="10000"/>
</flow>
What version of Mule are you using? The JDBC connector you are using is deprecated in 3.5+. I was able to get the result you are expecting using the config below in 3.7.1; the poll-wrapped db:select returns the entire result set as a single list, so the Object to JSON transformer serializes it as one JSON array:
<db:mysql-config name="MySQL_Configuration" host="localhost"
    port="" user="" database="test" pass="" doc:name="MySQL Configuration" />

<flow name="flows1Flow1">
  <poll doc:name="Poll">
    <db:select config-ref="MySQL_Configuration" doc:name="Database">
      <db:parameterized-query><![CDATA[SELECT * FROM test.people]]></db:parameterized-query>
    </db:select>
  </poll>
  <json:object-to-json-transformer doc:name="Object to JSON" />
  <logger level="ERROR" message="#[payload]" doc:name="Logger" />
  <file:outbound-endpoint path="./people" doc:name="File" responseTimeout="10000" />
</flow>
HTH

Execute action after all split messages have been processed

I have a Mule 3.3.0 flow which splits a file into records. I need to execute an action (stored procedure) AFTER ALL records have finished processing.
The problem is that sometimes the action gets executed before all records have been processed by Mule. I think this is because Mule processes things in parallel, which is great, but it means the final action sometimes gets called too early.
If I set the flow to synchronous, things appear to work, but then I'm not taking advantage of parallel execution.
I think I could also use a Foreach scope (I haven't tried), but I guess that work would still not be parallelized.
Is there a way to "wait" until all records finish processing?
I'm attaching a very simple flow which exhibits this behaviour. If you run it you will see that the loggers don't print stuff in order. Actually, the "DONE" message gets logged before the rest.
The flow processes a simple CSV file until it matches a field with the value "end". There is a choice component which logs "DONE" when such a field is found. The rest of the fields simply get logged.
Any help will be greatly appreciated.
Flow XML:
<?xml version="1.0" encoding="UTF-8"?>
<mule xmlns:scripting="http://www.mulesoft.org/schema/mule/scripting"
xmlns:vm="http://www.mulesoft.org/schema/mule/vm" xmlns:file="http://www.mulesoft.org/schema/mule/file"
xmlns="http://www.mulesoft.org/schema/mule/core" xmlns:doc="http://www.mulesoft.org/schema/mule/documentation"
xmlns:spring="http://www.springframework.org/schema/beans" version="CE-3.3.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="
http://www.mulesoft.org/schema/mule/vm http://www.mulesoft.org/schema/mule/vm/current/mule-vm.xsd
http://www.mulesoft.org/schema/mule/file http://www.mulesoft.org/schema/mule/file/current/mule-file.xsd
http://www.mulesoft.org/schema/mule/scripting http://www.mulesoft.org/schema/mule/scripting/current/mule-scripting.xsd
http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-current.xsd
http://www.mulesoft.org/schema/mule/core http://www.mulesoft.org/schema/mule/core/current/mule.xsd ">
<file:connector name="inputFileConnector" autoDelete="true"
streaming="false" validateConnections="true" doc:name="File" fileAge="60000"
readFromDirectory="#{systemProperties['user.home']}" />
<flow name="flow1" doc:name="flow1" processingStrategy="synchronous">
<file:inbound-endpoint path="#{systemProperties['user.home']}"
responseTimeout="10000" doc:name="Input File" fileAge="100"
connector-ref="inputFileConnector">
<file:filename-regex-filter pattern="input.csv"
caseSensitive="false" />
</file:inbound-endpoint>
<byte-array-to-string-transformer
doc:name="Byte Array to String" />
<scripting:component doc:name="Groovy">
<scripting:script engine="Groovy">
<scripting:text><![CDATA[return payload.split('\n');]]></scripting:text>
</scripting:script>
</scripting:component>
<collection-splitter doc:name="Collection Splitter" />
<choice doc:name="Choice">
<when expression="#[groovy:payload != 'end']">
<processor-chain>
<logger message="." level="INFO" doc:name="Process"/>
<vm:outbound-endpoint path="toFlow2" doc:name="VM"/>
</processor-chain>
</when>
<otherwise>
<processor-chain>
<logger message="|||| DONE" level="INFO" doc:name="DONE"/>
</processor-chain>
</otherwise>
</choice>
</flow>
<flow name="flow2" doc:name="flow2" >
<vm:inbound-endpoint path="toFlow2" doc:name="VM"/>
<scripting:component doc:name="Groovy">
<scripting:script engine="Groovy">
<scripting:text><![CDATA[return payload.split(',');]]></scripting:text>
</scripting:script>
</scripting:component>
<collection-splitter doc:name="Collection Splitter" />
<logger message="|||||| #[payload]" level="INFO" doc:name="Logger"/>
<vm:outbound-endpoint path="toFlow3" doc:name="VM"/>
</flow>
One option is to use a collection-aggregator as an accumulator, blocking the final flow action until all the messages have been processed. The trick is that each collection-splitter sets up a correlation group size that is only good for either the number of lines in the file or the number of columns in a line, but we want to accumulate until all columns of all lines have been processed. The solution consists in first computing this value (i.e. the total number of expected messages) and overriding whatever correlation group size the collection-splitters had calculated with that total.
Here is how I've done this (you'll note that I replaced all Groovy snippets with more Mule-3-esque MEL expressions):
<file:connector name="inputFileConnector" autoDelete="true"
    streaming="false" validateConnections="true" fileAge="60000"
    readFromDirectory="#{systemProperties['user.home']}" />

<flow name="flow1" processingStrategy="synchronous">
  <file:inbound-endpoint path="#{systemProperties['user.home']}"
      responseTimeout="10000" fileAge="100"
      connector-ref="inputFileConnector">
    <file:filename-regex-filter pattern="input.csv" caseSensitive="false" />
  </file:inbound-endpoint>
  <byte-array-to-string-transformer />
  <set-session-variable variableName="expectedMessageCount"
      value="#[org.mule.util.StringUtils.countMatches(message.payload, '\n') + org.mule.util.StringUtils.countMatches(message.payload, ',') - 1]" />
  <expression-transformer expression="#[message.payload.split('\n')]" />
  <collection-splitter enableCorrelation="IF_NOT_SET" />
  <set-property propertyName="MULE_CORRELATION_GROUP_SIZE"
      value="#[sessionVars.expectedMessageCount]" />
  <choice>
    <when expression="#[message.payload != 'end']">
      <processor-chain>
        <logger message="." level="INFO" />
        <vm:outbound-endpoint path="toFlow2" />
      </processor-chain>
    </when>
    <otherwise>
      <processor-chain>
        <logger message="|||| END" level="INFO" />
      </processor-chain>
    </otherwise>
  </choice>
</flow>

<flow name="flow2">
  <vm:inbound-endpoint path="toFlow2"/>
  <expression-transformer expression="#[message.payload.split(',')]" />
  <collection-splitter />
  <set-property propertyName="MULE_CORRELATION_GROUP_SIZE"
      value="#[sessionVars.expectedMessageCount]" />
  <logger message="|||||| #[message.payload]" level="INFO"/>
  <vm:outbound-endpoint path="toFinalizer" />
  <vm:outbound-endpoint path="toFlow3" />
</flow>

<flow name="finalizer">
  <vm:inbound-endpoint path="toFinalizer" />
  <collection-aggregator />
  <logger message="|||| DONE" level="INFO" />
</flow>
NB: alternatively, if using a collection-aggregator is an issue because it uses too much memory, you could use an expression component to decrement sessionVars.expectedMessageCount and a filter to let a message hit the final message processor only when the counter is back to 0.
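For reference, that alternative could look roughly like this (an untested sketch; note that in Mule 3 session variables are copied per message as it moves through the VM transport, so for the decrement to be truly shared across messages you may need a shared store such as an object store instead):

<flow name="finalizer">
  <vm:inbound-endpoint path="toFinalizer" />
  <!-- decrement the remaining-message counter -->
  <expression-component>sessionVars.expectedMessageCount = sessionVars.expectedMessageCount - 1</expression-component>
  <!-- only the last message passes the filter -->
  <expression-filter expression="#[sessionVars.expectedMessageCount == 0]" />
  <logger message="|||| DONE" level="INFO" />
</flow>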

Best way for logging exceptions in Spring.NET with log4net or NLog

I would like to know which way of logging exceptions in Spring.NET is preferred, and why.
I found two common scenarios.
1. Use IThrowsAdvice.
I created a throws advice and handle/log the exception in the AfterThrowing method.
using System;
using System.Reflection;
using Common.Logging;   // ILog / LogManager, configured via the common/logging section below

namespace Aspects
{
    public class ExLogThrowsAdvice : IThrowsAdvice
    {
        private readonly ILog _logger;

        public ExLogThrowsAdvice()
        {
            _logger = LogManager.GetLogger("Error_file");
        }

        public void AfterThrowing(MethodInfo methodInfo,
            Object[] args, Object target, Exception exception)
        {
            _logger.Error(exception);
        }
    }
}
and use the Common.Logging API to configure, for example, log4net as the actual logging implementation:
<sectionGroup name="common">
<section name="logging" type="Common.Logging.ConfigurationSectionHandler, Common.Logging"/>
</sectionGroup>
<section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler, log4net"/>
<log4net>
<appender name="ErrorFileAppender"
type="log4net.Appender.FileAppender">
<file value="errors.txt"/>
<filter type="log4net.Filter.LevelRangeFilter">
<levelMin value="ERROR" />
<levelMax value="FATAL" />
</filter>
<layout type="log4net.Layout.PatternLayout">
<conversionPattern value="%date%newline%username%newline[%thread] %message %newline"/>
</layout>
</appender>
<root>
<level value="ERROR"/>
<appender-ref ref="ErrorFileAppender"/>
</root>
</log4net>
And last, create a proxy for the object in the business layer:
<!-- ex log advice -->
<object id="theExLogAdvice" type="Aspects.ExLogThrowsAdvice, ExceptionLogging"/>

<!-- auto proxy creator -->
<object type="Spring.Aop.Framework.AutoProxy.TypeNameAutoProxyCreator, Spring.Aop">
  <property name="TypeNames" value="Aspects*"/>
  <property name="InterceptorNames">
    <list>
      <value>theExLogAdvice</value>
    </list>
  </property>
</object>
This is the first concept. The second one I found is to use an exception aspect from the Spring.NET Aspects library.
2. Exception aspects from Spring.NET
I would like to create a handler for logging exceptions, and this handler would use the log4net logger.
Handler for the exception:
<object id="exLogHandler"
type="Spring.Aspects.Exceptions.LogExceptionHandler, Spring.Aop">
<property name="LogName" value="???"/>
<property name="LogLevel" value="Error"/>
</object>
and then use this handler in the exception handler advice:
<object id="exLogAspect"
type="Spring.Aspects.Exceptions.ExceptionHandlerAdvice, Spring.Aop">
<property name="ExceptionHandlerDictionary">
<dictionary>
<entry key="log" ref="exLogHandler"/>
</dictionary>
</property>
<property name="ExceptionHandlers">
<list>
<value>on exception name SomeException log 'Ex:' + #e</value>
</list>
</property>
I am not sure if the second way is good; maybe it is a silly idea.
Is it possible to configure the LogExceptionHandler to use the log4net logger?
I'm not sure if it's the best, but a SimpleLoggingAdvice logs exceptions for you. Furthermore, you can configure a SimpleLoggingAdvice to log execution time, method arguments and return values. Configuration looks like this (from the docs):
<object name="loggingAdvice" type="Spring.Aspects.Logging.SimpleLoggingAdvice, Spring.Aop">
<property name="LogUniqueIdentifier" value="true"/>
<property name="LogExecutionTime" value="true"/>
<property name="LogMethodArguments" value="true"/>
<property name="LogReturnValue" value="true"/>
<property name="Separator" value=";"/>
<property name="LogLevel" value="Info"/>
<property name="HideProxyTypeNames" value="true"/>
<property name="UseDynamicLogger" value="true"/>
</object>
Of course, you still have to configure a proxy factory and logging, but you know how to do that already.
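For completeness, a minimal sketch of wiring that advice through the same kind of auto proxy creator shown in the question (the TypeNames pattern "MyApp.Services*" is a placeholder for your business types, not something from the original posts):

<object name="loggingAdvice" type="Spring.Aspects.Logging.SimpleLoggingAdvice, Spring.Aop">
  <!-- properties as shown above -->
</object>
<object type="Spring.Aop.Framework.AutoProxy.TypeNameAutoProxyCreator, Spring.Aop">
  <property name="TypeNames" value="MyApp.Services*"/>
  <property name="InterceptorNames">
    <list>
      <value>loggingAdvice</value>
    </list>
  </property>
</object>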

How do I determine what target is calling my current target in NAnt?

I am modifying a NAnt build script to run some unit tests. I have different targets for locally run tests and for tests to be run on TeamCity.
<target name="run-unit-tests">
<property name="test.executable" value="tools\nunit\nunit-console.exe"/>
<call target="do-unit-tests"/>
</target>
<target name="run-unit-tests-teamcity">
<property name="test.executable" value="${teamcity.dotnet.nunitlauncher}"/>
<call target="do-unit-tests"/>
</target>
In the target do-unit-tests I set which test assemblies are run by setting a property, and I call NCover for a code coverage run, as follows:
<target name="do-unit-test">
<property name="test.assemblies" value="MyProject.dll">
<call target="do-unit-test-coverage" />
</target>
<target name="do-unit-test-coverage">
<ncover <!--snip -->
commandLineArgs="${test.args}"
<!--snip-->
</ncover>
</target>
As you can see, in the ncover part I need a property called "test.args". This property depends on "test.assemblies",
i.e.: <property name="test.args" value="${test.assemblies} <!--snip -->" />
test.args needs to be set up differently for the locally run unit tests and for the ones on TeamCity, so I'm trying to figure out how to arrange this.
If I put the test.args property in "do-unit-test" after the "test.assemblies" property, I can't specify one test.args when do-unit-test is called by run-unit-tests and another when it is called by run-unit-tests-teamcity.
I've been trying to do something like the following in "do-unit-test":
<if test="${target::exists('run-unit-tests-teamcity')}">
  <property name="test.args" value="..." />
</if>
but obviously that doesn't work, because the target will always exist.
What I'd like, then, is to test whether my current target do-unit-test has been called by run-unit-tests-teamcity.
Is this possible? I can't see it in the NAnt documentation. Since it's not there, either it will be a feature in the future or I'm not understanding how things are specified in a NAnt build script.
You can define properties in one target and use their values in the other. For example, you can define:
<target name="run-unit-tests">
<property name="test.executable" value="tools\nunit\nunit-console.exe"/>
<property name="test.extratestargs" value="foo,bar,baz"/>
<call target="do-unit-tests"/>
</target>
<target name="run-unit-tests-teamcity">
<property name="test.executable" value="${teamcity.dotnet.nunitlauncher}"/>
<property name="test.extrtestargs" value="foo,baz,quux,xyzzy"/>
<call target="do-unit-tests"/>
</target>
<target name="do-unit-test-coverage">
<property name="test.args" value="${test.assemblies} ${test.extratestargs} <!--snip -->" />
<ncover <!--snip -->
commandLineArgs="${test.args}" >
<!--snip-->
</ncover>
</target>
Or, if you need them to be structured completely differently, not just to have some different values, take advantage of the fact that property substitution is delayed:
<?xml version="1.0"?>
<project name="nanttest">
<target name="run-unit-tests">
<property name="test.executable" value="tools\nunit\nunit-console.exe"/>
<property name="test.args" value="foo bar -assembly ${test.assemblies} baz" dynamic="true"/>
<call target="do-unit-test"/>
</target>
<target name="run-unit-tests-teamcity">
<property name="test.executable" value="${teamcity.dotnet.nunitlauncher}"/>
<property name="test.args" value="foo,baz,quux /a:${test.assemblies} xyzzy" dynamic="true"/>
<call target="do-unit-test"/>
</target>
<target name="do-unit-test-coverage">
<echo message="test.executable = ${test.executable}, test.args = ${test.args}" />
</target>
<target name="do-unit-test">
<property name="test.assemblies" value="MyProject.dll"/>
<call target="do-unit-test-coverage" />
</target>
</project>
user@host:/tmp/anttest$ nant run-unit-tests
[...snip...]
run-unit-tests:
do-unit-test:
do-unit-test-coverage:
     [echo] test.executable = tools\nunit\nunit-console.exe, test.args = foo bar -assembly MyProject.dll baz
BUILD SUCCEEDED
Total time: 0 seconds.

user@host:/tmp/anttest$ nant -D:teamcity.dotnet.nunitlauncher=nunitlauncher run-unit-tests-teamcity
[...snip...]
run-unit-tests-teamcity:
do-unit-test:
do-unit-test-coverage:
     [echo] test.executable = nunitlauncher, test.args = foo,baz,quux /a:MyProject.dll xyzzy
BUILD SUCCEEDED
Total time: 0 seconds.
If you really, really just need to know if you're running in TeamCity, then this should help:
<target name="run-unit-tests-teamcity">
<property name="test.executable" value="${teamcity.dotnet.nunitlauncher}"/>
<property name="running.in.teamcity" value="true"/>
<call target="do-unit-tests"/>
</target>
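The consuming target can then branch on that property. A small sketch, assuming a default of false so the local path still works (the test.args values are placeholders, as in the snippets above):

<target name="do-unit-test-coverage">
  <!-- default when the property was never set by a caller -->
  <property name="running.in.teamcity" value="false" overwrite="false"/>
  <if test="${running.in.teamcity=='true'}">
    <property name="test.args" value="... ${test.assemblies}"/>
  </if>
  <if test="${running.in.teamcity=='false'}">
    <property name="test.args" value="${test.assemblies} ..."/>
  </if>
</target>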
I've managed to solve the problem. I don't know if it's the best solution, but it is a solution.
I set a property called test.type and then use if statements to determine which target the call came from:
<target name="run-unit-tests">
<property name="test.executable" value="tools\nunit\nunit-console.exe"/>
<property name="test.type" value="unit-tests" />
<call target="do-unit-tests"/>
</target>
<target name="run-unit-tests-teamcity">
<property name="test.executable" value="${teamcity.dotnet.nunitlauncher}"/>
<property name="test.type" value="unit-tests-teamcity" />
<call target="do-unit-tests"/>
</target>
<target name="do-unit-test">
<property name="test.assemblies" value="MyProject.dll">
<call target="do-unit-test-coverage" />
</target>
<target name="do-unit-test-coverage">
<if test="${test.type=='unit-tests'}">
<property name="test.args" value="${test.assemblies} ..."/>
</if>
<if test="${test.type=='unit-tests-teamcity'}">
<property name="test.args" value="... ${test.assemblies}"/>
</if>
<ncover <!--snip -->
commandLineArgs="${test.args}"
<!--snip-->
</ncover>
</target>