Mule - Catch Exception strategy isn't being called for specified exception

I'm trying to implement an error handling strategy for my Mule application, which writes data to a database. Here is an example of the exception that is thrown when trying to insert a record that has a duplicate identifier:
ERROR 2014-03-17 16:02:15,990 [[processes].inputConnector.receiver.01] org.mule.exception.DefaultMessagingExceptionStrategy:
********************************************************************************
Message : Failed to route event via endpoint: DefaultOutboundEndpoint{endpointUri=jdbc://insertRecord, connector=JdbcConnector
{
name=dbQueries
lifecycle=start
this=4c6f44a0
numberOfConcurrentTransactedReceivers=4
createMultipleTransactedReceivers=false
connected=true
supportedProtocols=[jdbc]
serviceOverrides=<none>
}
, name='endpoint.jdbc.insertRecord', mep=ONE_WAY, properties={queryTimeout=-1}, transactionConfig=Transaction{factory=null, action=INDIFFERENT, timeout=0}, deleteUnacceptedMessages=false, initialState=started, responseTimeout=10000, endpointEncoding=UTF-8, disableTransportTransformer=false}. Message payload is of type: HashMap
Code : MULE_ERROR--2
--------------------------------------------------------------------------------
Exception stack is:
1. The statement was aborted because it would have caused a duplicate key value in a unique or primary key constraint or unique index identified by 'SQL140317160209540' defined on 'TESTTABLE'. (org.apache.derby.iapi.error.StandardException)
org.apache.derby.iapi.error.StandardException:-1 (null)
2. The statement was aborted because it would have caused a duplicate key value in a unique or primary key constraint or unique index identified by 'SQL140317160209540' defined on 'TESTTABLE'.(SQL Code: 30000, SQL State: + 23505) (org.apache.derby.impl.jdbc.EmbedSQLException)
org.apache.derby.impl.jdbc.SQLExceptionFactory:-1 (null)
3. The statement was aborted because it would have caused a duplicate key value in a unique or primary key constraint or unique index identified by 'SQL140317160209540' defined on 'TESTTABLE'.(SQL Code: 30000, SQL State: + 23505) (java.sql.SQLIntegrityConstraintViolationException)
org.apache.derby.impl.jdbc.SQLExceptionFactory40:-1 (null)
4. The statement was aborted because it would have caused a duplicate key value in a unique or primary key constraint or unique index identified by 'SQL140317160209540' defined on 'TESTTABLE'. Query: INSERT INTO TestTable (Id, Quantity) VALUES (?, ?) Parameters: [261000153085, 7](SQL Code: 30000, SQL State: + 23505) (java.sql.SQLException)
org.apache.commons.dbutils.QueryRunner:540 (null)
5. Failed to route event via endpoint: DefaultOutboundEndpoint{endpointUri=jdbc://insertRecord, connector=JdbcConnector
{
name=dbQueries
lifecycle=start
this=4c6f44a0
numberOfConcurrentTransactedReceivers=4
createMultipleTransactedReceivers=false
connected=true
supportedProtocols=[jdbc]
serviceOverrides=<none>
}
, name='endpoint.jdbc.insertRecord', mep=ONE_WAY, properties={queryTimeout=-1}, transactionConfig=Transaction{factory=null, action=INDIFFERENT, timeout=0}, deleteUnacceptedMessages=false, initialState=started, responseTimeout=10000, endpointEncoding=UTF-8, disableTransportTransformer=false}. Message payload is of type: HashMap (org.mule.api.transport.DispatchException)
org.mule.transport.AbstractMessageDispatcher:109 (http://www.mulesoft.org/docs/site/current3/apidocs/org/mule/api/transport/DispatchException.html)
--------------------------------------------------------------------------------
Root Exception stack trace:
ERROR 23505: The statement was aborted because it would have caused a duplicate key value in a unique or primary key constraint or unique index identified by 'SQL140317160209540' defined on 'TESTTABLE'.
at org.apache.derby.iapi.error.StandardException.newException(Unknown Source)
at org.apache.derby.impl.sql.execute.IndexChanger.insertAndCheckDups(Unknown Source)
at org.apache.derby.impl.sql.execute.IndexChanger.doInsert(Unknown Source)
+ 3 more (set debug level logging or '-Dmule.verbose.exceptions=true' for everything)
********************************************************************************
I've got a Choice Exception strategy defined to deal with different exceptions and have used the following to handle the case outlined:
<catch-exception-strategy when="#[exception.causeMatches(org.apache.derby.*)]" doc:name="Catch Exception Strategy">
    <logger level="ERROR" message="Database error." doc:name="Logger"/>
</catch-exception-strategy>
However, the exception always gets routed to the default strategy and not to the catch strategy defined above. I'm assuming this is an issue with my 'when' expression; I'm just not sure how to resolve it. Any help would be appreciated. Thanks.

causeMatches is a method that takes a regular expression as its argument.
Try the following one and it should work:
<catch-exception-strategy when="#[exception.causeMatches('org.?apache.?derby.?.*')]" doc:name="Catch Exception Strategy">
    <logger level="ERROR" message="Database error." doc:name="Logger"/>
</catch-exception-strategy>
Hope this helps.

You need a String argument for the causeMatches method:
http://www.mulesoft.org/docs/site/current3/apidocs/org/mule/api/MessagingException.html
Try using quotes: exception.causeMatches('org.apache.derby.*')
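Putting that together, the corrected strategy from the question would look like this (a sketch; only the quoting of the when expression changes, the rest is as in the question):
<catch-exception-strategy when="#[exception.causeMatches('org.apache.derby.*')]" doc:name="Catch Exception Strategy">
    <logger level="ERROR" message="Database error." doc:name="Logger"/>
</catch-exception-strategy>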

If the error is
java.sql.SQLIntegrityConstraintViolationException: ORA-00001: unique constraint violated
this can be handled by checking for a distinctive keyword in the error message, as shown:
catch (SQLException e) {
    if (e.getMessage().contains("unique constraint")) {
        System.out.println("PID Repeated. Enter a different PID");
    }
    return -1;
}
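A less brittle alternative (a sketch, not part of the original answer; method shown without its enclosing class, using the table from the question's log) is to key off the vendor-neutral SQLState instead of the message text:
// Returns the update count, or -1 on an integrity-constraint failure.
static int insertRecord(java.sql.Connection conn, long id, int qty) {
    try (java.sql.PreparedStatement ps =
            conn.prepareStatement("INSERT INTO TestTable (Id, Quantity) VALUES (?, ?)")) {
        ps.setLong(1, id);
        ps.setInt(2, qty);
        return ps.executeUpdate();
    } catch (java.sql.SQLException e) {
        // SQLState class 23 = integrity-constraint violation
        // (Derby reports 23505, Oracle 23000 for ORA-00001); checking it
        // is more portable than matching vendor-specific message text.
        if (e.getSQLState() != null && e.getSQLState().startsWith("23")) {
            System.out.println("PID Repeated. Enter a different PID");
        }
        return -1;
    }
}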

Related

MySQL Sink Connector - JsonConverter - DataException: Unknown schema type: null

I have created a MySQL sink connector and run it successfully; I see the logs and got a 200 response, but the sink connector is not able to push the data into the MySQL db (the data is available in the topic):
{
  "name": "mysql-sink-connector",
  "config": {
    "tasks.max": "2",
    "batch.size": "1000",
    "batch.max.rows": "1000",
    "poll.interval.ms": "500",
    "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
    "connection.url": "jdbc:mysql://mysql.azure.com:3306/db_test_dev",
    "table.name.format": "tbl_clients_merchants",
    "topics": "createorder",
    "connection.user": "user",
    "connection.password": "password",
    "auto.create": "true",
    "auto.evolve": "true",
    "value.converter": "org.apache.kafka.connect.json.JsonConverter",
    "value.converter.schemas.enable": "true",
    "key.converter": "org.apache.kafka.connect.json.JsonConverter",
    "key.converter.schemas.enable": "true"
  }
}
I'm getting this error:
[2021-07-29 09:41:03,157] ERROR WorkerSinkTask{id=jdbc-mysql-sink-connector-1} Task threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask:177)
org.apache.kafka.connect.errors.ConnectException: Tolerance exceeded in error handler
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:178)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.kafka.connect.errors.DataException: Converting byte[] to Kafka Connect data failed due to serialization error:
at org.apache.kafka.connect.json.JsonConverter.toConnectData(JsonConverter.java:344)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.kafka.connect.errors.DataException: Unknown schema type: null
at org.apache.kafka.connect.json.JsonConverter.convertToConnect(JsonConverter.java:743)
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:162)
... 13 more
[2021-07-29 13:26:11,347] ERROR WorkerSinkTask{id=mysql-sink-connector-0} Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask:178)
As the error suggests, you have null tombstone records in the topic, which cannot be mapped to any MySQL columns since there are no field names.
Option 1) Fix your producer to not send these messages
Option 2) Filter them in Connect
For example,
"transforms": "Filter",
"transforms.Filter.type": "org.apache.kafka.connect.transforms.Filter",
"transforms.Filter.predicate" : "DropNull",
"predicates" : "DropNull",
"predicates.DropNull.type": "org.apache.kafka.connect.transforms.predicates.RecordIsTombstone"

JHipster with MySQL back-end fails on first start with "table does not comply with the requirements by an external plugin"

I'm getting the following error when I try to start a new JHipster app using MySQL as the database:
2019-07-30 09:55:40.583 ERROR 35895 --- [-service-task-1] i.g.j.c.liquibase.AsyncSpringLiquibase : Liquibase could not start correctly, your database is NOT ready: The table does not comply with the requirements by an external plugin. [Failed SQL: INSERT INTO compose.DATABASECHANGELOG (ID, AUTHOR, FILENAME, DATEEXECUTED, ORDEREXECUTED, MD5SUM, `DESCRIPTION`, COMMENTS, EXECTYPE, CONTEXTS, LABELS, LIQUIBASE, DEPLOYMENT_ID) VALUES ('00000000000001', 'jhipster', 'config/liquibase/changelog/00000000000000_initial_schema.xml', NOW(), 1, '8:c5bfc567913b118109a43e981cd02883', 'createTable tableName=jhi_user; createTable tableName=jhi_authority; createTable tableName=jhi_user_authority; addPrimaryKey tableName=jhi_user_authority; addForeignKeyConstraint baseTableName=jhi_user_authority, constraintName=fk_authority_name, ...', '', 'EXECUTED', NULL, NULL, '3.6.3', '4491329886')]
liquibase.exception.DatabaseException: The table does not comply with the requirements by an external plugin. [Failed SQL: INSERT INTO compose.DATABASECHANGELOG (ID, AUTHOR, FILENAME, DATEEXECUTED, ORDEREXECUTED, MD5SUM, `DESCRIPTION`, COMMENTS, EXECTYPE, CONTEXTS, LABELS, LIQUIBASE, DEPLOYMENT_ID) VALUES ('00000000000001', 'jhipster', 'config/liquibase/changelog/00000000000000_initial_schema.xml', NOW(), 1, '8:c5bfc567913b118109a43e981cd02883', 'createTable tableName=jhi_user; createTable tableName=jhi_authority; createTable tableName=jhi_user_authority; addPrimaryKey tableName=jhi_user_authority; addForeignKeyConstraint baseTableName=jhi_user_authority, constraintName=fk_authority_name, ...', '', 'EXECUTED', NULL, NULL, '3.6.3', '4491329886')]
at liquibase.executor.jvm.JdbcExecutor$ExecuteStatementCallback.doInStatement(JdbcExecutor.java:356)
at liquibase.executor.jvm.JdbcExecutor.execute(JdbcExecutor.java:57)
at liquibase.executor.jvm.JdbcExecutor.execute(JdbcExecutor.java:125)
at liquibase.executor.jvm.JdbcExecutor.execute(JdbcExecutor.java:109)
at liquibase.changelog.StandardChangeLogHistoryService.setExecType(StandardChangeLogHistoryService.java:384)
at liquibase.database.AbstractJdbcDatabase.markChangeSetExecStatus(AbstractJdbcDatabase.java:1086)
at liquibase.changelog.visitor.UpdateVisitor.visit(UpdateVisitor.java:64)
at liquibase.changelog.ChangeLogIterator.run(ChangeLogIterator.java:83)
at liquibase.Liquibase.update(Liquibase.java:202)
at liquibase.Liquibase.update(Liquibase.java:179)
at liquibase.integration.spring.SpringLiquibase.performUpdate(SpringLiquibase.java:353)
at liquibase.integration.spring.SpringLiquibase.afterPropertiesSet(SpringLiquibase.java:305)
at io.github.jhipster.config.liquibase.AsyncSpringLiquibase.initDb(AsyncSpringLiquibase.java:119)
at io.github.jhipster.config.liquibase.AsyncSpringLiquibase.lambda$afterPropertiesSet$0(AsyncSpringLiquibase.java:94)
at io.github.jhipster.async.ExceptionHandlingAsyncTaskExecutor.lambda$createWrappedRunnable$1(ExceptionHandlingAsyncTaskExecutor.java:78)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.sql.SQLException: The table does not comply with the requirements by an external plugin.
at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:129)
at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:97)
at com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:122)
at com.mysql.cj.jdbc.StatementImpl.executeInternal(StatementImpl.java:782)
at com.mysql.cj.jdbc.StatementImpl.execute(StatementImpl.java:666)
at com.zaxxer.hikari.pool.ProxyStatement.execute(ProxyStatement.java:95)
at com.zaxxer.hikari.pool.HikariProxyStatement.execute(HikariProxyStatement.java)
at liquibase.executor.jvm.JdbcExecutor$ExecuteStatementCallback.doInStatement(JdbcExecutor.java:352)
... 17 common frames omitted
It looks like it's failing on a Liquibase step. Any ideas why this is happening?
Liquibase does not create the DATABASECHANGELOG table with the required primary key. Try stopping JHipster, running the following SQL commands against your MySQL database, and then restarting JHipster:
ALTER TABLE DATABASECHANGELOG ADD PRIMARY KEY (ID);
DROP TABLE jhi_persistent_audit_evt_data;
DROP TABLE jhi_persistent_audit_event;
DROP TABLE jhi_user_authority;
DROP TABLE jhi_authority;
DROP TABLE jhi_user;
That should resolve the issue.
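As a quick sanity check before restarting (assuming the default DATABASECHANGELOG table name), you can confirm the primary key now exists:
SHOW KEYS FROM DATABASECHANGELOG WHERE Key_name = 'PRIMARY';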

Mule JSON validation Schema component: avoid error logs

I'm using the JSON Validation Schema component and I have noticed that it logs all the errors to the console.
I would like to avoid this error message being displayed in the console.
Even though I have chosen a special exception strategy (a catch exception strategy for the JSONValidation exception, with custom logic implemented and no loggers at all in it), I still see the following error message:
org.mule.api.MessagingException: Json content is not compliant with schema
com.github.fge.jsonschema.core.report.ListProcessingReport: failure
--- BEGIN MESSAGES ---
error: string "blah" is too long (length: 4, maximum allowed: 3)
level: "error"
schema: {"loadingURI":"file:/...}
instance: {"pointer":"/blah_blah_code"}
domain: "validation"
keyword: "maxLength"
value: "blah"
found: 4
maxLength: 3
--- END MESSAGES ---
How could I make Mule omit this error message? I don't want these errors to be logged to the console.
You can set the logException attribute of the catch-exception-strategy element to false, forcing Mule not to log errors to the console:
<catch-exception-strategy logException="false">
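In context, the attribute goes on the strategy that already handles your validation failures; a minimal sketch (the when expression here is illustrative — keep whatever condition currently routes the JSONValidation exception):
<choice-exception-strategy doc:name="Choice Exception Strategy">
    <catch-exception-strategy logException="false"
                              when="#[exception.causeMatches('org.mule.module.json.validation.*')]"
                              doc:name="Catch Exception Strategy">
        <!-- custom handling logic, no loggers -->
    </catch-exception-strategy>
</choice-exception-strategy>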
Alternatively, turn the relevant logger off in log4j2.xml:
<AsyncLogger name="org.mule.module.apikit.validation.RestJsonSchemaValidator" level="OFF"/>
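For reference, that entry belongs inside the <Loggers> section of log4j2.xml; a trimmed sketch (the root logger level and appender name are illustrative):
<Loggers>
    <AsyncLogger name="org.mule.module.apikit.validation.RestJsonSchemaValidator" level="OFF"/>
    <AsyncRoot level="INFO">
        <AppenderRef ref="file"/>
    </AsyncRoot>
</Loggers>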

How to configure an rsyslog template for exception errors for remote logging?

I'm using rsyslog to ship logs to a remote Logstash server, and Logstash on that server expects input data in JSON format. How can I configure an rsyslog template to JSON-ify an exception? For example, I want to send the following exception as a single message:
2017-02-08 21:59:51,727 ERROR :localhost-startStop-1 [jdbc.sqlonly] 1. PreparedStatement.executeBatch() batching 1 statements:
1: insert into CR_CLUSTER_REGISTRY (Cluster_Name, Url, Update_Dttm, Node_Id) values ('customer', 'rmi://ip-10-53-123.123.eu-west-1.compute.internal:1199/2', '02/08/2017 21:59:51.639', '2')
java.sql.BatchUpdateException: [Teradata JDBC Driver] [TeraJDBC 15.00.00.35] [Error 1338] [SQLState HY000] A failure occurred while executing a PreparedStatement batch request. Details of the failure can be found in the exception chain that is accessible with getNextException.
at com.teradata.jdbc.jdbc_4.util.ErrorFactory.makeBatchUpdateException(ErrorFactory.java:148)
at com.teradata.jdbc.jdbc_4.util.ErrorFactory.makeBatchUpdateException(ErrorFactory.java:137)
at com.teradata.jdbc.jdbc_4.TDPreparedStatement.executeBatchDMLArray(TDPreparedStatement.java:272)
at com.teradata.jdbc.jdbc_4.TDPreparedStatement.executeBatch(TDPreparedStatement.java:2584)
at com.teradata.tal.qes.StatementProxy.executeBatch(StatementProxy.java:186)
at net.sf.log4jdbc.StatementSpy.executeBatch(StatementSpy.java:539)
at org.hibernate.jdbc.BatchingBatcher.doExecuteBatch(BatchingBatcher.java:70)
at org.hibernate.jdbc.AbstractBatcher.executeBatch(AbstractBatcher.java:268)
at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:266)
at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:167)
at org.hibernate.event.def.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:321)
at org.hibernate.event.def.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:50)
at org.hibernate.impl.SessionImpl.flush(SessionImpl.java:1028)
at com.teradata.tal.common.persistence.dao.SessionWrapper.flush(SessionWrapper.java:920)
at com.teradata.trm.common.persistence.dao.DaoImpl.save(DaoImpl.java:263)
at com.teradata.trm.common.service.AbstractService.save(AbstractService.java:509)
at com.teradata.trm.common.cluster.Cluster.init(Cluster.java:413)
at com.teradata.trm.common.cluster.NodeConfiguration.initialize(NodeConfiguration.java:182)
at com.teradata.trm.common.context.Initializer.onApplicationEvent(Initializer.java:73)
at com.teradata.trm.common.context.Initializer.onApplicationEvent(Initializer.java:30)
at org.springframework.context.event.SimpleApplicationEventMulticaster.multicastEvent(SimpleApplicationEventMulticaster.java:97)
at org.springframework.context.support.AbstractApplicationContext.publishEvent(AbstractApplicationContext.java:324)
at org.springframework.context.support.AbstractApplicationContext.finishRefresh(AbstractApplicationContext.java:929)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:467)
at org.springframework.web.context.ContextLoader.configureAndRefreshWebApplicationContext(ContextLoader.java:385)
at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:284)
at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:111)
at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4973)
at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5467)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:901)
at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:877)
at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:632)
at org.apache.catalina.startup.HostConfig.deployDirectory(HostConfig.java:1247)
at org.apache.catalina.startup.HostConfig$DeployDirectory.run(HostConfig.java:1898)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.sql.SQLException: [Teradata Database] [TeraJDBC 15.00.00.35] [Error -2801] [SQLState 23000] Duplicate unique prime key error in CIM_META.CR_CLUSTER_REGISTRY.
at com.teradata.jdbc.jdbc_4.util.ErrorFactory.makeDatabaseSQLException(ErrorFactory.java:301)
at com.teradata.jdbc.jdbc_4.statemachine.ReceiveInitSubState.action(ReceiveInitSubState.java:114)
at com.teradata.jdbc.jdbc_4.statemachine.StatementReceiveState.subStateMachine(StatementReceiveState.java:311)
at com.teradata.jdbc.jdbc_4.statemachine.StatementReceiveState.action(StatementReceiveState.java:200)
at com.teradata.jdbc.jdbc_4.statemachine.StatementController.runBody(StatementController.java:137)
at com.teradata.jdbc.jdbc_4.statemachine.PreparedBatchStatementController.run(PreparedBatchStatementController.java:58)
at com.teradata.jdbc.jdbc_4.TDStatement.executeStatement(TDStatement.java:387)
at com.teradata.jdbc.jdbc_4.TDPreparedStatement.executeBatchDMLArray(TDPreparedStatement.java:252)
... 37 more
I have the following rsyslog configuration file. The startmsg.regex aims to "flag" the start of a new message when it sees the "YYYY-mm-dd" date format; until it sees that format again, it should treat any following text as part of the current message.
input(type="imfile"
  File="/usr/share/tomcat/dist/logs/trm-error.log*"
  Facility="local3"
  Tag="trm-error:"
  Severity="error"
  startmsg.regex="^[[:digit:]]{4}-[[:digit:]]{2}-[[:digit:]]{2}"
  escapeLF="on"
)
if $programname == 'trm-error:' then {
  action(
    type="omfwd"
    Target="10.53.234.234"
    Port="5514"
    Protocol="udp"
    template="textLogTemplate"
  )
  stop
}
...and the following template:
# Template for non-JSON logs; just sends the message wholesale with extra
# furniture.
template(name="textLogTemplate" type="list") {
  constant(value="{ ")
  constant(value="\"type\":\"")
  property(name="programname")
  constant(value="\", ")
  constant(value="\"host\":\"")
  property(name="hostname")
  constant(value="\", ")
  constant(value="\"timestamp\":\"")
  property(name="timestamp" dateFormat="rfc3339")
  constant(value="\", ")
  constant(value="\"#version\":\"1\", ")
  constant(value="\"customer\":\"customer\", ")
  constant(value="\"role\":\"app2\", ")
  constant(value="\"sourcefile\":\"")
  property(name="$!metadata!filename")
  constant(value="\", ")
  constant(value="\"message\":\"")
  property(name="rawmsg" format="json")
  constant(value="\"}\n")
}
However, Logstash complains about a "jsonparseerror" when it tries to parse the message as JSON. Any clues?
The rsyslog configuration files I'm using are correct: the Java exception log is indeed wrapped into valid JSON. However, Logstash is complaining about a _jsonparsefailure, so this problem is probably on the Logstash side (its Ruby code), not on the rsyslog side.
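If the payload really is valid JSON, the next thing to check is how Logstash ingests it; a minimal sketch of a UDP input that parses each datagram as JSON (port matching the omfwd action above):
input {
  udp {
    port => 5514
    codec => json
  }
}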

ESB Mule Client starting with xml-properties fails

I use Mule 3.x.
I have code that tests MuleClient connectivity.
This test is OK and works properly:
public void testHello() throws Exception
{
    MuleClient client = new MuleClient(muleContext);
    MuleMessage result = client.send("http://127.0.0.1:8080/hello", "some data", null);
    assertNotNull(result);
    assertNull(result.getExceptionPayload());
    assertFalse(result.getPayload() instanceof NullPayload);
    //TODO Assert the correct data has been received
    assertEquals("hello", result.getPayloadAsString());
}
But this test is not OK; it fails with a connection exception:
public void testHello_with_Spring() throws Exception {
    MuleClient client = new MuleClient("mule-config-test.xml");
    client.getMuleContext().start();
    //it fails here
    MuleMessage result = client.send("http://127.0.0.1:8080/hello", "some data", null);
    assertNotNull(result);
    assertNull(result.getExceptionPayload());
    assertFalse(result.getPayload() instanceof NullPayload);
    //TODO Assert the correct data has been received
    assertEquals("hello", result.getPayloadAsString());
}
The same 'mule-config-test.xml' is used in both tests, and I've checked that the path to this file is correct.
This is the error message I get in the end:
Exception stack is:
1. Address already in use (java.net.BindException) java.net.PlainSocketImpl:-2 (null)
2. Failed to bind to uri "http://127.0.0.1:8080/hello" (org.mule.transport.ConnectException)
org.mule.transport.tcp.TcpMessageReceiver:81
(http://www.mulesoft.org/docs/site/current3/apidocs/org/mule/transport/ConnectException.html)
--------------------------------------------------------------------------------
Root Exception stack trace:
java.net.BindException: Address already in use
at java.net.PlainSocketImpl.socketBind(Native Method)
at java.net.PlainSocketImpl.bind(PlainSocketImpl.java:383)
at java.net.ServerSocket.bind(ServerSocket.java:328)
+ 3 more (set debug level logging or '-Dmule.verbose.exceptions=true' for everything)
[10-05 16:33:37] ERROR HttpConnector [main]: org.mule.transport.ConnectException: Failed to bind to uri "http://127.0.0.1:8080/hello"
[10-05 16:33:37] ERROR ConnectNotifier [main]: Failed to connect/reconnect: HttpConnector
{
name=connector.http.mule.default
lifecycle=stop
this=7578a7d9
numberOfConcurrentTransactedReceivers=4
createMultipleTransactedReceivers=true
connected=false
supportedProtocols=[http]
serviceOverrides=<none>
}
. Root Exception was: Address already in use. Type: class java.net.BindException
[10-05 16:33:37] ERROR DefaultSystemExceptionStrategy [main]: Failed to bind to uri "http://127.0.0.1:8080/hello"
org.mule.api.lifecycle.LifecycleException: Cannot process event as "connector.http.mule.default" is stopped
I think the problem is in what you're not showing: testHello_with_Spring() is probably executing while Mule is already running. The second Mule instance you're starting then port-conflicts with the first one.
Are testHello() and testHello_with_Spring() in the same test suite? If yes, seeing that testHello() relies on an already-running Mule, I'd say that is the cause of the port conflict for testHello_with_Spring().
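One quick way to confirm this diagnosis is to check whether anything is already bound to the port before the second test runs; a minimal, self-contained sketch (port 8080 taken from the tests above):
import java.io.IOException;
import java.net.ServerSocket;

public class PortCheck {
    public static void main(String[] args) {
        // Try to bind the port the tests use; failure means another
        // process (e.g. an already-running Mule) is holding it.
        try (ServerSocket ignored = new ServerSocket(8080)) {
            System.out.println("Port 8080 is free");
        } catch (IOException e) {
            System.out.println("Port 8080 is already in use: " + e.getMessage());
        }
    }
}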