I'm trying to log my test suite results using AspectJ. I'd like to 'inject' the result-identification code after each @Test method in my code, so I created an aspect with the following method:
@After("execution(* *(..)) && @annotation(org.junit.Test)")
public void afterTestMethod(JoinPoint joinPoint) {
    // identify test result
}
However, I can't find how to retrieve the test method's result (passed/failed/skipped).
Any suggestions?
Thanks!
A) JUnit run listener
I am assuming that you build your project with something like Maven or Gradle and will show you a Maven example for a JUnit 4 RunListener:
In module test-tools you would add
<dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>4.13</version>
    <!--
        We build something with the JUnit API, not just run a test,
        so scope 'test' is not enough here
    -->
    <scope>compile</scope>
</dependency>
to your POM and then have something like this in src/main/java/...:
package de.scrum_master.testing;

import org.junit.runner.Description;
import org.junit.runner.Result;
import org.junit.runner.notification.Failure;
import org.junit.runner.notification.RunListener;

public class ResultPrintingRunListener extends RunListener {
    @Override
    public void testRunStarted(Description description) {
        System.out.println("[RunStarted] description = " + description);
    }

    @Override
    public void testRunFinished(Result result) {
        System.out.println("[RunFinished] result = " + result);
        System.out.println(" run count = " + result.getRunCount());
        System.out.println(" failure count = " + result.getFailureCount());
        System.out.println(" assumption failure count = " + result.getAssumptionFailureCount());
        System.out.println(" ignored count = " + result.getIgnoreCount());
        System.out.println(" run time (ms) = " + result.getRunTime());
    }

    @Override
    public void testSuiteStarted(Description description) {
        System.out.println("[SuiteStarted] description = " + description);
    }

    @Override
    public void testSuiteFinished(Description description) {
        System.out.println("[SuiteFinished] description = " + description);
    }

    @Override
    public void testStarted(Description description) {
        System.out.println("[Started] description = " + description);
    }

    @Override
    public void testFinished(Description description) {
        System.out.println("[Finished] description = " + description);
    }

    @Override
    public void testFailure(Failure failure) {
        System.out.println("[Failure] failure = " + failure);
    }

    @Override
    public void testAssumptionFailure(Failure failure) {
        System.out.println("[AssumptionFailure] failure = " + failure);
    }

    @Override
    public void testIgnored(Description description) {
        System.out.println("[Ignored] description = " + description);
    }
}
Instead of logging you would put your API calls in there at the appropriate places.
In your application test module you would add the test-tools module as a dependency and then configure your Maven Surefire/Failsafe plugins like this:
<plugin>
    <artifactId>maven-surefire-plugin</artifactId>
    <version>2.22.2</version>
    <configuration>
        <properties>
            <property>
                <name>listener</name>
                <value>de.scrum_master.testing.ResultPrintingRunListener</value>
            </property>
        </properties>
    </configuration>
</plugin>
If you then run a test like this with Maven, ...
package de.scrum_master.agent.aspect;

import org.junit.Assert;
import org.junit.Ignore;
import org.junit.Test;

public class MyTest {
    @Test
    public void one() {
        Assert.assertEquals("xander", "Alexander".substring(3));
    }

    @Test
    public void two() {
        Assert.assertEquals("Alex", "Alexander".substring(3));
    }

    @Test
    public void three() {
        Assert.assertEquals(11, 1 / 0);
    }

    @Test
    @Ignore
    public void four() {
        Assert.assertNull(null);
    }
}
... Maven prints:
[INFO] -------------------------------------------------------
[INFO] T E S T S
[INFO] -------------------------------------------------------
[RunStarted] description = null
[INFO] Running de.scrum_master.agent.aspect.MyTest
[SuiteStarted] description = de.scrum_master.agent.aspect.MyTest
[Started] description = one(de.scrum_master.agent.aspect.MyTest)
[Finished] description = one(de.scrum_master.agent.aspect.MyTest)
[Started] description = two(de.scrum_master.agent.aspect.MyTest)
[Failure] failure = two(de.scrum_master.agent.aspect.MyTest): expected:<[Alex]> but was:<[xander]>
[Finished] description = two(de.scrum_master.agent.aspect.MyTest)
[Ignored] description = four(de.scrum_master.agent.aspect.MyTest)
[Started] description = three(de.scrum_master.agent.aspect.MyTest)
[Failure] failure = three(de.scrum_master.agent.aspect.MyTest): / by zero
[Finished] description = three(de.scrum_master.agent.aspect.MyTest)
[SuiteFinished] description = de.scrum_master.agent.aspect.MyTest
[ERROR] Tests run: 4, Failures: 1, Errors: 1, Skipped: 1, Time elapsed: 0.11 s <<< FAILURE! - in de.scrum_master.agent.aspect.MyTest
[ERROR] de.scrum_master.agent.aspect.MyTest.two Time elapsed: 0.007 s <<< FAILURE!
org.junit.ComparisonFailure: expected:<[Alex]> but was:<[xander]>
at de.scrum_master.agent.aspect.MyTest.two(MyTest.java:31)
[ERROR] de.scrum_master.agent.aspect.MyTest.three Time elapsed: 0.001 s <<< ERROR!
java.lang.ArithmeticException: / by zero
at de.scrum_master.agent.aspect.MyTest.three(MyTest.java:36)
[RunFinished] result = org.junit.runner.Result@79be0360
run count = 3
failure count = 2
assumption failure count = 0
ignored count = 1
run time (ms) = 0
B) JUnit test watcher rule
If you prefer something which is independent of a specific JUnit runner right inside your tests, create a base class like this using JUnit TestWatcher:
package de.scrum_master.agent.aspect;

import org.junit.Rule;
import org.junit.rules.TestWatcher;
import org.junit.runner.Description;

public class TestBase {
    @Rule(order = Integer.MIN_VALUE)
    public TestWatcher testWatcher = new TestWatcher() {
        @Override
        protected void failed(Throwable e, Description description) {
            System.out.println("[TestWatcher failed] description = " + description + ", e = " + e);
        }

        @Override
        protected void succeeded(Description description) {
            System.out.println("[TestWatcher succeeded] description = " + description);
        }
    };
}
Then make all your test classes extend TestBase, directly or indirectly. If you run the test, e.g. from an IDE, you see (shortened output, test watcher log only):
[TestWatcher succeeded] description = one(de.scrum_master.agent.aspect.MyTest)
[TestWatcher failed] description = two(de.scrum_master.agent.aspect.MyTest), e = org.junit.ComparisonFailure: expected:<[Alex]> but was:<[xander]>
[TestWatcher failed] description = three(de.scrum_master.agent.aspect.MyTest), e = java.lang.ArithmeticException: / by zero
As you can see, the test watcher receives fewer events; ignored tests, for example, are not reported.
As much as I love AspectJ (which is how I found this question), I think you should first try configuring JUnit appropriately; you will probably be happy with it. If for whatever reason (please explain, if so) you still insist on an AOP solution, please let me know.
C) Intercept JUnit tests directly
Because you insisted:
package de.scrum_master.aspectj;

import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.annotation.AfterReturning;
import org.aspectj.lang.annotation.AfterThrowing;
import org.aspectj.lang.annotation.Aspect;

@Aspect
public class TestResultInterceptor {
    @AfterReturning(value = "execution(* *(..)) && @annotation(org.junit.Test)", returning = "result")
    public void allMethods(JoinPoint joinPoint, Object result) {
        System.out.println(joinPoint + " -> PASSED");
    }

    @AfterThrowing(value = "execution(* *(..)) && @annotation(org.junit.Test)", throwing = "throwable")
    public void allMethods(JoinPoint joinPoint, Throwable throwable) {
        System.out.println(joinPoint + " -> FAILED: " + throwable);
    }
}
When running the JUnit test above in my IDE, the console log is:
execution(void de.scrum_master.testing.MyTest.one()) -> PASSED
execution(void de.scrum_master.testing.MyTest.two()) -> FAILED: org.junit.ComparisonFailure: expected:<[Alex]> but was:<[xander]>
execution(void de.scrum_master.testing.MyTest.three()) -> FAILED: java.lang.ArithmeticException: / by zero
I think you can take it from here.
I'm taking my first steps into testing, so don't be too strict.
How can I use my custom listener in JUnit 5 if I run my tests with the Apache Surefire plugin? With TestNG it is easy because I can use the @Listeners annotation or declare my listener in the .xml file with the test suite. With JUnit I can't find a working solution.
My custom listener:
import java.io.File;

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.junit.platform.engine.TestExecutionResult;
import org.junit.platform.launcher.TestExecutionListener;
import org.junit.platform.launcher.TestIdentifier;

public class OnrTestListener implements TestExecutionListener {
    private static final Logger LOG = LogManager.getRootLogger();

    @Override
    public void executionSkipped(TestIdentifier testIdentifier, String reason) {
        LOG.info("SKIPPED Test by reason: {}", reason);
    }

    @Override
    public void executionStarted(TestIdentifier testIdentifier) {
        LOG.info("Test {} successfully started.", testIdentifier.getDisplayName());
    }

    @Override
    public void executionFinished(TestIdentifier testIdentifier, TestExecutionResult testExecutionResult) {
        if (testExecutionResult.getStatus() != TestExecutionResult.Status.SUCCESSFUL) {
            String message = "Page screenshot.";
            File screenshot = ScreenshotUtils.takeScreenshot();
            ScreenshotUtils.attachToReportPortal(message, screenshot);
        }
    }
}
My additional class ScreenshotUtils
public class ScreenshotUtils {
    private static final OnrLogger LOG = new OnrLogger();

    private ScreenshotUtils() {
    }

    public static void attachToReportPortal(String message, File screenshot) {
        ReportPortal.emitLog(message, "info", new Date(), screenshot);
    }

    public static File takeScreenshot() {
        return ((TakesScreenshot) DriverFactory.getDriver()).getScreenshotAs(OutputType.FILE);
    }
}
My tests are marked with tag annotations (because I couldn't find a way to define a suite), and I run them like:
mvn clean test -Dgroups=some_tag
Here is how I tried to use my listener. First, via an annotation:
@ExtendWith(OnrTestListener.class)
@Tag("all")
public abstract class BaseUITest {
    ...
}
Second, via the Surefire plugin configuration:
<configuration>
<properties>
<property>
<name>listener</name>
<value>com.google.listeners.OnrTestListener</value>
</property>
<configurationParameters>
junit.jupiter.extensions.autodetection.enabled = true
junit.jupiter.execution.parallel.enabled = true
junit.jupiter.execution.parallel.mode.default = concurrent
junit.jupiter.execution.parallel.mode.classes.default = concurrent
junit.jupiter.execution.parallel.config.strategy = fixed
junit.jupiter.execution.parallel.config.fixed.parallelism = 5
</configurationParameters>
</properties>
</configuration>
But it doesn't work.
I would be grateful for any help. Thank you
You can use the SPI mechanism.
Add a file org.junit.platform.launcher.TestExecutionListener to the /src/main/resources/META-INF/services/ folder.
Then add the fully qualified name of your listener, {your package}.OnrTestListener, to this file.
The listener will be applied automatically.
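For illustration, reusing the listener class name from the Surefire configuration above, the provider file and its one-line content would look like this (the `#` comment line is allowed by the ServiceLoader file format):

```
# file: src/main/resources/META-INF/services/org.junit.platform.launcher.TestExecutionListener
com.google.listeners.OnrTestListener
```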
I have an application where I am trying to distribute reads and writes between two replicas. For some reason JPA is only using my read replica, not the write replica (the write replica is the primary). The result is that when I try to write data via JPA I get an 'UPDATE command denied' error because it is using the read-only datasource. I have tried creating my own annotation and using the @Transactional annotation. Both annotations are intercepted via AOP with the correct datasource, but JPA will not use it.
FYI Spring JDBC works correctly via the custom annotation. This is strictly a JPA issue. Below is some code:
My AOP class:
@Aspect
@Order(20)
@Component
public class RouteDataSourceInterceptor {
    @Around("@annotation(com.kenect.db.common.annotations.UseDataSource) && execution(* *(..))")
    public Object proceed(ProceedingJoinPoint pjp) throws Throwable {
        try {
            MethodSignature signature = (MethodSignature) pjp.getSignature();
            Method method = signature.getMethod();
            UseDataSource annotation = method.getAnnotation(UseDataSource.class);
            RoutingDataSource.setDataSourceName(annotation.value());
            return pjp.proceed();
        } finally {
            RoutingDataSource.resetDataSource();
        }
    }

    @Around("@annotation(transactional)")
    public Object proceed(ProceedingJoinPoint proceedingJoinPoint, Transactional transactional) throws Throwable {
        try {
            if (transactional.readOnly()) {
                RoutingDataSource.setDataSourceName(SQL_READ_REPLICA);
                Klogger.info("Routing database call to the read replica");
            } else {
                RoutingDataSource.setDataSourceName(SQL_MASTER_REPLICA);
                Klogger.info("Routing database call to the primary replica");
            }
            return proceedingJoinPoint.proceed();
        } finally {
            RoutingDataSource.resetDataSource();
        }
    }
}
My RoutingDataSource class:
public class RoutingDataSource extends AbstractRoutingDataSource {
    private static final ThreadLocal<String> currentDataSourceName = new ThreadLocal<>();

    public static synchronized void setDataSourceName(String name) {
        currentDataSourceName.set(name);
    }

    public static synchronized void resetDataSource() {
        currentDataSourceName.remove();
    }

    @Override
    protected Object determineCurrentLookupKey() {
        return currentDataSourceName.get();
    }
}
AbstractDynamicDataSourceConfig
public abstract class AbstractDynamicDataSourceConfig {
    private final ConfigurableEnvironment environment;

    public AbstractDynamicDataSourceConfig(ConfigurableEnvironment environment) {
        this.environment = environment;
    }

    protected DataSource getRoutingDataSource() {
        Map<String, String> props = DBConfigurationUtils.getAllPropertiesStartingWith("spring.datasource", environment);
        List<String> dataSourceNames = DBConfigurationUtils.getDataSourceNames(props.keySet());
        RoutingDataSource routingDataSource = new RoutingDataSource();
        Map<Object, Object> dataSources = new HashMap<>();
        DataSource masterDataSource = null;
        for (String name : dataSourceNames) {
            DataSource dataSource = getDataSource("spring.datasource." + name);
            dataSources.put(name, dataSource);
            if (masterDataSource == null && name.toLowerCase().contains("master")) {
                masterDataSource = dataSource;
            }
        }
        if (dataSources.isEmpty()) {
            throw new KenectInvalidParameterException("No datasources found.");
        }
        routingDataSource.setTargetDataSources(dataSources);
        if (masterDataSource == null) {
            masterDataSource = (DataSource) dataSources.get(dataSourceNames.get(0));
        }
        routingDataSource.setDefaultTargetDataSource(masterDataSource);
        return routingDataSource;
    }

    protected DataSource getDataSource(String prefix) {
        HikariConfig hikariConfig = new HikariConfig();
        hikariConfig.setJdbcUrl(environment.getProperty(prefix + ".jdbcUrl"));
        hikariConfig.setUsername(environment.getProperty(prefix + ".username"));
        hikariConfig.setPassword(environment.getProperty(prefix + ".password"));
        return new HikariDataSource(hikariConfig);
    }
}
application.yaml
spring:
  datasource:
    master:
      jdbcUrl: jdbc:mysql://my-main-replica
      username: some-user
      password: some-password
    read-replica:
      jdbcUrl: jdbc:mysql://my-read-replica
      username: another-user
      password: another-password
If I use the annotation with JdbcTemplate, it works as expected:
THIS WORKS:
// Uses the main replica as no datasource is specified
public void insertMessage(Message message) {
    String sql = "INSERT INTO message(" +
        " `conversationId`," +
        " `body`)" +
        " VALUE (" +
        " :conversationId," +
        " :body" +
        ")";
    MapSqlParameterSource parameters = new MapSqlParameterSource();
    parameters.addValue("conversationId", message.getConversationId());
    parameters.addValue("body", message.getBody());
    namedJdbcTemplate.update(sql, parameters);
}

// Uses the read replica
@UseDataSource(SQL_READ_REPLICA)
public List<Message> getMessage(long id) {
    MapSqlParameterSource parameters = new MapSqlParameterSource();
    parameters.addValue("id", id);
    String sql = "SELECT " +
        " conversationId," +
        " body" +
        " FROM message" +
        " WHERE id = :id";
    return namedJdbcTemplate.query(sql, parameters, new BeanPropertyRowMapper<>(Message.class));
}
If I use a JPA interface it always uses the read replica:
THIS FAILS:
@Repository
public interface MessageJpaRepository extends JpaRepository<MessageEntity, Long> {
    // Should use the main replica but always uses the read replica
    @Modifying
    @Query(value =
        "UPDATE clarioMessage SET" +
        " body = :body" +
        " WHERE id = :id" +
        " AND organizationId = :organizationId",
        nativeQuery = true)
    @Transactional
    int updateMessageBodyByIdAndOrganizationId(@Param("body") String body, @Param("id") long id, @Param("organizationId") long organizationId);
}
So I am just getting the error below when I try to use the main replica. I have tried using the @UseDataSource annotation, and AOP does actually intercept it, but it still uses the read replica.
java.sql.SQLSyntaxErrorException: UPDATE command denied to user 'read-replica-user'@'read replica IP' for table 'message'
What am I missing?
When you use @UseDataSource it works, which seems to rule out any issues with the implementation of the aspect.
When you use @Transactional, it uses the secondary replica regardless of your AOP being invoked. My suspicion is that the TransactionInterceptor created by Spring is invoked before your RouteDataSourceInterceptor. You can try the following:
Put a breakpoint in your AOP method as well as in org.springframework.transaction.interceptor.TransactionInterceptor.invoke to see which one is invoked first. You want your interceptor to be invoked first.
If your interceptor is not invoked first, modify it to have the highest precedence as follows:
@Aspect
@Order(Ordered.HIGHEST_PRECEDENCE)
@Component
public class RouteDataSourceInterceptor {
I still don't understand how you are telling the TransactionInterceptor to choose the DataSource you set in RouteDataSourceInterceptor. I have not used a multi-tenant setup, but I recently came across a question I helped to solve, and I can see it implements AbstractDataSourceBasedMultiTenantConnectionProviderImpl. So I hope you have something similar: Not able to switch database after defining Spring AOP
This is a follow-up question to Spring Integration Executor Channel using annotations code sample.
The system diagram is attached.
I am trying to test the box highlighted in red by posting a message into 'Common channel' and reading from REPLY_CHANNEL set in the msg.
'Common channel' is a publish subscribe channel.
REPLY_CHANNEL is a QueueChannel.
Since this is a JUnit test, I have mocked jdbcTemplate, datasource and the Impl to ignore any DB calls.
My issue is:
When I post a message onto 'Common Channel', I do not receive any message on the REPLY_CHANNEL. The JUnit test keeps waiting for a response.
What should I change to get a response on the REPLY_CHANNEL?
@RunWith(SpringRunner.class)
@SpringBootTest
@ContextConfiguration(loader = AnnotationConfigContextLoader.class) --------- 1
@ActiveProfiles("test")
public class QueuetoQueueTest {

    @Configuration
    static class ContextConfiguration { ------------------------------------- 2
        @Bean(name = "jdbcTemplate")
        public JdbcTemplate jdbcTemplate() {
            JdbcTemplate jdbcTemplateMock = Mockito.mock(JdbcTemplate.class);
            return jdbcTemplateMock;
        }

        @Bean(name = "dataSource")
        public DataSource dataSource() {
            DataSource dataSourceMock = Mockito.mock(DataSource.class);
            return dataSourceMock;
        }

        @Bean(name = "entityManager")
        public EntityManager entityManager() {
            EntityManager entityManagerMock = Mockito.mock(EntityManager.class);
            return entityManagerMock;
        }

        @Bean(name = "ResponseChannel")
        public QueueChannel getReplyQueueChannel() {
            return new QueueChannel();
        }

        // This channel serves as the 'common channel' in the diagram
        @Bean(name = "processRequestSubscribableChannel")
        public MessageChannel getPublishSubscribeChannel() {
            return new PublishSubscribeChannel();
        }
    }

    @Mock
    DBStoreDaoImpl dbStoreDaoImpl;

    @Test
    public void testDBConnectivity() {
        Assert.assertTrue(true);
    }

    @InjectMocks -------------------------------------------------------------- 3
    StoretoDBConfig storetoDBConfig = new StoretoDBConfig();

    @Autowired
    @Qualifier("ResponseChannel")
    QueueChannel ResponseChannel;

    @Autowired
    @Qualifier("processRequestSubscribableChannel")
    MessageChannel processRequestSubscribableChannel;

    @Before
    public void setUp() throws Exception {
        MockitoAnnotations.initMocks(this);
    }

    @Test
    public void outboundtoQueueTest() {
        try {
            when(dbStoreDaoImpl.storeToDB(any()))
                .thenReturn(1); ----------------------------------------------- 4
            // create message
            Message message = (Message<String>) MessageBuilder
                .withPayload("Hello")
                .setHeader(MessageHeaders.REPLY_CHANNEL, ResponseChannel)
                .build();
            // send message
            processRequestSubscribableChannel.send(message);
            System.out.println("Listening on InstructionResponseHandlertoEventProcessorQueue");
            // wait for response on reply channel
            Message<?> response = ResponseChannel.receive(); ----------------------- 5
            System.out.println("***************RECEIVED: " + response.toString());
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
1. Load 'ContextConfiguration' for JUnit so that the DB is not accessed. This is how you load a custom configuration in JUnit, as per https://spring.io/blog/2011/06/21/spring-3-1-m2-testing-with-configuration-classes-and-profiles
2. Inside the config class, we mock jdbcTemplate, dataSource and entityManager, and define the 'common channel' on which the request is posted as well as the ResponseChannel.
3. Inject the jdbcTemplate and dataSource mocks into StoretoDBConfig so that the DB is not hit.
4. Mock the DaoImpl class so that DB calls are ignored.
5. The test blocks here because there is no response on the REPLY_CHANNEL.
UPDATED CODE:
Code inside 5 (the class that reads from common channel):
@Configuration
class HandleRequestConfig {
    // Common channel - refer to the diagram
    @Autowired
    PublishSubscribeChannel processRequestSubscribableChannel;

    // Step 9 - this channel is used to send the message to the downstream system
    @Autowired
    PublishSubscribeChannel forwardToExternalSystemQueue;

    public void handle() {
        IntegrationFlows.from("processRequestSubscribableChannel") // Read from 'Common channel'
            .wireTap(flow -> flow.handle(msg -> System.out.println("Msg received on processRequestSubscribableChannel" + msg.getPayload())))
            .handle(RequestProcessor, "validateMessage") // Perform custom business logic - no logic for now, return the msg as is
            .wireTap(flow -> flow.handle(msg -> System.out.println("Msg received on RequestProcessor" + msg.getPayload())))
            .channel("forwardToExternalSystemQueue"); // Post to 'Channel to another system'
    }
}
// Code inside step 8 - 'Custom Business Logic'
@Configuration
class RequestProcessor {
    public Message<?> validateMessage(Message<?> msg) {
        return msg;
    }
}
WHAT I AM TRYING TO ACHIEVE:
I have individual junit test cases for the business logic. I am trying to test that when the request is posted into the 'common channel', the response is received on 'channel to another system'.
Why I cannot use the original ApplicationContext: Because it connects to the DB, and I do not want my JUnit to connect to the DB or use an embedded database. I want any calls to the DB to be ignored.
I have set the reply channel to 'ResponseChannel', shouldn't the 'Custom Business Logic' send its response to 'ResponseChannel'?
If I have to listen on a different channel for the response, I am willing to do so. All I want to test is whether the message I am sending on 'common channel' is received on 'channel to other system'.
UPDATE 2:
Addressing Artem's questions.
Thank you, Artem, for your suggestions.
Is 'HandleRequestConfig' included in the test configuration? We cannot directly call the handle() method. Instead, I thought that if I post on 'processRequestSubscribableChannel', the handle() method inside HandleRequestConfig would be invoked, since it listens on the same channel. Is this wrong? How do I test the HandleRequestConfig.handle() method then?
I added a wiretap at the end of each step in HandleRequestConfig (code updated). I find that none of the wiretap messages is printed. This means the message I am posting is not even reaching the input channel 'processRequestSubscribableChannel'. What am I doing wrong?
NOTE: I tried removing the 'processRequestSubscribableChannel' bean inside the Configuration (so that the actual 'processRequestSubscribableChannel' in the applicationContext is used). I am getting an unsatisfied dependency error: expected at least 1 bean with configuration PublishSubscribeChannel.
Update 3: Posted the details Artem requested.
@RunWith(SpringRunner.class)
@SpringBootTest
public class QueuetoQueueTest {
    // Step 1 - mocking jdbcTemplate, dataSource, entityManager so that it doesn't connect to the DB
    @MockBean
    @Qualifier("jdbcTemplate")
    JdbcTemplate jdbcTemplate;

    @MockBean
    @Qualifier("dataSource")
    public DataSource dataSource;

    @MockBean
    @Qualifier("entityManager")
    public EntityManager entityManager;

    @Bean(name = "ResponseChannel")
    public PublishSubscribeChannel getReplyQueueChannel() {
        return new PublishSubscribeChannel();
    }

    // Mocking the DB class
    @MockBean
    @Qualifier("dbStoreDaoImpl")
    DBStoreDaoImpl dbStoreDaoImpl;

    // Inject the mock objects created above into the flow that stores data into the DB
    @InjectMocks
    StoretoDBConfig storetoDBConfig = new StoretoDBConfig();

    // Step 2 - injecting the MessageChannel used in the actual ApplicationContext
    @Autowired
    @Qualifier("processRequestSubscribableChannel")
    MessageChannel processRequestSubscribableChannel;

    @Before
    public void setUp() throws Exception {
        MockitoAnnotations.initMocks(this);
    }

    @Test
    public void outboundtoQueueTest() {
        try {
            when(dbStoreDaoImpl.storeToDB(any()))
                .thenReturn(1);
            // create message
            Message message = (Message<?>) MessageBuilder
                .withPayload("Hello")
                .build();
            // send message - this channel is the actual channel used in the ApplicationContext
            processRequestSubscribableChannel.send(message);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
ERROR I AM GETTING: The code tries to connect to the DB and throws an error.
UPDATE 1: Code inside StoretoDBConfig
@Configuration
@EnableIntegration
public class StoretoDBConfig {
    @Autowired
    DataSource dataSource;

    /*
     * Below code is irrelevant to our current problem - including for reference.
     *
     * Storing into the DB is delegated to a separate thread.
     *
     * @Bean
     * public TaskExecutor taskExecutor() {
     *     return new SimpleAsyncTaskExecutor();
     * }
     *
     * @Bean(name="executorChannelToDB")
     * public ExecutorChannel outboundRequests() {
     *     return new ExecutorChannel(taskExecutor());
     * }
     *
     * @Bean(name = "DBFailureChannel")
     * public static MessageChannel getFailureChannel() {
     *     return new DirectChannel();
     * }
     *
     * private static final Logger logger = Logger
     *     .getLogger(InstructionResponseHandlerOutboundtoDBConfig.class);
     */

    @Bean
    public IntegrationFlow handle() {
        /*
         * Read from 'common channel' - processRequestSubscribableChannel -
         * and send to the separate thread that stores into the DB.
         */
        return IntegrationFlows
            .from("processRequestSubscribableChannel")
            .channel("executorChannelToDB").get();
    }
}
CODE THAT STORES INTO DB ON THE SEPARATE THREAD:
@Repository
public class DBStoreDaoImpl implements DBStoreDao {
    private JdbcTemplate jdbcTemplate;

    @Autowired
    public void setJdbcTemplate(DataSource dataSource) {
        this.jdbcTemplate = new JdbcTemplate(dataSource);
    }

    @Override
    @Transactional(rollbackFor = Exception.class)
    @ServiceActivator(inputChannel = "executorChannelToDB")
    public void storetoDB(Message<?> msg) throws Exception {
        String insertQuery = "Insert into DBTable(MESSAGE) VALUES(?)";
        jdbcTemplate.update(insertQuery, msg.toString());
    }
}
Please show us what is subscribed to that Common channel. Your diagram is somehow not related to what you show us; the code you demonstrate is not complete.
The real problem with the replyChannel is that something really has to send a message to it. If your flow is just one-way (send, store and nothing to return), then you indeed won't get anything on it. That's why I asked you to show those channel adapters.
The best way to observe the message journey is to turn on debug logging for the org.springframework.integration category.
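Since the test uses Spring Boot (@SpringBootTest), that category can be enabled via Boot's standard logging properties, e.g. in the test profile's application.yaml:

```
logging:
  level:
    org.springframework.integration: DEBUG
```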
Although, I see that you declare those channels as is in the ContextConfiguration, and there really are no subscribers to the getRequestChannel. Therefore nobody is going to consume your message and, of course, nobody is going to send you a reply.
Please reconsider your test class to use the real application context. Otherwise it is fully unclear what you would like to achieve if you don't really test your flow...
I need to perform load testing for my REST web service using Cucumber and Java. This REST web service accepts one input, a String called id, and returns a complex JSON object.
I wrote a .feature file with Given, When and Then steps, which are defined in Java.
The skeleton definition of the class and step definitions are below.
1) Feature (UserActivity.feature)
@functional @integration
Feature: User System Load Test

  Scenario Outline: Load test for user data summary from third party UserSystem
    Given Simultaneously multiple users are hitting XYZ services with an id=<ids>
    When I invoke third party link with above id for multiple users simultaneously
    Then I should get response code and response message for all users

    Examples:
      | ids            |
      | "pABC123rmqst" |
      | "fakXYZ321rmv" |
      | "bncMG4218jst" |
2) LoadTestStepDef.java (Feature definition)
package com.system.test.cucumber.steps;

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.junit.runners.model.InitializationError;
import com.system.test.restassured.LoadTestUtil;
import cucumber.api.java.en.Given;
import cucumber.api.java.en.Then;
import cucumber.api.java.en.When;

public class LoadTestStepDef
{
    private static Logger LOG = LogManager.getLogger( LoadTestStepDef.class );
    private String id = null;
    private LoadTestUtil service = null;

    @Given( "^Simultaneously multiple users are hitting XYZ services with an id=\"(.*?)\"$" )
    public void simultaneously_multiple_users_are_hitting_XYZ_services_with_an_id( String id )
    {
        LOG.debug( "ID {}", id );
        LOG.info( "ID {}", id );
        this.id = id;
    }

    @When( "^I invoke third party link with above id for multiple users simultaneously$" )
    public void invoke_third_party_link_with_above_id_for_multiple_users_simultaneously() throws InitializationError
    {
        LOG.debug( " *** Calling simultaneously {} ", id );
        LOG.info( " *** Calling simultaneously {}", id );
        // Create the service object
        service = new LoadTestUtil();
        // Set the id on the created service and invoke the calls
        service.setData( id );
        service.invokeSimultaneosCalls( 10 );
    }

    @Then( "^I should get response code and response message for all users$" )
    public void should_get_response_code_and_response_message_for_all_users()
    {
        LOG.info( "*** Assert for response code" );
        service.assertHeaderResponseCodeAndMessage();
    }
}
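A quick way to check that a step expression really matches the text Cucumber renders from the Examples table (note that the quotes around the <ids> values end up in the rendered step text) is plain java.util.regex. StepPatternCheck and extractId below are made-up names for this sketch:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class StepPatternCheck {
    // Step pattern matching the Gherkin step from the feature file
    static final Pattern STEP = Pattern.compile(
        "^Simultaneously multiple users are hitting XYZ services with an id=\"(.*?)\"$");

    // Returns the captured id, or null if the step text does not match
    static String extractId(String stepText) {
        Matcher m = STEP.matcher(stepText);
        return m.matches() ? m.group(1) : null;
    }

    public static void main(String[] args) {
        // Step text as rendered from the first Examples row
        System.out.println(extractId(
            "Simultaneously multiple users are hitting XYZ services with an id=\"pABC123rmqst\""));
        // prints pABC123rmqst
    }
}
```

If the pattern and the rendered step text drift apart (for instance, a stray word in the regex), Cucumber reports an undefined step, so a check like this quickly pins down the mismatch.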
3) LoadTestUtil.java
package com.system.test.restassured;

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertNotNull;

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.FutureTask;
import java.util.concurrent.TimeUnit;

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

import com.jayway.restassured.path.json.JsonPath;

public class LoadTestUtil
{
    private static final Logger LOG = LogManager.getLogger( LoadTestUtil.class );

    private String id = null;
    private int numberofTimes;

    // List to hold all JsonPath responses
    private List<JsonPath> jsonResponseList = new ArrayList<JsonPath>();

    // No-arg constructor
    public LoadTestUtil()
    {
    }

    // Set the initial id
    public void setData( String id )
    {
        LOG.info( "LoadTestUtil.setData() {}", id );
        this.id = id;
    }
//This method is used call the REST webservice N times using threads and get response
public void invokeSimultaneosCalls(int numberofTimes)
{
LOG.info( "LoadTestUtil.invokeSimultaneosCalls() - Start" );
this.numberofTimes = numberofTimes;
try
{
long start = System.nanoTime();
int numberOfThreads = Runtime.getRuntime().availableProcessors();
LOG.info("Number of processors available: {}", numberOfThreads);
//Create pool for the Executor Service with numberOfThreads.
ExecutorService executor = Executors.newFixedThreadPool(numberOfThreads);
//Create a list to hold the Future object associated with Callable
List<Future<JsonPath>> futureList = new ArrayList<Future<JsonPath>>();
//Create new RESTServiceCallTask instance
Callable<JsonPath> callable = new RESTServiceCallTask(id);
Future<JsonPath> future = null;
//Iterate N number of times to submit the callable object
for(int count=1; count<=numberofTimes;count++)
{
//Submit Callable tasks to the executor
future = executor.submit(callable);
//Add Future to the list to get return value using Future
futureList.add(future);
}
//Create a flag to monitor the thread status. Check whether all worker threads are completed or not
boolean threadStatus = true;
while (threadStatus)
{
if (future.isDone())
{
threadStatus = false;
//Iterate the response obtained from the futureList
for(Future<JsonPath> futuree : futureList)
{
try
{
//print the return value of Future, notice the output delay in console
// because Future.get() waits for task to get completed
JsonPath response = futuree.get();
jsonResponseList.add(response);
}
catch(InterruptedException ie)
{
ie.printStackTrace();
}
catch(ExecutionException ee)
{
ee.printStackTrace();
}
catch(Exception e)
{
e.printStackTrace();
}
}//End of for to iterate the futuree list
} //End of future.isDone()
} //End of while (threadStatus)
//shut down the executor service now
executor.shutdown();
//Calculate the time taken by the threads for execution
executor.awaitTermination(1, TimeUnit.HOURS); // or longer.
long time = System.nanoTime() - start;
LOG.info("Tasks took " + time/1e6 + " ms to run");
long milliSeconds = time / 1000000;
long totalSeconds = milliSeconds / 1000;
long hours = totalSeconds / 3600;
long minutes = (totalSeconds % 3600) / 60;
long seconds = totalSeconds % 60;
LOG.info("Task took " + hours + " hours, " + minutes + " minutes and " + seconds + " seconds to complete");
} //End of try block
catch (Exception e)
{
e.printStackTrace();
}
LOG.info("LoadTestUtil.invokeSimultaneosCalls() - jsonResponseList {} " , jsonResponseList);
System.out.println("LoadTestUtil.invokeSimultaneosCalls() - jsonResponseList " + jsonResponseList);
LOG.info( "*** LoadTestUtil.invokeSimultaneosCalls() - End" );
}
public void assertHeaderResponseCodeAndMessage(){
//Number of response objects available
int size = jsonResponseList.size();
LOG.info("Number of REST service calls made = {}", size);
for(JsonPath jsonResponse : jsonResponseList)
{
String responseCode = jsonResponse.get( "header.response_code").toString();
String responseMessage = jsonResponse.get( "header.response_message").toString();
assertEquals( "200", responseCode);
assertEquals( "success", responseMessage);
}
}
}
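One thing worth noting about invokeSimultaneosCalls(): the while loop polls isDone() on only the last submitted Future and busy-waits, even though Future.get() already blocks until each task finishes. ExecutorService.invokeAll() blocks until every task completes and returns the futures in task order, making the polling loop unnecessary. A minimal, stdlib-only sketch (InvokeAllSketch and its dummy task are hypothetical stand-ins for RESTServiceCallTask, not the real REST call):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class InvokeAllSketch {

    // Dummy stand-in for RESTServiceCallTask: returns a fake "response".
    static Callable<String> task(final int n) {
        return new Callable<String>() {
            @Override
            public String call() {
                return "response-" + n;
            }
        };
    }

    public static List<String> runSimultaneously(int times) throws Exception {
        ExecutorService executor =
                Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());
        List<Callable<String>> tasks = new ArrayList<Callable<String>>();
        for (int i = 1; i <= times; i++) {
            tasks.add(task(i));
        }
        // invokeAll() blocks until every task has completed,
        // so no isDone() polling loop is needed
        List<Future<String>> futures = executor.invokeAll(tasks);
        executor.shutdown();
        List<String> responses = new ArrayList<String>();
        for (Future<String> f : futures) {
            responses.add(f.get()); // cannot block here: all tasks are done
        }
        return responses;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runSimultaneously(10).size());
    }
}
```

invokeAll() also guarantees the returned futures are in the same order as the submitted tasks, which makes asserting on individual responses straightforward.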
4) RESTServiceCallTask.java
This class implements Callable and overrides the call() method.
Each invocation of call() fires the REST service and returns the response as a JsonPath.
package com.system.test.restassured;
import static com.jayway.restassured.RestAssured.basePath;
import static com.jayway.restassured.RestAssured.baseURI;
import static com.jayway.restassured.RestAssured.given;
import static com.jayway.restassured.RestAssured.port;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import com.system.test.restassured.TestUtil;
import com.jayway.restassured.path.json.JsonPath;
import com.jayway.restassured.response.Response;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
public class RESTServiceCallTask implements Callable<JsonPath>
{
private static Logger LOG = LogManager.getLogger(RESTServiceCallTask.class);
private Response response = null;
private String id;
private String environment;
//private JsonPath jsonPath;
/**
* Constructor initializes the call to third party system
*
* @param id
*/
public RESTServiceCallTask(String id)
{
LOG.info("In RESTServiceCallTask() constructor ");
this.id = id;
//Read the environment variable ENV to get the corresponding environment's REST URL to call
this.environment = System.getProperty("ENV");
baseURI = TestUtil.getbaseURL(environment);
basePath = "/bluelink/tracker/member_summary";
port = 80;
LOG.info(" *** Environment : {}, URI: {} and Resource {} ", environment, baseURI, basePath);
}
//This method is called by the threads to fire the REST service and returns JSONPath for each execution
@Override
public JsonPath call() throws Exception
{
LOG.info(" *** In call() method ");
try
{
response = given().headers("id", this.id).log().all().get();
} catch (Exception e)
{
LOG.error("System Internal Server Error", e);
//rethrow instead of continuing, otherwise response stays null
//and asString() below throws a NullPointerException
throw e;
}
String strResponse = this.response.asString();
LOG.info("Response : {}", strResponse);
JsonPath jsonResponse = new JsonPath(strResponse);
return jsonResponse;
}
}
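A subtle point in the code above: invokeSimultaneosCalls() submits the same RESTServiceCallTask instance ten times, and call() writes into the shared response field, so concurrent executions race on that field. Keeping per-call state in locals (or submitting a fresh task per call) avoids this. A hedged, stdlib-only sketch of the pattern (StatelessTaskSketch and its counter are hypothetical, not part of the original code):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.atomic.AtomicInteger;

public class StatelessTaskSketch implements Callable<String> {

    // Immutable configuration is safe to share across threads.
    private final String id;

    // Hypothetical counter, only here to make the sketch observable.
    static final AtomicInteger CALLS = new AtomicInteger();

    public StatelessTaskSketch(String id) {
        this.id = id;
    }

    @Override
    public String call() {
        CALLS.incrementAndGet();
        // Per-call state lives in locals, never in instance fields;
        // RESTServiceCallTask stores the Response in a field, so ten
        // threads running the same instance would race on it.
        String response = "response-for-" + id;
        return response;
    }
}
```

With per-call state kept local, a single instance can safely be submitted many times, or a fresh instance can be created per submission at negligible cost.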
5) TestUtil.java
This utility class is used to get the REST URL corresponding to the passed environment
package com.system.test.restassured;
import java.util.HashMap;
import java.util.Map;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
public class TestUtil
{
private static Logger LOG = LogManager.getLogger(TestUtil.class);
private static final Map<String, String> ENVIRONMENT_MAP = new HashMap<String, String>();
static
{
ENVIRONMENT_MAP.put("LOCAL", "http://localhost:9080");
ENVIRONMENT_MAP.put("ENV1", "http://localhost:9080");
ENVIRONMENT_MAP.put("ENV2", "http://localhost:9080");
ENVIRONMENT_MAP.put("ENV3", "http://localhost:9080");
}
public static String getbaseURL(String environment)
{
LOG.info("Environment value fetched = {}", environment);
return ENVIRONMENT_MAP.get(environment);
}
}
The problem here is that the calls are not actually executed in parallel.
I also tried the Maven Surefire Plugin with parallel classes and methods; in those cases too, the above scenario does not work.
Does Cucumber support Java multi-threading? If so, what is wrong with the above feature definition?
Note - the same task works as a standalone program, running 10,000 times with 4 threads without any issues. However, I am not able to run the above code 2,000 times through Maven; at 2,000 iterations the system crashed abruptly.
I am using Rational Application Developer 8.5 and WebSphere Application Server 8.0 with Maven 3.x for the above setup.
Thanks for your response.
I'm trying to run a MapReduce program in Hadoop. It takes a text file as input, in which each line is a JSON document. I'm using json-simple to parse this data in my mapper, and the reducer does some other work. I have included the json-simple jar file in the hadoop/lib folder. Here is the code:
package org.myorg;
import java.io.IOException;
import java.util.Iterator;
import java.util.*;
import org.json.simple.JSONArray;
import org.json.simple.JSONObject;
import org.json.simple.parser.JSONParser;
import org.json.simple.parser.ParseException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapreduce.*;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;
public class ALoc
{
public static class AMapper extends Mapper<LongWritable, Text, Text, Text>
{
private Text kword = new Text();
private Text vword = new Text();
JSONParser parser = new JSONParser();
public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException{
try {
String line = value.toString();
Object obj = parser.parse(line);
JSONObject jsonObject = (JSONObject) obj;
String val = (String)jsonObject.get("m1") + "," + (String)jsonObject.get("m3");
kword.set((String)jsonObject.get("m0"));
vword.set(val);
context.write(kword, vword);
}
catch (IOException e) {
e.printStackTrace();
}
catch (ParseException e) {
e.printStackTrace();
}
}
}
public static class CountryReducer
extends Reducer<Text,Text,Text,Text>
{
private Text result = new Text();
public void reduce(Text key, Iterable<Text> values,
Context context
) throws IOException, InterruptedException
{
HashMap<Text, Integer> hm = new HashMap<Text, Integer>();
for (Text val : values)
{
//Hadoop reuses the same Text instance across iterations,
//so copy it before storing it as a map key
Text valCopy = new Text(val);
Integer n = hm.get(valCopy);
hm.put(valCopy, n == null ? 1 : n + 1);
}
StringBuilder agr = new StringBuilder();
for (Map.Entry<Text, Integer> me : hm.entrySet()) {
agr.append("|").append(me.getKey()).append(me.getValue());
}
result.set(agr.toString());
context.write(key, result);
}
}
public static void main(String[] args) throws Exception
{
Configuration conf = new Configuration();
Job job = new Job(conf, "ALoc");
job.setJarByClass(ALoc.class);
job.setMapperClass(AMapper.class);
job.setReducerClass(CountryReducer.class);
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(Text.class);
job.setInputFormatClass(TextInputFormat.class);
FileInputFormat.addInputPath(job, new Path(args[0]));
FileOutputFormat.setOutputPath(job, new Path(args[1]));
System.exit(job.waitForCompletion(true) ? 0 : 1);
}
}
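The reducer above builds a frequency map of the values seen for each key and flattens it into a "|"-delimited string. Stripped of the Hadoop types, the aggregation can be sketched in plain Java (CountAggregator is a hypothetical stand-in using String instead of Text; a LinkedHashMap is used so the output order is deterministic, whereas the reducer's HashMap is unordered):

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class CountAggregator {

    // Count occurrences of each value (first-seen order) and flatten
    // them into the same "|keyCount" format the reducer emits.
    public static String aggregate(List<String> values) {
        Map<String, Integer> counts = new LinkedHashMap<String, Integer>();
        for (String v : values) {
            Integer n = counts.get(v);
            counts.put(v, n == null ? 1 : n + 1);
        }
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<String, Integer> e : counts.entrySet()) {
            sb.append('|').append(e.getKey()).append(e.getValue());
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(aggregate(java.util.Arrays.asList("a,b", "a,b", "c,d")));
    }
}
```

Isolating the aggregation like this makes it easy to unit-test the logic without spinning up a Hadoop job.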
When I try to run the job, it gives the following error.
I am running this on a single-node AWS micro instance.
I have been following this tutorial http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/
hadoop#domU-18-11-19-02-92-8E:/$ bin/hadoop jar ALoc.jar org.myorg.ALoc /user/hadoop/adata /user/hadoop/adata-op5 -D mapred.reduce.tasks=16
13/02/12 08:39:50 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
13/02/12 08:39:50 INFO input.FileInputFormat: Total input paths to process : 1
13/02/12 08:39:50 INFO util.NativeCodeLoader: Loaded the native-hadoop library
13/02/12 08:39:50 WARN snappy.LoadSnappy: Snappy native library not loaded
13/02/12 08:39:51 INFO mapred.JobClient: Running job: job_201302120714_0006
13/02/12 08:39:52 INFO mapred.JobClient: map 0% reduce 0%
13/02/12 08:40:10 INFO mapred.JobClient: Task Id : attempt_201302120714_0006_m_000000_0, Status : FAILED
java.lang.RuntimeException: Error while running command to get file permissions : java.io.IOException: Cannot run program "/bin/ls": java.io.IOException: error=12, Cannot allocate memory
at java.lang.ProcessBuilder.start(ProcessBuilder.java:475)
at org.apache.hadoop.util.Shell.runCommand(Shell.java:200)
at org.apache.hadoop.util.Shell.run(Shell.java:182)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:375)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:461)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:444)
at org.apache.hadoop.fs.FileUtil.execCommand(FileUtil.java:710)
at org.apache.hadoop.fs.RawLocalFileSystem$RawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:443)
at org.apache.hadoop.fs.RawLocalFileSystem$RawLocalFileStatus.getOwner(RawLocalFileSystem.java:426)
at org.apache.hadoop.mapred.TaskLog.obtainLogDirOwner(TaskLog.java:267)
at org.apache.hadoop.mapred.TaskLogsTruncater.truncateLogs(TaskLogsTruncater.java:124)
at org.apache.hadoop.mapred.Child$4.run(Child.java:260)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:416)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
at org.apache.hadoop.mapred.Child.main(Child.java:249)
Caused by: java.io.IOException: java.io.IOException: error=12, Cannot allocate memory
at java.lang.UNIXProcess.<init>(UNIXProcess.java:164)
at java.lang.ProcessImpl.start(ProcessImpl.java:81)
at java.lang.ProcessBuilder.start(ProcessBuilder.java:468)
... 15 more
at org.apache.hadoop.fs.RawLocalFileSystem$RawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:468)
at org.apache.hadoop.fs.RawLocalFileSystem$RawLocalFileStatus.getOwner(RawLocalFileSystem.java:426)
at org.apache.hadoop.mapred.TaskLog.obtainLogDirOwner(TaskLog.java:267)
at org.apache.hadoop.mapred.TaskLogsTruncater.truncateLogs(TaskLogsTruncater.java:124)
at org.apache.hadoop.mapred.Child$4.run(Child.java:260)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:416)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
at org.apache.hadoop.mapred.Child.main(Child.java:249)
I guess you must be trying Hadoop on a micro instance, which has very little memory (~700 MB).
Try increasing the HADOOP_HEAPSIZE parameter (in hadoop/conf/hadoop-env.sh), as the basic reason is a shortage of the memory required to fork processes.