I'm using Hibernate and Spring with the DAO pattern (all Hibernate dependencies in a *DAO.java class). I have nine unit tests (JUnit) which create some business objects, save them, and perform operations on them; the objects are in a hash (so I'm reusing the same objects all the time).
My JUnit setup method calls my DAO.deleteAllObjects() method which calls getSession().createSQLQuery("DELETE FROM <tablename>").executeUpdate() for my business object table (just one).
One of my unit tests (#8/9) freezes. I presumed it was a database deadlock, because the Hibernate log file shows my delete statement last. However, debugging showed that it's simply HibernateTemplate.save(someObject) that's freezing. (Eclipse shows that it's freezing on HibernateTemplate.save(Object), line 694.)
Also interesting to note is that running this test by itself (not in the suite of 9 tests) doesn't cause any problems.
How on earth do I troubleshoot and fix this?
Also, I'm using @Entity annotations, if that matters.
Edit: I removed the reuse of my business objects (each test method now uses its own unique objects) -- it didn't make a difference (still freezes).
Edit: This started trickling into other tests, too (I can't run more than one test class without something freezing).
Edit: Breaking the freezing tests into two classes works. I'm going to do that for now, as shamefully un-DRY as it is to have two or more test classes unit-testing the same one business object class.
Transaction configuration:
<bean id="txManager"
class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
<property name="dataSource" ref="dataSource" />
</bean>
<tx:advice id="txAdvice" transaction-manager="txManager">
<!-- the transactional semantics... -->
<tx:attributes>
<!-- all methods starting with 'get' are read-only -->
<tx:method name="get*" read-only="true" />
<tx:method name="find*" read-only="true" />
<!-- other methods use the default transaction settings (see below) -->
<tx:method name="*" />
</tx:attributes>
</tx:advice>
<!-- my bean which is exhibiting the hanging behavior -->
<aop:config>
<aop:pointcut id="beanNameHere"
expression="execution(* com.blah.blah.IMyDAO.*(..))" />
<aop:advisor advice-ref="txAdvice" pointcut-ref="beanNameHere" />
</aop:config>
When the freeze happens, pause the application in the debugger, find the main thread, and capture its stack trace. Poke through it until you find exactly which DB query is running and blocking in the database.
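If attaching a debugger is awkward, a thread dump gives you the same information: run the JDK's jstack <pid> against the stuck JVM, or, as a rough in-code sketch of my own (the class and method names are placeholders), dump all stacks from a watchdog thread started before the suspect test:

import java.util.Map;

// Illustrative watchdog: if the test hangs, the dump shows where the
// main thread is blocked.
public class StackDumper {
    public static void dumpAllStacksAfter(final long millis) {
        Thread watchdog = new Thread(new Runnable() {
            public void run() {
                try {
                    Thread.sleep(millis);
                } catch (InterruptedException e) {
                    return; // test finished in time; nothing to dump
                }
                for (Map.Entry<Thread, StackTraceElement[]> entry
                        : Thread.getAllStackTraces().entrySet()) {
                    System.err.println("Thread: " + entry.getKey().getName());
                    for (StackTraceElement frame : entry.getValue()) {
                        System.err.println("    at " + frame);
                    }
                }
            }
        });
        watchdog.setDaemon(true);
        watchdog.start();
    }
}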
You mention that running the test on its own works OK but running the full suite causes a problem. If so, I would guess that one of the prior tests still has a transaction open and holds locks on some rows which the blocking test is trying to access.
Do your tests run concurrently? If so, stop doing that, as they could interfere with each other.
Turn on the hibernate.show_sql option so you can see all the generated SQL in the console.
At the point the freeze happens, can you find out which rows are locked in the DB? E.g. in SQL Server you can run sp_lock to see this, and sp_who to see which SQL process IDs are blocking one another.
A few things to check:
Proper transaction management: it appears that in your configuration you have transactions around one of your DAOs. Generally it is advisable to have transactions around your service layer rather than the DAO. But anyway, make sure you have a transaction around the DAO used by the test, or make the test @Transactional if you are using Spring's JUnit runner (see the sketch after this list).
Change the logging threshold to INFO for the datasource (c3p0, perhaps?); it reports deadlocks.
Watch the database logs for deadlocks (if there is such an option).
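A minimal sketch of the @Transactional test approach, assuming Spring's JUnit 4 runner; the context location, DAO type (taken from the pointcut above), save method, and business object are placeholders:

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
import org.springframework.transaction.annotation.Transactional;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(locations = "classpath:applicationContext.xml")
@Transactional  // each test runs in a transaction, rolled back afterwards
public class MyDAOTest {

    @Autowired
    private IMyDAO myDAO;  // hypothetical DAO bean

    @Test
    public void savesWithoutHanging() {
        myDAO.save(new MyObject());  // hypothetical business object
    }
}

Because Spring's test framework rolls the transaction back after each test by default, no locks survive from one test to the next.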
Related
Recently I created a framework with Cucumber-JUnit where I am able to execute scenarios in parallel (for now, keeping one scenario per feature) without any issue.
Now I have a situation where some of the features have to run in parallel and some in sequence.
Is there any way to control, with tags or any other configuration, which features run in parallel and which run in sequence?
Here is an overview of how it is currently set up.
Parallelism and its thread count are controlled via Maven Surefire, as per the official Cucumber documentation.
pom.xml
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <version>3.0.0-M5</version>
    <configuration>
        <parallel>methods</parallel>
        <threadCount>${threadSize}</threadCount>
        <perCoreThreadCount>false</perCoreThreadCount>
    </configuration>
</plugin>
CucumberRunner:
@RunWith(Cucumber.class)
@CucumberOptions(
    features = {"src/test/resources/features"},
    glue = {"com.tests.binding.steps"},
    tags = "@regression"
)
public class RunCucumberFeatures {
}
Command used to run the tests:
mvn clean test -Dcucumber.filter.tags="${toExecute} and not (@smoke)" -DthreadCount=${ThreadSize} -Dcucumber.execution.dry-run="false"
For the toExecute parameter we pass tags like @customerClaim or @employeeClaim.
Now, in my case, features tagged @employeeClaim should execute in parallel and features tagged @customerClaim should execute in sequence.
Is it possible with the current design, or is there another way?
Is there any way to control, with tags or any other configuration, which features run in parallel and which run in sequence?
Not with cucumber-junit and JUnit 4. However, with JUnit 5 you can use the cucumber-junit-platform-engine and JUnit 5's support for exclusive resources.
https://github.com/cucumber/cucumber-jvm/tree/main/junit-platform-engine
To synchronize a scenario on a specific resource, the scenario must be tagged and this tag mapped to a lock for the specific resource. A resource is identified by an arbitrary string and can be either locked with a read-write-lock, or a read-lock.
For example, the following tags:
Feature: Exclusive resources

  @reads-and-writes-system-properties
  Scenario: first example
    Given this reads and writes system properties
    When it is executed
    Then it will not be executed concurrently with the second example

  @reads-system-properties
  Scenario: second example
    Given this reads system properties
    When it is executed
    Then it will not be executed concurrently with the first example
with this configuration:
cucumber.execution.exclusive-resources.reads-and-writes-system-properties.read-write=java.lang.System.properties
cucumber.execution.exclusive-resources.reads-system-properties.read=java.lang.System.properties
When executed, the first scenario, tagged with @reads-and-writes-system-properties, will lock the java.lang.System.properties resource with a read-write lock and will not be executed concurrently with the second scenario, which locks the same resource with a read lock.
Note: the @ from the tag is not included in the property name. Note: for canonical resource names, see junit5/Resources.java.
So by making an exclusive resource for @customerClaim you can prevent these scenarios from running in parallel. However, to the best of my knowledge JUnit 5 makes no guarantees about the order of execution, so the scenarios should still be independent of each other.
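Applied to the question, a minimal sketch: map the @customerClaim tag to a read-write lock on an arbitrary resource name (customer-claims here is my own placeholder) in junit-platform.properties, so all @customerClaim scenarios serialize on that lock while @employeeClaim scenarios keep running in parallel:

cucumber.execution.parallel.enabled=true
cucumber.execution.exclusive-resources.customerClaim.read-write=customer-claims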
It would be nice if we could select which tests run in parallel purely by tags. I will be fighting with microservices very soon, and I would like most of my tests to run in parallel to save time. So far I have been testing a monolith, and my framework is already done, with multiple .feature files and tags configured to run on different environments. So if there is an easy way to configure which tests run in parallel and the rest in sequence using tags, it would be nice to know.
Brief
I'm looking to write unit tests in JUnit to check individual drools rules. A unit test should be simple to write and fast to run. If the unit tests are slow then developers will avoid running them and the build will become excessively slow. With that in mind I'm trying to figure out the best (fastest to execute and easiest to write) method of writing these unit tests.
First attempt
The first option I tried was to create the KnowledgeBase as a static class attribute initialized from one .drl file. Each test then creates a new session in the @Before method. This was based on the code examples in the Drools JBoss Rules developer guide.
I've seen a second option that tidies this up a bit by creating some annotations to abstract the initialization code but it's basically the same approach.
I noticed that this basic unit test on one .drl file was taking a couple of seconds to run. This isn't too bad with one unit test, but once it's scaled up I can see it being a problem. I did some reading and found that the KnowledgeBase is expensive to create, whereas the session is cheap. This is why the examples make the KnowledgeBase static, so it's created only once; however, with multiple unit test classes it will potentially be created many times.
Alternative
The alternative I tried is to create a singleton KnowledgeBase that loads all .drl files. This is done once globally for the test suite and then autowired into each test class. I used a Spring @Configuration class and defined the KnowledgeBase as a @Bean. I now find that creating the KnowledgeBase takes 2 seconds (but runs only once), session creation takes about 0.2 seconds, and the test itself takes no time at all.
It seems as if the Spring approach may scale better, but I'm not sure whether I will run into other problems when testing a single rule against a KnowledgeBase initialized from all files. I'm using an AgendaFilter to target the specific rule I want to test. Also, I've searched online quite a bit and haven't found anybody else doing it this way. A sketch of the approach appears below.
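For what it's worth, a rough sketch of that kind of suite-wide singleton, written against the newer Drools 6 KIE API (class names are placeholders; the legacy KnowledgeBase API is analogous):

import org.kie.api.KieBase;
import org.kie.api.KieServices;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class RulesTestConfig {

    // Built once for the whole test suite; sessions derived
    // from this bean are cheap to create per test.
    @Bean
    public KieBase kieBase() {
        KieServices kieServices = KieServices.Factory.get();
        // Loads all rules declared on the classpath.
        return kieServices.getKieClasspathContainer().getKieBase();
    }
}

Each test class can then autowire the KieBase and open (and dispose) its own KieSession per test.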
Summary
What will be the most scalable way of writing these tests? Thanks
This is a very nice collection of experiences. Let me contribute some ideas.
If you have a scenario where a large number of rules is essential for testing the outcome because the rules vie with each other, you should create the KieBase containing all of them and serialize it, once. For individual tests, either derive a session from it for each test, inserting facts and firing all rules; or, alternatively, derive the session up front and run the tests, clearing the session (!), inserting facts, and firing all rules for each one.
For testing a single rule or a small (<10) set of rules, compiling the DRL from scratch for each set of tests may be tolerable, especially if you adopt the strategy of reusing the session (!) by clearing Working Memory between individual tests, as sketched below.
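A minimal sketch of the clear-and-reuse idea, assuming the KIE API: retract every fact between tests instead of disposing the session and building a new one.

import org.kie.api.runtime.KieSession;
import org.kie.api.runtime.rule.FactHandle;

public final class SessionUtil {
    // Call between tests to empty Working Memory while keeping the session.
    public static void clearWorkingMemory(KieSession session) {
        for (FactHandle handle : session.getFactHandles()) {
            session.delete(handle);  // retract the fact
        }
    }
}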
Some consideration should also be given to rule design. Excessive iterative algorithms should not be implemented in DRL by hook or by crook; using a DRL function or some (static) Java method may be far superior. And testing those is much easier.
Also, following established rule design patterns helps a lot. Google "Design Patterns in Production Systems".
MOTIVATION: At the moment I have multiple test suites written in Java which I build and run with Ant. I develop and polish the tests on my laptop, and they run every night on a remote machine in the cloud. For various reasons it sometimes happens that a specific test passes locally but permanently fails in the cloud. Then I need to debug.

How I do it now: I add the @Ignore annotation to all tests except the one I need to debug, commit my code, then connect remotely from my laptop to the machine in the cloud and run the suite there. This way only one test from the suite executes, and via RDP I can see what is failing. This is something I can live with, but it bothers me to add these @Ignore annotations and then remove them again from all tests except one whenever I debug. It would be great if there were a way in Ant to tell the target to run not the whole test class but only one particular test.
Example of my Ant target for one particular suite:
<target name="ADMPSmokeTest">
    <mkdir dir="${junit.output.dir}"/>
    <junit fork="yes" printsummary="withOutAndErr" showoutput="true">
        <formatter type="xml"/>
        <test name="me.enreach.automation.sanoma.SmokeTest" todir="${junit.output.dir}"/>
        <classpath refid="QARegression.classpath"/>
    </junit>
</target>
As you can see, the <test name="me.enreach.automation.sanoma.SmokeTest" todir="${junit.output.dir}"/> element says to run all tests in the SmokeTest class.
Can I somehow tell it to run only, let's say, SmokeTest.createCampaign?
Please advise only solutions which can really save time; don't suggest things like "create a separate test suite for debugging", etc.
Yes, there is: you can add a methods attribute to the test element. It is supposed to accept a comma-separated list of method names from the class specified in the name attribute, though I have not been able to get it to work with more than one method.
Example:
<test name="me.enreach.automation.sanoma.SmokeTest" methods="createCampaign"/>
Support for methods was added in Ant 1.8.2.
Can we use JUnit to test Java batch jobs? Since JUnit runs locally and Java batch jobs run on the server, I am not sure how to start a job from JUnit test cases (I tried using the JobOperator class).
If JUnit is not the right tool, how can we unit test Java batch code?
I am using IBM's implementation of JSR 352, running on WAS Liberty.
JUnit is first of all an automation and test-monitor framework, meaning you can use it to drive all kinds of @Test methods.
From a conceptual point of view, the definition of unit tests is pretty vague; if you follow Wikipedia, "everything you do to test something" can be seen as a unit test. From that perspective, of course you can "unit test" batch code that runs on a batch framework.
But: most people think that "true", "helpful" unit tests do not require the presence of anything external. Such tests can be run "locally" at build time. No need for servers, file systems, networking, ...
Keeping that in mind, I think there are two things you can work with:
You can use JUnit to drive "integration" or "functional" tests. Meaning: you can define test suites that do the "full thing": define batches, have them processed, and check for the expected results at the end. As said, these would be integration tests that make sure the end-to-end flow works as expected.
You look into "normal" JUnit unit testing. Meaning: you focus on those aspects of your code that are unrelated to the batch framework (in other words, look for POJOs) and unit-test those. Locally; maybe with mocking frameworks; without relying on a real batch service running your code. A sketch of this follows below.
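As a rough illustration of the second point (the processor class and its logic are hypothetical), a JSR 352 ItemProcessor is a plain class that can be tested without any server:

import static org.junit.Assert.assertEquals;

import javax.batch.api.chunk.ItemProcessor;

import org.junit.Test;

public class UpperCaseProcessorTest {

    // Hypothetical batch artifact: a plain ItemProcessor with no
    // server dependencies.
    static class UpperCaseProcessor implements ItemProcessor {
        @Override
        public Object processItem(Object item) {
            return ((String) item).toUpperCase();
        }
    }

    @Test
    public void processItemUpperCasesInput() throws Exception {
        ItemProcessor processor = new UpperCaseProcessor();
        // No JobOperator, no server: just a direct method call.
        assertEquals("HELLO", processor.processItem("hello"));
    }
}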
Building on the answer from @GhostCat, it seems you're asking how to drive the full job (his bullet 1) in your tests. (Of course, unit testing the reader/processor/writer components individually can also be useful.)
Your basic options are:
Use Arquillian (see here for a link on getting started with Arquillian and Liberty) to run your tests in the server but to let Arquillian handle the tasks of deploying the app to the server and collecting the results.
Write your own servlet harness driving your job through the JobOperator interface. See the answer by @aguibert to this question for a starting point. Note that you'll probably want to write your own simple routine polling the JobExecution for one of the "finished" states (COMPLETED, FAILED, or STOPPED) unless your jobs have some other means of notifying the submitter; a polling sketch follows below.
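For illustration, a minimal polling routine along those lines (the poll interval and timeout are arbitrary choices of mine):

import java.util.EnumSet;

import javax.batch.operations.JobOperator;
import javax.batch.runtime.BatchStatus;
import javax.batch.runtime.JobExecution;

public final class BatchPoller {

    private static final EnumSet<BatchStatus> FINISHED =
            EnumSet.of(BatchStatus.COMPLETED, BatchStatus.FAILED, BatchStatus.STOPPED);

    // Blocks until the execution reaches a "finished" state or the timeout elapses.
    public static BatchStatus awaitFinish(JobOperator jobOperator,
                                          long executionId,
                                          long timeoutMillis) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            JobExecution execution = jobOperator.getJobExecution(executionId);
            if (FINISHED.contains(execution.getBatchStatus())) {
                return execution.getBatchStatus();
            }
            Thread.sleep(500);  // poll twice a second
        }
        throw new IllegalStateException("Job " + executionId + " did not finish in time");
    }
}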
Another technique to keep in mind is the startup bean. You can run your jobs simply by starting the server with a startup bean like:
@Startup
@Singleton
public class StartupBean {

    @PostConstruct
    public void runJobs() {
        JobOperator jobOp = BatchRuntime.getJobOperator();
        // Drive job(s) on startup.
        jobOp.start(...);
    }
}
This can be useful if you have a way to check the job results separately from the JobOperator interface (for which you need to be in the server). Your tests can simply poll and check for the job results. You don't even have to open an HTTP port, and the server startup overhead is only a few seconds.
I use FlexUnit 4.1 with Adobe's TestRunnerBase to run a suite of integration tests to verify the integrity of a 3-tier BlazeDS/Java EE/MySQL server.
To bypass the security checks enforced by Apache Shiro while running those tests, I have configured two separate test runs: one that logs in as root, and one that performs the actual integration tests.
Because of the way BlazeDS handles duplicate sessions (this is an issue for another question; or rather, it has already been asked), the login mechanism sometimes fails, in which case I would like the TestRunner to suspend all further activities.
I have looked all over for some way to configure FlexUnitCore to stop on a test failure, but to no avail. Also, there seem to be events only for TEST_START and TEST_COMPLETE, but not for TEST_FAIL.
Is there some other way to find out if a test failed, to stop the runner?
A first for me: I stumbled upon the solution to my problem while writing my question. There is an IRunListener interface that can be implemented to react to all sorts of information sent by the TestRunner. Then we simply use FlexUnitCore#addListener() to register it, the same way we do with the UIListener, TraceListener, CIListener, etc. that Adobe provides.