Stopping a FlexUnit test run if a test fails? - actionscript-3

I use FlexUnit 4.1 with Adobe's TestRunnerBase to run a suite of integration tests to verify the integrity of a 3-tier BlazeDS/Java EE/MySQL server.
To bypass the security checks enforced by Apache Shiro while running those tests, I have configured two separate test runs: One that logs in as root, one that performs the actual integration tests.
Because of the way that BlazeDS handles duplicate sessions (a matter for another question - or rather, it already has been), the login mechanism sometimes fails - in which case I would like the TestRunner to suspend all further activities.
I have looked all over for some way to configure FlexUnitCore to stop on a test failure, but to no avail. Also, there seem to be events only for TEST_START and TEST_COMPLETE, but not for TEST_FAIL.
Is there some other way to find out if a test failed, to stop the runner?

A first for me - I stumbled upon the solution to my problem while I was writing my question: there is an IRunListener interface that can be implemented to react to all sorts of notifications sent by the test runner, including failures. We then simply register it with FlexUnitCore#addListener(), the same way we do with the UIListener, TraceListener, CIListener, etc. that Adobe provides.

Related

How to take a screenshot on test failure with JUnit 5

Can someone please tell me how to take a screenshot when a test method fails (JUnit 5)? I have a base test class with @BeforeEach and @AfterEach methods. All other classes with @Test methods extend the base class.
Well, it is possible to write Java code that takes screenshots; see here for an example.
But I am very much wondering about the real problem you are trying to solve this way. Keep in mind that the main intention of JUnit is to provide a framework that runs your tests in various environments.
Of course it is nice that you can run JUnit within your IDE, and maybe you would find a screenshot helpful there. But: "normally" unit tests also run during nightly builds and the like - in environments where "taking a screenshot" might not make any sense!
Beyond that: screenshots are an extremely ineffective way of collecting information! When a test fails, you should be looking at textual log files, HTML/XML reports, and so on. You want failing tests to generate information that can be easily digested.
So, the real answer here is: step back from what you are doing right now, and re-consider non-screenshot solutions to the problem you actually want to solve!
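That said, if a screenshot really is what you need, JUnit 5 lets you hook into test failures with a TestWatcher extension. Below is a minimal sketch that captures the whole screen with java.awt.Robot when a test fails. The class name and output directory are my own choices; in a Selenium setup you would ask the WebDriver for a screenshot instead, and on a headless build server Robot will simply fail - which is exactly the caveat above.

import java.awt.Rectangle;
import java.awt.Robot;
import java.awt.Toolkit;
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;
import org.junit.jupiter.api.extension.ExtensionContext;
import org.junit.jupiter.api.extension.TestWatcher;

public class ScreenshotOnFailureWatcher implements TestWatcher {

    @Override
    public void testFailed(ExtensionContext context, Throwable cause) {
        try {
            // Capture the whole screen; a Selenium test would use the
            // driver's own screenshot facility instead.
            Robot robot = new Robot();
            Rectangle screen = new Rectangle(Toolkit.getDefaultToolkit().getScreenSize());
            BufferedImage capture = robot.createScreenCapture(screen);
            File target = new File("screenshots",
                    context.getDisplayName().replaceAll("\\W+", "_") + ".png");
            target.getParentFile().mkdirs();
            ImageIO.write(capture, "png", target);
        } catch (Exception e) {
            // Never let the diagnostics break the test report itself.
            System.err.println("Could not capture screenshot: " + e);
        }
    }
}

Register it once on the base class with @ExtendWith(ScreenshotOnFailureWatcher.class) and every failing test that extends the base class produces a PNG - in environments where that makes sense at all.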
You don't need to take screenshots for JUnit test failures/passes; the recommended way is to generate various reports (tests passed/failed, code coverage, code complexity, etc.) automatically using the tools/plugins below.
You can use the Cobertura Maven plugin or the SonarQube code quality tool, which will generate these reports for you automatically.
You can look here for the Cobertura Maven plugin and here for SonarQube for more details.
You need to integrate these tools with your CI (Continuous Integration) environment and ensure that if the code does NOT meet a certain quality bar (in terms of test coverage, code complexity, etc.), then the project build (war/ear) fails automatically.

JSR 352: Unit testing Java Batch code?

Can we use JUnit to test Java Batch jobs? Since JUnit runs locally and Java Batch jobs run on the server, I am not sure how to start a job (I tried using the JobOperator class) from JUnit test cases.
If JUnit is not the right tool, how can we unit test Java Batch code?
I am using IBM's implementation of JSR 352 running on WAS Liberty.
JUnit is first of all an automation and test-monitoring framework. Meaning: you can use it to drive all kinds of @Test methods.
From a conceptual point of view, the definition of unit tests is pretty vague; if you follow Wikipedia, "everything you do to test something" can be seen as a unit test. From that perspective, of course you can "unit test" batch code that runs on a batch framework.
But: most people think that "true", "helpful" unit tests do not require the presence of any external thing. Such tests can be run "locally" at build time. No need for servers, file systems, networking, ...
Keeping that in mind, I think there are two things you can work with:
You can use JUnit to drive "integration" or "functional" tests. Meaning: you can define test suites that do the "full thing" - define batches, have them processed, and check for the expected results at the end. As said, these would be integration tests that make sure the end-to-end flow works as expected.
You look into "normal" JUnit unit testing. Meaning: you focus on those aspects of your code that are unrelated to the batch framework (in other words: look for POJOs) and unit-test those - locally, maybe with mocking frameworks, without relying on a real batch runtime running your code. A tiny example follows below.
Building on the answer from @GhostCat, it seems you're asking how to drive the full job (his bullet 1) in your tests. (Of course unit testing the reader/processor/writer components individually can also be useful.)
Your basic options are:
Use Arquillian (see here for a link on getting started with Arquillian and Liberty) to run your tests in the server, letting Arquillian handle the tasks of deploying the app to the server and collecting the results.
Write your own servlet harness driving your job through the JobOperator interface. See the answer by @aguibert to this question for a starting point. Note that you'll probably want to write your own simple routine polling the JobExecution for one of the "finished" states (COMPLETED, FAILED, or STOPPED) unless your jobs have some other means of notifying the submitter; a sketch of such a routine follows at the end of this answer.
Another technique to keep in mind is the startup bean. You can run your jobs simply by starting the server with a startup bean like:
@Startup
@Singleton
public class StartupBean {

    @PostConstruct
    public void runJobs() {
        JobOperator jobOp = BatchRuntime.getJobOperator();
        // Drive job(s) on startup.
        jobOp.start(...);
    }
}
This can be useful if you have a way to check the job results separately from the JobOperator interface (for which you need to be in the server). Your tests can simply poll and check the job results. You don't even have to open an HTTP port, and the server startup overhead is only a few seconds.
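As for the polling routine mentioned in option 2, a minimal sketch could look like this (JobPoller, the timeout handling, and the poll interval are my own choices, not part of the spec):

import java.util.EnumSet;
import java.util.Properties;
import javax.batch.operations.JobOperator;
import javax.batch.runtime.BatchRuntime;
import javax.batch.runtime.BatchStatus;
import javax.batch.runtime.JobExecution;

public class JobPoller {

    private static final EnumSet<BatchStatus> FINISHED =
            EnumSet.of(BatchStatus.COMPLETED, BatchStatus.FAILED, BatchStatus.STOPPED);

    /** Starts a job and polls until it reaches a finished state or times out. */
    public static BatchStatus runToCompletion(String jobXmlName, Properties params,
                                              long timeoutMillis) throws InterruptedException {
        JobOperator jobOp = BatchRuntime.getJobOperator();
        long executionId = jobOp.start(jobXmlName, params);
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            JobExecution execution = jobOp.getJobExecution(executionId);
            if (FINISHED.contains(execution.getBatchStatus())) {
                return execution.getBatchStatus();
            }
            Thread.sleep(500); // poll interval, tune to your job runtimes
        }
        throw new IllegalStateException("Job did not finish within timeout: " + executionId);
    }
}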

How to setup exception handlers for an existing JUnit4 project?

I have a JUnit project with hundreds of lines of code and hundreds of test cases already automated, but the project has no exception-handling code. While running the test cases from Eclipse, if an exception such as a NullPointerException occurs, execution halts in Eclipse with the exception's error message. I would like to handle these unhandled exceptions in my project so that they are logged. Is there an option to handle these exceptions globally that I can set up for this project? Since it has hundreds of methods, it would be difficult to add exception handling to each and every existing method.
My project runs in Java 7 + JUnit 4
Any pointers will be of great help. Thanks in advance!
Update 1:
I found a solution by creating a class that implements TestRule; a sketch follows below. But in order to get this working, I will have to add @Rule declarations in all of my existing test scripts, which run to hundreds of files.
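Roughly like this (a sketch - JUnit's TestWatcher base class already implements TestRule, so extending it needs the least code; the class name is mine):

import org.junit.rules.TestWatcher;
import org.junit.runner.Description;

public class LoggingWatcher extends TestWatcher {

    @Override
    protected void failed(Throwable e, Description description) {
        // Log the unhandled exception together with the test that threw it.
        System.err.println("Test failed: " + description.getDisplayName());
        e.printStackTrace();
    }
}

And then in every test class: @Rule public final LoggingWatcher watcher = new LoggingWatcher();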
Is there a workaround so that I can avoid editing all those hundreds of test scripts?
Regards,
Janaki

Handling "Internal server error" in Groovy-Console

I have a Groovy script which takes about 5 hours to complete (it restarts - deletes old and starts new - many workflows), and unfortunately there are some workflows which can't be processed and throw an "internal server error", which ends the Groovy call.
All I can do now is take a look at the logs, restart the Groovy script, and exclude the problematic workflow id.
It would be a great performance boost if I could catch this "internal server error" in the HAC and continue with the next workflow instead of aborting the script.
I already tried wrapping the calls in try/catch, but this doesn't work.
Is there any chance to "ignore" the "internal server error" entries of my list and continue processing?
Thanks for any help!
Run the Groovy script natively, not through the HAC. The Groovy/Beanshell consoles are handy for quick prototypes, but running a 5-hour process through a browser interface seems kludgy at best. You have at least a couple of options:
Dynamic Beans
Did you know that Spring beans can be implemented in a number of languages using dynamic language beans?
Define interfaces for your processes and wire them up to Groovy implementations in the Spring configuration. Since the scripts are interpreted at runtime, you can swap out code without recompiling the entire platform.
Now you have the full power of Java, Spring, Groovy, and hybris. Properly sequester each process so that exceptions don't bubble up and crash the entire run (see the sketch below).
This option would be the cleanest way to go, since you'd be integrating the code directly into the project's codebase. And you can keep all your existing [ Groovy | JRuby | Beanshell | ... ] code.
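To illustrate the sequestration point: outside the browser console, a plain per-item try/catch does work. A sketch in plain Java (restartWorkflow and the id list stand in for whatever your script does today):

import java.util.ArrayList;
import java.util.List;

public class WorkflowRestarter {

    /** Restart each workflow in isolation so one failure cannot abort the whole run. */
    public List<String> restartAll(List<String> workflowIds) {
        List<String> failed = new ArrayList<>();
        for (String id : workflowIds) {
            try {
                restartWorkflow(id); // placeholder for the actual restart logic
            } catch (Exception e) {
                // Log and carry on with the next workflow instead of dying.
                System.err.println("Workflow " + id + " failed: " + e.getMessage());
                failed.add(id);
            }
        }
        return failed; // the ids that need manual attention afterwards
    }

    private void restartWorkflow(String id) {
        // Hypothetical: delete the old workflow and start a new one.
        throw new UnsupportedOperationException("wire in your restart logic here");
    }
}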
Roll your own
Another thing you might try is examining hybris' Groovy API. I was able to leverage hybris' Beanshell interpreter classes to create my own test harness: a simple standalone Eclipse project that allows me to write and run Beanshell within Eclipse, with output to the console. I use it on a daily basis for quick scripting tasks like batch updates, FlexibleSearch queries, etc. I'd imagine you could do the same thing with Groovy. Search the hybris API for the HAC code that interprets the Groovy requests from the browser.
The sky's the limit, but first get out of the browser console for heavy scripting tasks.
My short answer would be: don't use scripts for time-consuming processes.
Although you mentioned that it is not possible to define standard scripts because the business is working in parallel, I cannot recommend maintaining a live system in this manner.
Integrate that logic into a custom CronJob and add all configurable/dynamic things as properties of that job.
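A skeleton of such a job, written against the standard hybris job API, might look like the following (a sketch only - the class name is invented, and the Spring wiring and item-type properties are omitted):

import de.hybris.platform.cronjob.enums.CronJobResult;
import de.hybris.platform.cronjob.enums.CronJobStatus;
import de.hybris.platform.cronjob.model.CronJobModel;
import de.hybris.platform.servicelayer.cronjob.AbstractJobPerformable;
import de.hybris.platform.servicelayer.cronjob.PerformResult;

public class RestartWorkflowsJob extends AbstractJobPerformable<CronJobModel> {

    @Override
    public PerformResult perform(CronJobModel cronJob) {
        // Read the configurable bits (e.g. workflow ids to skip) from
        // properties of the CronJob item instead of hard-coding them.
        try {
            // ... restart the workflows here, wrapping each one in its own try/catch ...
            return new PerformResult(CronJobResult.SUCCESS, CronJobStatus.FINISHED);
        } catch (Exception e) {
            return new PerformResult(CronJobResult.ERROR, CronJobStatus.ABORTED);
        }
    }
}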
The benefit of this approach would be
you have a proper logging mechanism (System.out in the HAC Groovy console is painful)
you can trace your execution (time consumed, started, stopped, etc.)
it can be triggered automatically (via a CronJob trigger) or by another instructed user (e.g. Operations)
you get a more stable workflow as a whole - no need to keep track of those magic scripts (how do you version them? in the resource folder?)
The downside of this approach is indeed that you need a redeploy.
From my experience, dynamically changed code (Dynamic Beans as an example) works on projects with comparably low complexity, but tends to get messy pretty quickly.

Hudson - save artifacts only when less than 90% passes

I am new at this, and I was wondering how I can set things up so that the artifacts are saved only if less than 90% of the tests have passed.
Any idea how I can do this?
thanks
This is not currently possible with Hudson. What is the motivation for avoiding archiving artifacts on every build?
How about a rather simple workaround: you create a post-build step (or an additional build step) that calls your tests from the command line. Be sure to capture all errors so Hudson doesn't count them as a failure. Then you evaluate your condition and set the error level accordingly. In addition, you need to save the reports (probably outside Hudson) before you set the error level, so they are available even - or only - when the build fails.
My assumption here is that it is OK not to run the tests when building the app fails. However, you can separate building and testing into two jobs. See here.
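To make the "evaluate your condition" part concrete, the build step could run a small Java program along these lines (the report location, artifact paths, and the 90% threshold are all assumptions based on a Maven/Surefire layout - adapt to your setup):

import java.io.File;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Element;

public class ArchiveIfBelowThreshold {

    public static void main(String[] args) throws Exception {
        int tests = 0;
        int notPassed = 0;
        // Assumption: Surefire-style XML reports in the usual Maven location.
        File reportDir = new File("target/surefire-reports");
        for (File report : reportDir.listFiles((dir, name) -> name.endsWith(".xml"))) {
            Element suite = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder().parse(report).getDocumentElement();
            tests += intAttr(suite, "tests");
            notPassed += intAttr(suite, "failures") + intAttr(suite, "errors");
        }
        double passRate = tests == 0 ? 1.0 : (tests - notPassed) / (double) tests;
        System.out.println("Pass rate: " + passRate);
        if (passRate < 0.90) {
            // Keep the artifact somewhere outside Hudson's normal archiving.
            Files.createDirectories(Paths.get("archive"));
            Files.copy(Paths.get("target/app.war"), Paths.get("archive/app.war"),
                    StandardCopyOption.REPLACE_EXISTING);
        }
        // Exit 0 either way so Hudson does not mark the step itself as failed;
        // set a non-zero exit code here instead if you want the build to go red.
    }

    private static int intAttr(Element element, String name) {
        String value = element.getAttribute(name);
        return value.isEmpty() ? 0 : Integer.parseInt(value);
    }
}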