How can we store TestNG results to a database (e.g. MySQL)?

I'm currently working with a very large application under test (several custom programs running in a distributed environment) and have built up a very large set of automated test cases for regression and feature testing. These tests are large and numerous, so full test runs are dispatched across many machines, the results gathered, and then imported into a custom web app.
Technologies: Java/Selenium/Ant/TestNG/Jenkins
Reports: TestNG, ReportNG, XSLT
How can we store the results in a database (e.g. MySQL)?

Create a custom reporter listener by extending org.testng.TestListenerAdapter, overriding the onTestSuccess, onTestFailure and onTestSkipped methods, and logging the test results to MySQL there. After that, you have to register your custom reporter as a listener.
The TestNG website explains how to define a custom listener:
http://testng.org/doc/documentation-main.html#testng-listeners
And here you can find how to override the TestListenerAdapter:
http://testng.org/doc/documentation-main.html#logging-listeners
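A minimal sketch of such a listener, assuming the MySQL JDBC driver is on the classpath; the connection settings and the test_results table schema are illustrative, not prescribed:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

import org.testng.ITestResult;
import org.testng.TestListenerAdapter;

public class DatabaseReporter extends TestListenerAdapter {

    // Illustrative settings; assumed schema:
    // CREATE TABLE test_results (test_name VARCHAR(255), suite_name VARCHAR(255),
    //                            status VARCHAR(16), duration_ms BIGINT, run_at TIMESTAMP)
    private static final String JDBC_URL = "jdbc:mysql://localhost:3306/testresults";
    private static final String USER = "tester";
    private static final String PASSWORD = "secret";

    @Override
    public void onTestSuccess(ITestResult result) { store(result, "PASSED"); }

    @Override
    public void onTestFailure(ITestResult result) { store(result, "FAILED"); }

    @Override
    public void onTestSkipped(ITestResult result) { store(result, "SKIPPED"); }

    private void store(ITestResult result, String status) {
        String sql = "INSERT INTO test_results"
                   + " (test_name, suite_name, status, duration_ms, run_at)"
                   + " VALUES (?, ?, ?, ?, NOW())";
        try (Connection conn = DriverManager.getConnection(JDBC_URL, USER, PASSWORD);
             PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, result.getMethod().getMethodName());
            ps.setString(2, result.getTestContext().getSuite().getName());
            ps.setString(3, status);
            ps.setLong(4, result.getEndMillis() - result.getStartMillis());
            ps.executeUpdate();
        } catch (SQLException e) {
            // Never fail the test run because reporting failed; log and continue.
            System.err.println("Could not store result: " + e.getMessage());
        }
    }
}

Register it either with the @Listeners annotation on your test classes or via a <listeners> entry in testng.xml pointing at the fully qualified class name.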

Related

ReportPortal results grouped by test environment

On our project we collect all test results into ReportPortal.
For unit tests we use the Java agent for JUnit, and API tests are executed with the Gauge framework. Since our project has several test environments before the application is shipped to production, I would like to display the results aggregated per environment (local, development, staging, e2e, production) and group them accordingly. Is such a feature available?
ReportPortal v5.1 and later supports attributes that can be passed over the API:
rp.attributes = key:value; value;
Hence you can pass environment:foo.
Filters can then be applied on these attributes.
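For example, in the Java agent's reportportal.properties (the endpoint, project and launch values are illustrative):

rp.endpoint = http://reportportal.example.com:8080
rp.project = my_project
rp.launch = regression
rp.attributes = environment:staging;team:qa

A saved filter on the environment attribute then gives you one aggregated view per environment.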

Where to create KnowledgeBase in a Drools unit test?

Brief
I'm looking to write unit tests in JUnit to check individual drools rules. A unit test should be simple to write and fast to run. If the unit tests are slow then developers will avoid running them and the build will become excessively slow. With that in mind I'm trying to figure out the best (fastest to execute and easiest to write) method of writing these unit tests.
First attempt
The first option I tried was to create the KnowledgeBase as a static class attribute, initialized from one .drl file. Each test then creates a new session in the @Before method. This was based on the code examples in the Drools JBoss Rules developer guide.
I've seen a second option that tidies this up a bit by introducing some annotations to abstract the initialization code, but it's basically the same approach.
I noticed that this basic unit test on one .drl file was taking a couple of seconds to run. This isn't too bad with one unit test, but once it's scaled up I can see it becoming a problem. I did some reading and found that the KnowledgeBase is expensive to create, whereas the session is cheap. This is why the examples make the KnowledgeBase static, so it's created only once; however, with multiple unit test classes it will potentially be created many times.
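For reference, that first pattern looks roughly like this with the newer KIE API (the question uses the legacy KnowledgeBase API, but the shape is the same; the .drl files are assumed to be declared on the classpath via kmodule.xml, and names are illustrative):

import org.junit.After;
import org.junit.Before;
import org.junit.BeforeClass;
import org.junit.Test;
import org.kie.api.KieBase;
import org.kie.api.KieServices;
import org.kie.api.runtime.KieSession;

public class DiscountRuleTest {

    // Expensive: built once per test class.
    private static KieBase kieBase;

    // Cheap: a fresh session per test.
    private KieSession session;

    @BeforeClass
    public static void buildKnowledgeBase() {
        kieBase = KieServices.Factory.get()
                .getKieClasspathContainer()
                .getKieBase();
    }

    @Before
    public void createSession() {
        session = kieBase.newKieSession();
    }

    @After
    public void disposeSession() {
        session.dispose();
    }

    @Test
    public void rulesFireWithoutErrors() {
        // Insert the facts the rule under test needs, then fire.
        session.fireAllRules();
    }
}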
Alternative
The alternative I tried is to create a singleton KnowledgeBase that loads all .drl files. This is done once globally for the test suite and then autowired into each test class. I used a Spring @Configuration class and defined the KnowledgeBase as an @Bean. I now find that the KnowledgeBase takes 2 seconds to create (but runs only once), the session creation takes about 0.2 seconds, and the test itself takes no time at all.
It seems as if the Spring approach may scale better, but I'm not sure whether I will run into other problems when testing a single rule against a KnowledgeBase initialized from all files. I'm using an AgendaFilter to target the specific rule I want to test. Also, I've searched online quite a bit and haven't found anybody else doing it this way.
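A sketch of that Spring setup (class names are illustrative; assumes spring-test is available):

import org.kie.api.KieBase;
import org.kie.api.KieServices;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class RuleTestConfig {

    // Built once for the whole suite; every test class autowires this bean.
    @Bean
    public KieBase kieBase() {
        // Loads all .drl files declared on the classpath via kmodule.xml.
        return KieServices.Factory.get()
                .getKieClasspathContainer()
                .getKieBase();
    }
}

Each test class then runs with @RunWith(SpringJUnit4ClassRunner.class) and @ContextConfiguration(classes = RuleTestConfig.class), autowires the KieBase, and opens its own session per test.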
Summary
What will be the most scalable way of writing these tests? Thanks
This is a very nice collection of experiences. Let me contribute some ideas.
If you have a scenario where a large number of rules is essential to test the outcome because rules vie with each other, you should create the KieBase containing all rules and serialize it, once. For individual tests, either derive a session from it for each test, inserting facts and firing all rules, or, alternatively, derive the session up front and run the tests, clearing the session (!), inserting facts and firing all rules.
For testing a single rule or a small (<10) set of rules, compiling the DRL from scratch for each set of tests may be tolerable, especially when you adopt the strategy to reuse the session (!) by clearing Working Memory between individual tests.
Some consideration should also be given to rule design. Excessive iterative algorithms should not be implemented in DRL by hook or by crook; using a DRL function or some (static) Java method may be far superior. And testing those is much easier.
Also, following established rule design patterns helps a lot. Google "Design Patterns in Production Systems".
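As a concrete note on the AgendaFilter approach the question mentions, targeting a single rule from a KieBase that loads everything can be as small as this (a minimal sketch; the rule name is whatever rule is under test):

import org.kie.api.runtime.KieSession;

public class SingleRuleRunner {

    // Fires only the named rule; all other rules in the KieBase stay
    // quiet even though they are loaded.
    public static int fireOnly(KieSession session, String ruleName) {
        return session.fireAllRules(
                match -> ruleName.equals(match.getRule().getName()));
    }
}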

Which kind of test should I use for a library?

I'm developing a PHP library that I'd like to use in different projects. The library uses a REST-like service in the background. I don't want to write tests for the service API, but for the library.
Would I need to write unit tests? Or functional tests? Since it is a library I won't write acceptance tests - I hope this is correct.
I don't know if this is important for the issue, but the library needs to log in to the service API and uses an API key for the subsequent operations. Also, when the library is tested, the order of the preceding operations matters. It is a designer tool, and I have operations like 'move rectangle', 'rotate rectangle' and so on, and I would like to test several operations in a sequence that should produce a certain result.
I think that this is a kind of functional test. Or do I need both? Can unit tests work with a service in the background?

JSR 352: Unit testing Java batch code?

Can we use JUnit to test Java batch jobs? Since JUnit runs locally and Java batch jobs run on the server, I am not sure how to start a job (I tried using the JobOperator class) from JUnit test cases.
If JUnit is not the right tool, how can we unit test Java batch code?
I am using IBM's implementation of JSR 352 running on WAS Liberty.
JUnit is first of all an automation and test monitor framework. Meaning: you can use it to drive all kinds of @Test methods.
From a conceptual point of view, the definition of unit tests is pretty vague; if you follow Wikipedia, "everything you do to test something" can be seen as a unit test. Following that perspective, of course, you can "unit test" batch code that runs on a batch framework.
But: most people think that "true", "helpful" unit tests do not require the presence of any external thing. Such tests can be run "locally" at build time. No need for servers, file systems, networking, ...
Keeping that in mind, I think there are two things you can work with:
You can use JUnit to drive "integration" or "functional" tests. Meaning: you can define test suites that do the "full thing" - define batches, have them processed, and check for the expected results in the end. As said, those would be integration tests that make sure the end-to-end flow works as expected.
You look into "normal" JUnit unit testing. Meaning: you focus on those aspects of your code that are unrelated to the batch framework (in other words: look for POJOs) and unit-test those. Locally; maybe with mocking frameworks; without relying on a real batch service running your code.
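A sketch of that second option - testing a chunk artifact as a plain object, with no server involved (the processor below is a made-up stand-in for one of your batch artifacts):

import static org.junit.Assert.assertEquals;

import javax.batch.api.chunk.ItemProcessor;

import org.junit.Test;

public class UpperCaseProcessorTest {

    // A trivial stand-in: a chunk processor that is a plain POJO and
    // needs no running batch container to be tested.
    static class UpperCaseProcessor implements ItemProcessor {
        @Override
        public Object processItem(Object item) {
            return ((String) item).toUpperCase();
        }
    }

    @Test
    public void processItemUppercasesInput() throws Exception {
        assertEquals("HELLO", new UpperCaseProcessor().processItem("hello"));
    }
}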
Building on the answer from @GhostCat, it seems you're asking how to drive the full job (his bullet 1.) in your tests. (Of course, unit testing the reader/processor/writer components individually can also be useful.)
Your basic options are:
Use Arquillian (see here for a link on getting started with Arquillian and Liberty) to run your tests in the server, letting Arquillian handle the tasks of deploying the app to the server and collecting the results.
Write your own servlet harness driving your job through the JobOperator interface. See the answer by @aguibert to this question for a starting point. Note you'll probably want to write your own simple routine polling the JobExecution for one of the "finished" states (COMPLETED, FAILED, or STOPPED) unless your jobs have some other means of notifying the submitter; one such routine is sketched below.
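Such a polling routine might look like this (a sketch; the timeout and poll interval are arbitrary):

import java.util.EnumSet;

import javax.batch.operations.JobOperator;
import javax.batch.runtime.BatchStatus;
import javax.batch.runtime.JobExecution;

public class JobWaiter {

    private static final EnumSet<BatchStatus> TERMINAL =
            EnumSet.of(BatchStatus.COMPLETED, BatchStatus.FAILED, BatchStatus.STOPPED);

    // Polls the execution until it reaches a terminal state or the timeout expires.
    public static JobExecution waitFor(JobOperator jobOp, long executionId,
            long timeoutMillis) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        JobExecution execution = jobOp.getJobExecution(executionId);
        while (!TERMINAL.contains(execution.getBatchStatus())) {
            if (System.currentTimeMillis() > deadline) {
                throw new IllegalStateException(
                        "Job still " + execution.getBatchStatus() + " after timeout");
            }
            Thread.sleep(500);
            execution = jobOp.getJobExecution(executionId);
        }
        return execution;
    }
}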
Another technique to keep in mind is the startup bean. You can run your jobs simply by starting the server with a startup bean like:
@Startup
@Singleton
public class StartupBean {

    JobOperator jobOp = BatchRuntime.getJobOperator();

    // Drive job(s) on startup (the job XML name and empty properties
    // are illustrative placeholders).
    @PostConstruct
    public void startJobs() {
        jobOp.start("myJob", new Properties());
    }
}
This can be useful if you have a way to check the job results separate from using the JobOperator interface (for which you need to be in the server). Your tests can simply poll and check for the job results. You don't even have to open an HTTP port, and the server startup overhead is only a few seconds.

How to run JUnit tests in parallel while using the page object model?

Please share some valuable information. I have not seen any document or standard reference on the internet that explains this in detail. Even if you have TestNG-related information (with the Page Object Model), I will appreciate it.
As long as you are not holding the driver object or page objects in static variables, you can run your test scripts in parallel, irrespective of which unit-test framework you are using; a common pattern for this is sketched below.
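A common way to achieve that with Selenium is a per-thread driver factory (a sketch; the browser choice is illustrative):

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class DriverFactory {

    // One WebDriver per test thread, so parallel tests don't share
    // (and trample) a single browser session.
    private static final ThreadLocal<WebDriver> DRIVER =
            ThreadLocal.withInitial(ChromeDriver::new);

    public static WebDriver getDriver() {
        return DRIVER.get();
    }

    public static void quitDriver() {
        DRIVER.get().quit();
        DRIVER.remove();
    }
}

Page objects then receive the driver through their constructor rather than from a static field.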
For Junit3:
You have to use build tools to run it in parallel, and you can only run it multi-threaded per class, not per method.
For Junit4:
How to make JUnit test cases execute in parallel?
In TestNG, you have the parallel and thread-count options in testng.xml itself.
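For example (suite and class names are illustrative):

<!-- testng.xml: run test methods in parallel on 4 threads -->
<suite name="regression-suite" parallel="methods" thread-count="4">
    <test name="smoke">
        <classes>
            <class name="com.example.LoginTest"/>
        </classes>
    </test>
</suite>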