TestLink integration with Robot Framework

I am looking for a way to integrate TestLink with Robot Framework, but have had no success so far.
The problem I have is how to write tests in Robot Framework and how to link them with TestLink. That is: what format should tests in TestLink have so that Robot Framework can understand and execute them, and how is that achieved?
I have looked at https://code.google.com/p/robotframework-tmlibrary/ but had no luck figuring out how to use it properly.
Any help is appreciated.

Use one of these:
https://github.com/dmizverev/robot-framework-library/blob/master/listener/TestLinkListener.py
or
https://github.com/hayzer/robotframework2testlink
Both implementations use the Robot Framework listener interface together with the TestLink API. The end_test() method does the job of updating the result in TestLink.
If the test suite, test case, or test plan does not exist before the Robot test execution runs, creating them can be automated as well: implement the start_suite() method and use the TestLink API to create the suite, then implement the start_test() method to create the test case and associate it with the test plan.
When running robot from the CLI, append --listener followed by the listener class name, as in the sketch below.
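For illustration, here is a minimal sketch of such a listener (not taken from either project above). It assumes the TestLink-API-Python-client package and that each Robot test name matches a TestLink external id; the URL, dev key, and ids are placeholders for your own setup:

# testlink_listener.py -- minimal Robot Framework listener sketch that
# reports each test result to TestLink over its XML-RPC API.
# Assumes the TestLink-API-Python-client package; adjust names to your setup.
from testlink import TestlinkAPIClient

class TestLinkListener:
    ROBOT_LISTENER_API_VERSION = 2

    def __init__(self, server_url, dev_key, test_plan_id, build_name):
        self.tls = TestlinkAPIClient(server_url, dev_key)
        self.test_plan_id = int(test_plan_id)
        self.build_name = build_name

    def end_test(self, name, attrs):
        # TestLink status codes: 'p' = passed, 'f' = failed.
        status = 'p' if attrs['status'] == 'PASS' else 'f'
        # Assumes the Robot test name is the TestLink external id
        # (e.g. "PROJ-123"); adapt the mapping to your naming convention.
        self.tls.reportTCResult(testcaseexternalid=name,
                                testplanid=self.test_plan_id,
                                buildname=self.build_name,
                                status=status,
                                notes=attrs.get('message', ''))

Listener arguments are normally separated with colons; since the server URL itself contains colons, use the semicolon separator instead, e.g. robot --listener "testlink_listener.TestLinkListener;http://testlink/lib/api/xmlrpc/v1/xmlrpc.php;DEVKEY;1234;build-1" tests/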
Hope this helps.

There is another approach to this.
It is useful if you have tons of manual tests in TestLink and automate them independently with Robot Framework, or if you have two legacy sets of tests (manual and automated) that you are trying to bring together.
You can use Jenkins to integrate Robot Framework and TestLink. You will need the same test case names in TestLink and in your Robot suites, and you will have to define an additional custom keyword in TestLink for automated tests that duplicates the test case name. There are both a TestLink Jenkins plugin and a Robot Framework Jenkins plugin available for this, as well as some useful forks that support TestLink test platforms or custom test plan fields.
Basically, you obtain the test case names from a test plan in the Jenkins job and feed them to Robot in any way you like (e.g. -t "$TCNAME"); a sketch of this glue step follows below. After that, execution results can be passed back to TestLink.
This approach lets you keep independent test structures in Robot and TestLink and integrates better with other development processes (CI/CD pipelines, test planning for huge test suites), but you will have the additional job of keeping two sets of tests in sync.
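As a rough sketch of that glue step (again assuming the TestLink-API-Python-client package; server, project, and plan names are hypothetical placeholders):

# run_plan.py -- fetch the test case names of a TestLink test plan and run
# exactly those tests with Robot Framework. All names are placeholders.
import subprocess
from testlink import TestlinkAPIClient

tls = TestlinkAPIClient('http://testlink/lib/api/xmlrpc/v1/xmlrpc.php', 'DEVKEY')
plan = tls.getTestPlanByName('MyProject', 'MyPlan')[0]
cases = tls.getTestCasesForTestPlan(plan['id'])

# The call returns a dict keyed by internal test case id; the value layout
# (list vs. platform-keyed dict) varies between TestLink versions.
names = []
for entry in cases.values():
    first = entry[0] if isinstance(entry, list) else next(iter(entry.values()))
    names.append(first['tcase_name'])

# Select tests by name: one --test option per test case.
args = [arg for name in names for arg in ('--test', name)]
rc = subprocess.run(['robot', *args, 'tests/']).returncode
# robot's exit code is the number of failed tests (0 means all passed).

Execution results can then be pushed back to TestLink with a listener such as the sketch in the previous answer.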

Related

Add Test Steps in JUNIT to get better reporting

I am running UI tests with Jest, and I am using a custom reporter (https://github.com/jest-community/jest-junit) to generate a JUnit .xml file at the end of the run so that my Azure pipeline can read it and generate nice analytics. My test framework is organized around test suites that each represent a big piece of functionality; each aspect of that functionality is checked by a test contained in the suite (and that check might require multiple steps). I would like to show each of those steps: I think the report would be more readable for anyone looking at it, and it would make it very easy to get context on why a test failed.
I tried putting an assertion at each step, but JUnit only records the assertion that failed.
I also tried to change the way my tests are organized and make each step a test of its own. But in Jest, and it seems in a lot of other runners as well (at least in Node), it is apparently not possible to easily guarantee that tests run in a specific sequence. It is also really verbose to write suites like this.
Does anybody have an idea of how I could achieve this granularity?
Thank you.

Which kind of test should I use for a library?

I'm developing a PHP library that I'd like to use in different projects. The library uses a REST-like service in the background. I don't want to write tests for the service API, but for the library.
Would I need to write unit tests? Or functional tests? Since it is a library, I won't write acceptance tests - I hope this is correct.
I don't know if this is important for the issue, but the library needs to log into the service API and uses an API key for the subsequent operations. Also, when the library gets tested, the preceding operations matter: it is a designer tool with operations like 'move rectangle', 'rotate rectangle', and so on, and I would like to test several operations in a sequence that should produce a certain result.
I think that this is a kind of functional test. Or do I need both? Can unit tests work with a service in the background?

JSR:352 Unit testing Java Batch Code?

Can we use JUnit to test Java batch jobs? Since JUnit runs locally and Java batch jobs run on the server, I am not sure how to start a job from JUnit test cases (I tried using the JobOperator class).
If JUnit is not the right tool, how can we unit test Java batch code?
I am using IBM's implementation of JSR 352 running on WAS Liberty.
JUnit is first of all an automation and test-monitoring framework, meaning you can use it to drive all kinds of @Test methods.
From a conceptual point of view, the definition of unit tests is pretty vague; if you follow Wikipedia, "everything you do to test something" can be seen as a unit test. Following that perspective, of course you can "unit test" batch code that runs on a batch framework.
But most people think that "true", "helpful" unit tests do not require the presence of any external thing. Such tests can be run "locally" at build time, with no need for servers, file systems, networking, and so on.
Keeping that in mind, I think there are two things you can work with:
You can use JUnit to drive "integration" or "functional" tests, meaning you define test suites that do the "full thing": define batches, have them processed, and check for the expected results at the end. As said, those would be integration tests that make sure the end-to-end flow works as expected.
You look into "normal" JUnit unit testing, meaning you focus on those aspects of your code that are unrelated to the batch framework (in other words: look out for POJOs) and unit-test those, locally, maybe with mocking frameworks, without relying on a real batch service running your code.
Building on the answer from @GhostCat, it seems you're asking how to drive the full job (his first bullet) in your tests. (Of course, unit testing the reader/processor/writer components individually can also be useful.)
Your basic options are:
Use Arquillian (see here for a link on getting started with Arquillian and Liberty) to run your tests in the server, while letting Arquillian handle the tasks of deploying the app to the server and collecting the results.
Write your own servlet harness driving your job through the JobOperator interface. See the answer by @aguibert to this question for a starting point. Note that you'll probably want to write your own simple routine polling the JobExecution for one of the "finished" states (COMPLETED, FAILED, or STOPPED), unless your jobs have some other means of making the submitter aware.
Another technique to keep in mind is the startup bean. You can run your jobs simply by starting the server with a startup bean like:
// Annotations from javax.ejb; JobOperator and BatchRuntime from javax.batch.
@Startup
@Singleton
public class StartupBean {
    @PostConstruct
    public void runJobs() {
        JobOperator jobOp = BatchRuntime.getJobOperator();
        // Drive job(s) on startup.
        jobOp.start(...);
    }
}
This can be useful if you have a way to check the job results separate from the JobOperator interface (for which you need to be in the server): your tests can simply poll and check for the job results. You don't even have to open an HTTP port, and the server startup overhead is only a few seconds.

Stopping FlexUnit test run, if a test fails?

I use FlexUnit 4.1 with Adobe's TestRunnerBase to run a suite of integration tests to verify the integrity of a 3-tier BlazeDS/Java EE/MySQL server.
To bypass the security checks enforced by Apache Shiro while running those tests, I have configured two separate test runs: One that logs in as root, one that performs the actual integration tests.
Because of the way that BlazeDS handles duplicate sessions (this is an issue for another question, or rather, it has been already), sometimes the login mechanism fails - in which case I would like the TestRunner to suspend all further activities.
I have looked all over for some way to configure FlexUnitCore to stop on a test failure, but to no avail. Also, there seem to be events only for TEST_START and TEST_COMPLETE, but not for TEST_FAIL.
Is there some other way to find out if a test failed, to stop the runner?
A first for me: I stumbled upon the solution to my problem while I was writing my question. There is an IRunListener interface that can be implemented to react to all sorts of information sent by the test runner. We then simply register it with FlexUnitCore#addListener(), the same way as the UIListener, TraceListener, CIListener, etc. that Adobe provides.

"smart" JUnit test ordering

I want to add some hints to my build to run certain tests "first", without re-running them later. For example:
Simply add class names to a "priority" string in an input parameter to my test task, or
Have JUnit's test runner be smart enough to remember/persist failing test class names, so that the next time around the build runs those first.
What is the most idiomatic way of doing this in Ant?
The following tools might help you achieve the desired JUnit test execution order, but they depend on Eclipse:
Continuous Testing for Eclipse (CT-Eclipse)
JUnit Max
infinitest
I have not used any of these tools, and I have no Ant-only solution.
You might consider these related posts:
Run JUnit automatically when building Eclipse project
Starting unit tests automatically after saving a file