JUnit, Integration Tests and Test Execution Order

JUnit is a framework for unit testing, but it is also used for integration testing. The behaviour required for unit tests and integration tests differs: unit tests are supposed to be isolated from other tests, whereas integration tests require interaction between consecutive tests. In my case the scenario is as follows; each step requires the object produced by the previous one. To achieve this behaviour there must be an execution order of tests. There is an @Order annotation, but it only orders the methods of a single class, and in my case it is not appropriate to keep these tests in a single class.
Create a customer
Create a hierarchical billing account for this customer
Create a contract for the customer
Sell some hierarchical item for customer
Sell some non-hierarchical item for customer
One proposal would be to repeat these operations inside each test: for example, to create a hierarchical billing account, first create a customer within the same test. This scheme is not optimal and has its own complexities.
It would increase the required run time.
It would increase the complexity of each single test.
So is there a way to order my tests in JUnit that I am missing? If not, why not? Should I use another tool?
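For reference, newer JUnit versions do offer some ordering support. A minimal sketch assuming JUnit 5.8+, with illustrative class and method names (the @Nested approach keeps each ordered step in its own class, within one outer class):

    import org.junit.jupiter.api.ClassOrderer;
    import org.junit.jupiter.api.Nested;
    import org.junit.jupiter.api.Order;
    import org.junit.jupiter.api.Test;
    import org.junit.jupiter.api.TestClassOrder;

    // Orders the @Nested classes below by their @Order values (JUnit 5.8+).
    @TestClassOrder(ClassOrderer.OrderAnnotation.class)
    class CustomerProvisioningIT {

        @Nested
        @Order(1)
        class CreateCustomer {
            @Test
            void createsCustomer() { /* ... */ }
        }

        @Nested
        @Order(2)
        class CreateBillingAccount {
            @Test
            void createsHierarchicalBillingAccount() { /* ... */ }
        }
    }

Top-level (non-nested) test classes can also be ordered globally via the junit.jupiter.testclass.order.default configuration parameter, though state shared across ordered tests still has to live somewhere static or external.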


Detect data anomalies in a data pipeline and trigger a scheduled pipeline

In Foundry, we have a data pipeline where we want to insert a code node (repo or workbook) that detects anomalies and then sends an email or some other alert about the problem.
I'm having trouble finding this in the documentation; can someone point me to it?
Ideally we would love to have the code trigger the Scheduler to do a pipeline run that creates a report (maybe even in Quiver, to do some timeline analysis). Is this possible? Are there examples in the documentation?
Check out the documentation in the Data Health section of the platform documentation. There are a number of patterns possible, including defining data expectations in your code.
Whether defined as expectations or as dataset health checks, failures can be set up to create Issues within the platform. Issues can have default assignees (individuals or groups) and will also send notifications, both in the platform and over email (depending on per-user configuration).
Health check failures will also automatically populate the data health tab in the Project Catalog view, which can serve as a dashboard to view the overall health of the project. You can also surface these in the Data Lineage view with a coloring based on Data Health to understand issues across the breadth of the pipeline.
For a comprehensive approach to pipeline health, review the Pipelines and best practices section in the Code Repositories documentation.

UI (Regression) Test Naming Convention Recommendation

I am looking for recommendations on naming conventions for UI tests. I use the Given-When-Then pattern for naming unit tests. This works well for unit tests because each unit test tests a single thing.
However, when it comes to UI (regression) tests, this does not work very well. I write a UI test to exercise a particular user flow that may have many steps, with one test for the happy path and one or more for unhappy paths. I am looking for recommendations on naming regression tests.
For example, I have a regression test that goes through the checkout process for an e-commerce website: the user adds items to the cart, then goes through checkout by entering billing and shipping information and credit card details, reviewing the order, and confirming it.
What would you recommend for cases such as: the user is able to place an order, or the user enters an invalid credit card number and the card is rejected?
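For what it's worth, one possible convention (a sketch, not an established standard) is flow_scenario_expectedOutcome, which keeps the spirit of Given-When-Then at the flow level; the names below are illustrative:

    import org.junit.jupiter.api.Test;

    // Illustrative names for the checkout flow described above.
    class CheckoutRegressionTests {

        @Test
        void checkout_withValidPaymentDetails_placesOrderSuccessfully() { /* ... */ }

        @Test
        void checkout_withInvalidCardNumber_showsCardRejectedMessage() { /* ... */ }

        @Test
        void checkout_withEmptyCart_blocksAccessToPaymentStep() { /* ... */ }
    }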

How do I see the history of my Jenkins build test results?

I've got a collection of Jenkins jobs which are all essentially test packs - running lots of JUnit tests.
I keep the results for 7 days and, with the aid of the global build stats plugin and build metrics plugin, I can get a percentage of the number of builds (test packs) that had at least one failure in the last week.
What I'm now interested in doing is getting the percentage of all test failures over one week, to get a better idea as to how badly the set of builds failed: was it just one test that caused each build to fail, or all of them? Is this possible with an existing plugin?
I know the data is there because the home page of any of my jobs has a graph on the right where the green area represents test passes and red fails, for all of the previous builds. This gives me some idea, but I'd like a figure to report with.
You may want to take a look at the Unit Test History Generator or Test Results Analyzer plugins.

How to ensure the quality of JUnit tests?

Are there proven ways of verifying the quality of JUnit tests or integration tests?
Should your business analyst review unit tests to certify them? Or are there other ways?
In a traditional code-first environment, a peer or lead would review the test plan, but what about automated tests?
I looked at this Stack Overflow thread but couldn't extract anything meaningful.
Thoughts?
Mutation testing and code coverage can verify the quality of your tests.
So first check that your code coverage is high enough. After this, verify with mutation testing that your tests are good. A mutation testing tool makes small changes in the production code and reruns the tests; after such a modification, a good test should fail. For a mutation testing tool in Java, look at PIT Mutation Testing and this blog post: Introduction to mutation testing with PIT and TestNG
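To make the idea concrete, here is a hand-written illustration of what a mutation tool does (the mutant below is hypothetical, not actual PIT output):

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertTrue;

    class AgeCheck {
        // Production code under test:
        static boolean isAdult(int age) {
            return age >= 18;
        }
        // A mutation tool would rewrite the line above in place,
        // e.g. changing ">=" to ">", and rerun the test suite.
    }

    class AgeCheckTest {
        // This test "kills" the ">= to >" mutant: it passes on the
        // original code but fails once the boundary is mutated.
        @Test
        void eighteenYearOldIsAdult() {
            assertTrue(AgeCheck.isAdult(18));
        }
    }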
But this is still not enough: tests should be well written and readable. So you also need code review and quality-rule verification for tests. I recommend a nice book about writing good tests, Practical Unit Testing. Chapter 10, Maintainable tests, from this book is available for free.
Here's a nice linked article:
http://www.ibm.com/developerworks/java/library/j-cq01316/index.html?ca=drs
And:
Good Tests ⇒ High Coverage
High Coverage ⇏ Good Tests
Coverage tools are useful for identifying which areas of your project need more attention, but good coverage in an area does not mean that area needs no further attention.
A code coverage tool is a good start, but knowing that a given line was executed does not mean it was tested. The infamous test cases without assertions, or with a blanket expected=Exception.class, are an example.
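A small illustration of that pitfall (hypothetical code): both tests below give the production line 100% coverage, but only the second one would notice a bug.

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    class PriceCalculator {
        static int totalCents(int unitCents, int quantity) {
            return unitCents * quantity;
        }
    }

    class PriceCalculatorTest {
        // Executes the line, so coverage reports it as "covered",
        // but asserts nothing: any bug in totalCents goes unnoticed.
        @Test
        void coversWithoutTesting() {
            PriceCalculator.totalCents(100, 3);
        }

        // Actually tests the behaviour.
        @Test
        void threeItemsAtOneDollarCostThreeDollars() {
            assertEquals(300, PriceCalculator.totalCents(100, 3));
        }
    }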
I can imagine a few criteria on this level:
if a line is tested, any change to it (inverting a condition, removing it, ...) should make at least one test fail
a given piece of logic should be fully reconstructible based only on its tests
the test does not mirror the production code
the test should not depend on the current date, locale, timezone, or the order of other tests
One might try to automate the first one; the others are more or less subjective.
As for an analyst doing test review: probably only FitNesse fixtures are readable enough to satisfy non-developers.
Code review is the best way to ensure test quality. I would not have business analysts review the tests, for the simple fact that they might not have the training necessary to understand the tests. Also, unit tests do not all live at the functional level, where analysts' requirements are. An analyst might say 'when the user clicks save, the profile is saved' whereas you might have to write n number of tests across multiple layers to get that functionality.
You might consider code coverage tools to ensure 100% of the code lines are being tested. Emma is a good tool for Java (http://emma.sourceforge.net/).

Design By Contract and Test-Driven Development [closed]

I'm working on improving our group's development process, and I'm considering how best to implement Design By Contract with Test-Driven Development. It seems the two techniques have a lot of overlap, and I was wondering if anyone had some insight on the following (related) questions:
Isn't it against the DRY principle to have TDD and DbC unless you're using some kind of code generator to generate the unit tests based on contracts? Otherwise, you have to maintain the contract in two places (the test and the contract itself), or am I missing something?
To what extent does TDD make DbC redundant? If I write tests well enough, aren't they equivalent to writing a contract? Do I only get added benefit if I enforce the contract at run time as well as through the tests?
Is it significantly easier/more flexible to only use TDD rather than TDD with DbC?
The main point of these questions is this more general question: If we're already doing TDD properly, will we get a significant benefit for the overhead if we also use DbC?
A couple of details, though I think the question is largely language-agnostic:
Our team is very small, <10 programmers.
We mostly use Perl.
Note the differences.
Design driven by contract. Contract Driven Design.
Development driven by tests. Test Driven Development.
They are related in that one precedes the other. They describe software at different levels of abstraction.
Do you discard the design when you go to implementation? Do you consider that a design document is a violation of DRY? Do you maintain the contract and the code separately?
Software is one implementation of the contract. Tests are another. User's manual is a third. Operations guide is a fourth. Database backup/restore procedures are one part of an implementation of the contract.
I can't see any overhead from Design by Contract.
If you're already doing design, then you change the format from too many words to just the right words to outline the contractual relationship.
If you're not doing design, then writing a contract will eliminate problems, reducing cost and complexity.
I can't see any loss of flexibility.
start with a contract,
then
a. write tests and
b. write code.
See how the two development activities are essentially intertwined and both come from the contract.
I think there is overlap between DbC and TDD; however, I don't think there is duplicated work: introducing DbC will probably result in a reduction of test cases.
Let me explain.
In TDD, tests aren't really tests. They are behavioral specifications. However, they are also design tools: by writing the test first, you use the external API of your object under test – that you haven't actually written yet – in the same way that a user would. That way, you design the API in a way that makes sense to a user, and not in the way that makes it easiest for you to implement. Something like queue.full? instead of queue.num_entries == queue.size.
This second part cannot be replaced by Contracts.
The first part can be partially replaced by contracts, at least for unit tests. TDD tests serve as specifications of behavior, both to other developers (unit tests) and to domain experts (acceptance tests). Contracts also specify behavior, not only to other developers and domain experts, but also to the compiler and the runtime library.
But contracts have fixed granularity: you have method pre- and postconditions, object invariants, module contracts and so on. Maybe loop variants and invariants. Unit tests however, test units of behavior. Those might be smaller than a method or consist of multiple methods. That's not something you can do with contracts. And for the "big picture" you still need integration tests, functional tests and acceptance tests.
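To illustrate that granularity difference, here is a plain-Java sketch (no contract library assumed; names are illustrative, echoing the queue example above): the contract attaches to the method and the object, whereas a unit test can target any slice of behaviour.

    class BoundedQueue<T> {
        private final Object[] items;
        private int count;

        BoundedQueue(int capacity) {
            // Constructor precondition.
            if (capacity <= 0) throw new IllegalArgumentException("capacity > 0 required");
            items = new Object[capacity];
        }

        void put(T item) {
            // Method precondition: the user-facing query, as in queue.full?
            if (full()) throw new IllegalStateException("queue is full");
            items[count++] = item;
            // Object invariant, re-checked after every mutation
            // (enable with the -ea JVM flag).
            assert invariant();
        }

        boolean full() { return count == items.length; }

        private boolean invariant() { return count >= 0 && count <= items.length; }
    }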
And there is another important part of TDD that DbC doesn't cover: the middle D. In TDD, tests drive your development process: you never write a single line of implementation code unless you have a failing test, you never write a single line of test code unless your tests all pass, you only write the minimal amount of implementation code to make the tests pass, you only write the minimal amount of test code to produce a failing test.
In conclusion: use tests to design the "flow", the "feel" of the API. Use contracts to design the, well, contract of the API. Use tests to provide the "rhythm" for the development process.
Something like this:
Write an acceptance test for a feature
Write a unit test for a unit that implements a part of that feature
Using the method signature you designed in step 2, write the method prototype
Add the postcondition
Add the precondition
Implement the method body
If the acceptance test passes, goto 1, otherwise goto 2
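A sketch of steps 3 to 6 for a single method, using plain Java checks in place of a contract library (the class and names are hypothetical):

    class Inventory {
        private int available = 10;

        // Step 3: prototype taken from the signature the unit test used.
        int reserve(int requested) {
            // Step 5: precondition.
            if (requested <= 0 || requested > available)
                throw new IllegalArgumentException("require 0 < requested <= available");

            int before = available;

            // Step 6: method body.
            available -= requested;

            // Step 4: postcondition (enable with the -ea JVM flag).
            assert available == before - requested;
            return available;
        }
    }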
If you want to know what Bertrand Meyer, the inventor of Design by Contract, thinks about combining TDD and DbC, there is a nice paper by his group called Contract Driven Development = Test Driven Development - Writing Test Cases. The basic premise is that contracts provide an abstract representation of all possible cases, whereas test cases only test specific cases. Therefore, a suitable test harness can be automatically generated from the contracts.
I would add:
The API is the contract for the programmers, the UI definition is the contract with the clients, and the protocol is the contract for client-server interactions. Get those first; then you can take advantage of parallel development tracks and not get lost in the weeds. Yes, periodically review to make sure requirements are met, but never start a new track without the contract. And 'contract' is a strong word: once deployed, it must never change. You should include version management and introspection from the get-go; changes to the contract are implemented only by extension sets, version numbers change with them, and then you can do things like graceful degradation when dealing with mixed or old installations.
I learned this lesson the hard way, with a large project that wandered off into never-never land, then applied it the right way later when seriously under the gun on a company-survival, short-fuse timeline. We defined the protocol, then defined and wrote a set of protocol emulations for each side of the transactions (basically canned message generators and a received-message checker, one evening's worth of two-brained coding), then parted to separately write the server and client ends of the app. We recombined the night of the show, and it just worked. Requirements, design, contract, test, code, integrate. In that order. Repeat until baked.
I am a little leery of design by TLA. As with Patterns, buzzword-compliant recipes are a good guide, but it is my experience that there is no such thing as a one-size-fits-all design or project management procedure. If you are doing things precisely By The Book (tm) then, unless it is a DOD contract with DOD procedural requirements, you will probably get into trouble somewhere along the way. Read the Book(s), yes, but be sure to understand them, and then also take into account the people side of your team. Rules that are enforced only by the Book will not get enforced uniformly - even when tool-enforced there can be drop-outs (e.g. svn comments left empty or cryptically brief). Procedures only tend to get followed when the tool chain not only enforces them but makes following them easier than any possible short-cut. Believe me, when the going gets tough, the short-cuts get found, and you may not know about the ones that got used at 3am until it is too late.
You can also use executable acceptance tests that are written in the domain language of the contract. It might not be the actual "contract", but it is halfway between unit tests and the contract.
I would recommend using Ruby Cucumber:
http://github.com/aslakhellesoy/cucumber
But since you are a Perl shop, maybe you can use my own small attempt at p5-cucumber:
http://github.com/kesor/p5-cucumber
Microsoft has done work on the automatic generation of unit tests based on code contracts and parameterized unit tests. E.g. the contract says the count must be increased by one when an item is added to a collection, and the parameterized unit test says how to add "n" items to a collection. Pex will then try to create a unit test that proves the contract is broken. See this video for an overview.
If this works, your unit test only has to be written for one example of each thing you are trying to test, and Pex will then be able to work out which data items will break the test.
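Pex and Code Contracts are .NET tools, but the shape of a parameterized unit test is easy to sketch in JUnit 5 terms (a rough analogue of the collection example above, not Pex itself; with Pex the input values would be generated rather than listed):

    import java.util.ArrayList;
    import java.util.List;
    import org.junit.jupiter.params.ParameterizedTest;
    import org.junit.jupiter.params.provider.ValueSource;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    class CollectionCountContractTest {
        // The "contract": adding an item increases the count by exactly one,
        // whatever the collection's prior size n.
        @ParameterizedTest
        @ValueSource(ints = {0, 1, 7, 100})
        void add_increasesCountByOne(int n) {
            List<Integer> list = new ArrayList<>();
            for (int i = 0; i < n; i++) list.add(i);

            int before = list.size();
            list.add(42);

            assertEquals(before + 1, list.size());
        }
    }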
I had some ruminations about that topic some time ago.
You may want to take a look at
http://gleichmann.wordpress.com/2007/12/09/test-driven-development-and-design-by-contract-friend-or-foe/
When you are using TDD to implement a new method, you need some input: you need to know the assertions to check in your tests. Design-by-contract gives you those assertions: they are the post-conditions and invariants of the method.
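A small sketch of that mapping, with a hypothetical Account class: each clause of the contract becomes an assertion in a test.

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;
    import static org.junit.jupiter.api.Assertions.assertThrows;

    // Contract for Account.withdraw(amount):
    //   precondition:  0 < amount <= balance
    //   postcondition: new balance == old balance - amount
    class Account {
        private int balance;
        Account(int openingBalance) { balance = openingBalance; }
        int getBalance() { return balance; }
        void withdraw(int amount) {
            if (amount <= 0 || amount > balance)
                throw new IllegalArgumentException("0 < amount <= balance required");
            balance -= amount;
        }
    }

    class AccountTest {
        @Test
        void withdraw_reducesBalanceByAmount() {
            Account account = new Account(100);
            account.withdraw(40);
            assertEquals(60, account.getBalance()); // postcondition as assertion
        }

        @Test
        void withdraw_rejectsAmountAboveBalance() {
            Account account = new Account(100);
            // a precondition violation should be rejected
            assertThrows(IllegalArgumentException.class, () -> account.withdraw(150));
        }
    }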
I have found DbC very handy for jumpstarting the red-green-refactor cycle because it helps in identifying unit tests to start with. With DbC I start thinking about pre-conditions that the object being TDD-ed must handle and each pre-condition might represent a failing unit test to start a red-green-refactor cycle. At some point I switch to start the cycle with a failing unit test for a post-condition, and then just keep the TDD flow going. I have tried this approach with newcomers to TDD and it really works in kickstarting the TDD mindset.
In summary, think of DbC as an effective way to identify key behavioral unit tests. DbC helps in analyzing inputs (pre-conditions) and outputs (post-conditions), which are the two things that we need to control (inputs) and observe (outputs) to write testable software (a similar aim to TDD's).