Multiple JUnit Test Cases at a time - junit

I want to run a few JUnit test cases at a time using the JUnit Sampler in JMeter, and I want to get the result of each of them individually.
Eg:
JunitTestCase1
JunitTestCase2
JunitTestCase3
I want to hit all of them in parallel and get their results (load time) individually.

To execute several requests at exactly the same moment, use a Synchronizing Timer.
To get individual results, use a JMeter Listener of your choice; the most informative (and also the most resource-consuming) is View Results Tree.

You can add a separate Thread Group for each test case, then add a JUnit Sampler to each Thread Group. You will be able to select a different test in the JUnit Sampler dropdowns for each one.
To get their load times individually you can add a View Results Tree or Response Times Over Time listener to each Thread Group. Bear in mind this will be the total time, including opening the browser (if you are opening one). If you don't want that included in the times, you will need to add some sort of logging to your test code to collect the metrics.
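For reference, a minimal sketch of a JUnit 4 test class that the JUnit Sampler can run (the class and method names are made up to match the example above). The compiled class has to be on JMeter's classpath, typically as a jar dropped into jmeter/lib/junit, and it needs a public no-argument constructor:

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class JunitTestCase1 {

    // The JUnit Sampler instantiates the class itself,
    // so keep a public no-argument constructor.
    public JunitTestCase1() {
    }

    @Test
    public void checkSomething() {
        // Whatever this method does is what the sampler measures and
        // reports as this test case's load time.
        assertEquals(4, 2 + 2);
    }
}

With three such classes, each Thread Group's JUnit Sampler points at a different one, so the listeners report each test case's timings separately.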

Is there a way to implement true multi-threading within a puppeteer application?

This may be a really silly question. I admit I'm a bit naive to the chrome web engine, and JS v8 engine's capabilities.
But say I'm running a Puppeteer application which is scraping URLs from <img> tags and pushing them to an array called img2arr.
Then, I have a local file: var img1 = './image.jpg'.
Last, I have a function compare(img1, img2arr) which takes both of these as arguments and, using a library such as blink-diff or Jimp, analyzes and compares img1 against each image in img2arr. This all happens in a .forEach() or .map() loop, which works but can be slow as img2arr grows.
Say it contains 500 image URLs - is there a way to use service workers, a specific Node.js library, or anything else to ensure that my image looping and comparing logic all happens across multiple threads?
For instance, 200 loops comparing two 12KB images take 7 seconds, but with my blazing fast 12-core processor couldn't it take less than 1?
Sorry for my obvious naivety!
There are a few possible ways:
1. Use spawn to run your script separately on every core of your machine.
2. Use Node.js worker threads; the official documentation explains how they work, with examples.
Whichever implementation you choose, the main part is to collect all the data up to this point:
"Last, I have a function compare(img1, img2arr) which takes ...."
Then split your img2arr array into chunks; any reasonable chunking method will do.
After splitting, pass each chunk to a separate process and wait for the first process to find a similar image.
When a process finds a similar image, you can kill all the other processes from your master process.
So, the full process would be:
1. Collect the images to compare.
2. Split the images to compare into chunks.
3. Move the comparison logic into a separate file and run a thread/process on that file.
3a. Send the chunks to the available processes.
3b. Wait for the first success response from a process and kill all the other ones.

Detecting JUnit "tests" that never assert anything

We used to have a technical director who liked to contribute code and was also very enthusiastic about adding unit tests. Unfortunately his preferred style of test was to produce some output to screen and visually check the result.
Given that we have a large bank of tests, are there any tools or techniques I could use to identify the tests that never assert?
Since that's a one-time operation I would:
scan all test methods (easy: get the JUnit report XML)
use an IDE or other tool to search for references to Assert.*, and export the result as a list of methods
awk/perl/excel the results to find mismatches
Edit: another option is to just look for references to System.out, or whatever his preferred way of outputting stuff was; most tests won't have that.
Not sure of a tool, but the thought that comes to mind is two-fold.
Create a TestRule class that keeps track of the number of asserts per test (use static counter, clear counter at beginning of test, assert that it is not 0 at end of test).
Wrap the Assert class in your own proxy that increments the TestRule's counter each time it is called.
If your Assert class is also called Assert, you would only need to update the imports and add the Rule to the tests. The mechanism described above is not thread-safe, so if you have multiple tests running concurrently you will get incorrect results.
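A minimal sketch of those two pieces, as two small classes (each in its own file); the class names are made up, and the ThreadLocal makes the counter per-thread, which sidesteps the thread-safety caveat as long as each test runs on a single thread:

// Hypothetical wrapper that tests import instead of org.junit.Assert;
// every delegated call bumps a per-thread counter.
public final class CountingAssert {
    static final ThreadLocal<Integer> COUNT = ThreadLocal.withInitial(() -> 0);

    public static void assertTrue(boolean condition) {
        COUNT.set(COUNT.get() + 1);
        org.junit.Assert.assertTrue(condition);
    }

    public static void assertEquals(Object expected, Object actual) {
        COUNT.set(COUNT.get() + 1);
        org.junit.Assert.assertEquals(expected, actual);
    }
    // ...delegate the remaining Assert methods in the same way.
}

import org.junit.rules.TestRule;
import org.junit.runner.Description;
import org.junit.runners.model.Statement;

// Rule that fails any test which never went through the counting wrapper.
// Usage in a test class: @Rule public AssertCountRule asserts = new AssertCountRule();
public class AssertCountRule implements TestRule {
    @Override
    public Statement apply(final Statement base, final Description description) {
        return new Statement() {
            @Override
            public void evaluate() throws Throwable {
                CountingAssert.COUNT.set(0);   // reset before the test body runs
                base.evaluate();               // run the actual @Test method
                if (CountingAssert.COUNT.get() == 0) {
                    throw new AssertionError(
                            description.getDisplayName() + " never called an assert");
                }
            }
        };
    }
}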
If those tests are the only ones that produce output, an automated bulk replacement of System.out.println( with org.junit.Assert.fail("Fix test: " + would highlight exactly those tests that aren't pulling their weight. This technique would make it easy to inspect those tests in an IDE after a run and decide whether to fix or delete them; it also gives a clear indication of progress.
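To illustrate with a made-up print statement, the replacement keeps the code compiling while turning the silent check into a loud failure:

// Before: the "test" only prints and always passes.
System.out.println("order total = " + order.getTotal());

// After the bulk replacement: it fails until someone writes a real assertion.
org.junit.Assert.fail("Fix test: " + "order total = " + order.getTotal());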

How can I check for downstream components in an SSIS custom transform?

I am working on a custom SSIS component that has 4 asynchronous outputs. It works just fine, but now I have a user request for an enhancement and I am not sure how to handle it. They want to use the component in another context where only 2 of the 4 outputs will be well defined. I foolishly said that this would be trivial for me to support; I planned to just check whether the two "undefined" streams were even connected, and if not, skip that part of the processing.
My problem is that I cannot figure out whether an output is connected at run time. I had hoped that the output pipeline or output buffer would be missing, but that doesn't seem to be the case; even when they are not hooked up, the output and buffer are present.
Does anyone know where I should be looking to see if an output has a downstream consumer or not?
Thanks!
Edit: I was never able to figure out how to do this reliably, so I ended up making this behaviour configurable by the user. It is not automatic as I had hoped, but the differences I found between the BIDS environment and the DTExec environment pushed me to the conclusion that a component probably should not be making assumptions about the component graph it is embedded in.

Drools testing with JUnit

What is the best practice for testing Drools rules with JUnit?
Until now we used JUnit with DbUnit to test rules. We had sample data that was put into HSQLDB. We had a couple of rule packages, and by the end of the project it became very hard to craft test input that exercises certain rules without firing others.
So the exact question is: how can I limit JUnit tests to one or more specific rules?
Personally I use unit tests to test isolated rules. I don't think there is anything too wrong with it, as long as you don't fall into a false sense of security that your knowledge base is working because isolated rules are working. Testing the entire knowledge base is more important.
You can write the isolating tests with AgendaFilter and StatelessSession
StatelessSession session = ruleBase.newStatelessSession();
// Only rules whose names match the regexp are allowed to fire.
session.setAgendaFilter( new RuleNameMatches("<regexp to your rule name here>") );
List data = new ArrayList();
... // create your test data here (probably built from some external file)
StatelessSessionResult result = session.executeWithResults( data );
// check your results here.
Code source: http://blog.athico.com/2007/07/my-rules-dont-work-as-expected-what-can.html
Do not attempt to limit rule execution to a single rule for a test. Unlike OO classes, single rules are not independent of other rules, so it does not make sense to test a rule in isolation in the same way that you would test a single class using a unit test. In other words, to test a single rule, test that it has the right effect in combination with the other rules.
Instead, run tests with a small amount of data on all of your rules, i.e. with a minimal number of facts in the rule session, and test the results and perhaps that a particular rule was fired. The result is not actually that much different from what you have in mind, because a minimal set of test data might only activate one or two rules.
As for the sample data, I prefer to use static data and define minimal test data for each test. There are various ways of doing this, but programmatically creating fact objects in Java might be good enough.
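A minimal sketch of that idea, reusing the classic StatelessSession API from the snippet above; Order and its methods are made-up example facts, and ruleBase is assumed to be built elsewhere in the test:

// Minimal, programmatically created test data: one made-up fact.
Order order = new Order("customer-17", 120.00);

// Fire the whole (small) knowledge base against that single fact.
StatelessSession session = ruleBase.newStatelessSession();
session.execute(order);

// Assert the business effect rather than which rules fired.
assertTrue(order.hasDiscount());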
I created a simple library that helps to write unit tests for Drools. One of its features is exactly what you need: declaring the particular DRL files you want to use for your unit test:
@RunWith(DroolsJUnitRunner.class)
@DroolsFiles(value = "helloworld.drl", location = "/drl/")
public class AppTest {

    @DroolsSession
    StatefulSession session;

    @Test
    public void should_set_discount() {
        Purchase purchase = new Purchase(new Customer(17));
        session.insert(purchase);
        session.fireAllRules();
        assertTrue(purchase.getTicket().hasDiscount());
    }
}
For more details, have a look at the blog post: https://web.archive.org/web/20140612080518/http://maciejwalkowiak.pl/blog/2013/11/24/jboss-drools-unit-testing-with-junit-drools/
A unit test with DBUnit doesn't really work. An integration test with DBUnit does. Here's why:
- A unit test should be fast.
-- A DBUnit database restore is slow; it easily takes 30 seconds.
-- A real-world application has many not-null columns, so data isolated for a single feature still easily uses half the tables of the database.
- A unit test should be isolated.
-- Restoring the DBUnit database for every test to keep them isolated has drawbacks:
--- Running all the tests takes hours (especially as the application grows), so no one runs them, so they constantly break, so they get disabled, so there is no testing, so your application is full of bugs.
--- Creating half a database for every unit test is a lot of creation work and a lot of maintenance work, can easily become invalid (with regard to validation that database schemas don't support, see Hibernate Validator) and usually does a bad job of representing reality.
Instead, write integration tests with DBUnit:
- One DBUnit dataset, the same for all tests. Load it only once (even if you run 500 tests).
-- Wrap each test in a transaction and roll back the database after every test. Most methods use propagation REQUIRED anyway. Mark the test data dirty (so it is reset before the next test, if there is one) only when propagation is REQUIRES_NEW.
- Fill that database with corner cases. Don't add more common cases than are strictly needed to test your business rules, so usually only 2 common cases (to be able to test "one to many").
- Write future-proof tests:
-- Don't test the number of activated rules or the number of inserted facts.
-- Instead, test if a certain inserted fact is present in the result. Filter the result on a certain property set to X (different from the common value of that property) and test the number of inserted facts with that property set to X.
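As a sketch of the "wrap each test in a transaction and roll it back" part, assuming the integration tests run with Spring's test support (the context file, service and method names are made up):

import static org.junit.Assert.assertTrue;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
import org.springframework.transaction.annotation.Transactional;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration("classpath:test-context.xml")   // made-up config file name
@Transactional   // each test runs in a transaction that Spring rolls back afterwards
public class DiscountRulesIT {

    @Autowired
    private DiscountRuleService discountRuleService;   // hypothetical service that fires the rules

    @Test
    public void goldCustomerGetsDiscount() {
        // The shared DBUnit dataset was loaded once for the whole suite;
        // this test only reads and writes inside its own rolled-back transaction.
        boolean discounted = discountRuleService.applyDiscounts("gold-customer-1");
        assertTrue(discounted);
    }
}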
Unit testing is about taking a minimal piece of code and testing all the possible use cases that define its specification. With integration tests your goal is not all possible use cases but the integration of several units that work together. Do the same with rules: segregate rules by business meaning and purpose. The simplest "unit under test" could be a file with a single rule, or a high-cohesion set of rules plus whatever it requires to work (if anything), such as a common DSL definition file and a decision table (see the loading sketch after this answer). For an integration test you could take a meaningful subset, or all the rules of the system.
With this approach you'll have many isolated unit tests and a few integration tests with a limited amount of common input data to reproduce and test common scenarios. Adding new rules will not impact most of the unit tests, only a few integration tests, and will show how the new rules affect the common data flow.
Consider a JUnit testing library that is suitable for this approach.
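One way to keep the "unit under test" down to a single rule file, sketched with the same classic RuleBase API as the earlier snippet (the .drl path is a made-up example; PackageBuilder, RuleBaseFactory and RuleBase come from the classic org.drools packages):

// Builds a knowledge base from exactly one rule file, so a unit test
// only exercises the rules that this file contains.
private RuleBase loadRuleBase(String drlClasspathResource) throws Exception {
    PackageBuilder builder = new PackageBuilder();
    builder.addPackageFromDrl(new InputStreamReader(
            getClass().getResourceAsStream(drlClasspathResource)));

    RuleBase ruleBase = RuleBaseFactory.newRuleBase();
    ruleBase.addPackage(builder.getPackage());
    return ruleBase;
}

// In a test:
// RuleBase ruleBase = loadRuleBase("/rules/discount.drl");   // made-up path
// StatelessSession session = ruleBase.newStatelessSession();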

Hudson - save artifacts only when less than 90% passes

I am new at this and I was wondering how I can set it up so that the artifacts are saved only if less than 90% of the tests have passed.
Any idea how I can do this?
thanks
This is not currently possible with Hudson. What is the motivation to avoid archiving artifacts on every build?
How about a rather simple workaround: create a post-build step (or an additional build step) that calls your tests from the command line. Be sure to capture all errors so Hudson doesn't count them as a failure. Then evaluate your condition and set the error level accordingly (a minimal sketch of such a pass-rate check follows this answer). In addition, you need to save the reports (probably outside Hudson) before you set the error level, so that they are available even (or only) when the build fails.
My assumption here is that it is OK not to run the tests when building the app fails. However, you can always separate building and testing into two jobs.
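As a hedged sketch of the "evaluate your condition and set the error level" step, assuming the JUnit XML reports land in a surefire-style directory (the default directory and the class name are examples): the build step would run this after the tests and react to its exit code.

import java.io.File;

import javax.xml.parsers.DocumentBuilderFactory;

import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

// Sums the <testsuite> counters from JUnit XML reports and exits non-zero
// when fewer than 90% of the tests passed, so the calling build step can
// decide whether to archive the artifacts.
public class PassRateCheck {
    public static void main(String[] args) throws Exception {
        File reportDir = new File(args.length > 0 ? args[0] : "target/surefire-reports");
        File[] reports = reportDir.listFiles((dir, name) -> name.endsWith(".xml"));
        int total = 0;
        int failed = 0;

        for (File report : reports == null ? new File[0] : reports) {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder().parse(report);
            NodeList suites = doc.getElementsByTagName("testsuite");
            for (int i = 0; i < suites.getLength(); i++) {
                Element suite = (Element) suites.item(i);
                total  += Integer.parseInt(suite.getAttribute("tests"));
                failed += Integer.parseInt(suite.getAttribute("failures"))
                        + Integer.parseInt(suite.getAttribute("errors"));
            }
        }

        double passRate = total == 0 ? 0.0 : (total - failed) / (double) total;
        System.out.println("Pass rate: " + Math.round(passRate * 100) + "%");
        System.exit(passRate < 0.90 ? 1 : 0);
    }
}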