Currently I need to test a complex query that joins multiple tables with lots of filtering, such as whether the object is deleted, suspended, or private, and whether the owner of the object is suspended, etc.
Should I prepare fixtures for every combination of scenarios and test that the query outputs the correct result for each fixture?
Or should I break the testing into smaller parts, where each part tests a specific filtering rule: one unit test for correct filtering based on object status, say, and another unit test for correct filtering based on owner status. Combining the two unit tests would then cover the whole query.
Does anyone have any suggestions? Or is there a better way to refactor such a query or its methods to make them easier to test?
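One way to reduce the fixture combinatorics is to extract each filtering rule into its own predicate and unit test the predicates independently; only the composition then needs a combined check. A minimal sketch of the idea in plain Java (the Item fields and predicate names are hypothetical, not taken from your schema):

```java
import java.util.function.Predicate;

// Hypothetical in-memory stand-in for the row being filtered; the real
// query's columns are unknown, so these flags are assumptions.
class Item {
    final boolean deleted, suspended, isPrivate, ownerSuspended;
    Item(boolean deleted, boolean suspended, boolean isPrivate, boolean ownerSuspended) {
        this.deleted = deleted;
        this.suspended = suspended;
        this.isPrivate = isPrivate;
        this.ownerSuspended = ownerSuspended;
    }
}

public class VisibilityFilters {
    // Each filtering rule is its own predicate, so each can be unit tested alone.
    static final Predicate<Item> objectVisible = i -> !i.deleted && !i.suspended && !i.isPrivate;
    static final Predicate<Item> ownerActive   = i -> !i.ownerSuspended;

    // The full filter is just the conjunction of the parts; if every part is
    // correct, only the composition needs a (much smaller) integration check.
    static final Predicate<Item> visible = objectVisible.and(ownerActive);

    static void check(boolean ok) { if (!ok) throw new AssertionError(); }

    public static void main(String[] args) {
        check(objectVisible.test(new Item(false, false, false, true))); // object rule alone passes
        check(!ownerActive.test(new Item(false, false, false, true)));  // owner rule alone fails
        check(!visible.test(new Item(false, false, false, true)));      // so the composition fails
        check(visible.test(new Item(false, false, false, false)));
    }
}
```

The same idea carries over to the SQL level: keep each filter expression in one named place (a view, a reusable query fragment, or a method), test each in isolation with a small fixture, and then test the joined query with only a handful of end-to-end fixtures instead of every combination.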
My feature has 4-5 scenarios, all of which depend on whether a certain element, for example a username or password field, is present on the page.
How can I skip/ignore the entire feature based on this condition? Below is something like what I have written for skipping a single scenario:
public void skipScenario(String message, Scenario scenario) {
    if (username == null) {
        Assume.assumeTrue(false);
    }
}
Just like the Scenario interface, we have a Feature class with several implementations; I don't know which one has a function to skip it.
From the Cucumber docs: "Each scenario should test one thing and fail for one particular reason. This means there should be no reason to skip steps.
If there does seem to be a reason you’d want to skip steps conditionally, you probably have an anti-pattern. For instance, you might be trying to test multiple things in one scenario or you might not be in control of the state of your test environment or test data.
The best thing to do here is to fix the root cause."
I have tables table_a and table_b in my database, and they are mapped in Slick with TableQuery objects. I need to copy a restricted set of data from table_a to table_b.
Let the table query objects be tableQueryA and tableQueryB. The logic for filtering and copying the data is complex, so I'm thinking of pulling the table query results into their Scala-collection equivalents inside a for/yield and treating them as normal collections. Everything happens in one transaction. The code looks something like this:
for {
  collA <- tableQueryA.filter(.....something....).result
  collB <- tableQueryB.filter(.....somethingElse.....).result
  // ...... do something with collA and collB
} yield ...something
Is there any harm in doing it this way, i.e. handling the results as Scala collections and processing them?
I am using Slick 3.2.
By running two separate tableQueryX.filter(...).result calls, you'll be executing two separate queries against the database. You could replace them with one query that joins the two tables.
It's hard to say which approach is better in terms of performance, as it depends on the number of filter/where clauses and on which indexes the database can use to satisfy them. If you need top-notch performance, try both approaches and pick the fastest one.
If both of your queries yield a large amount of data, you also need to consider your application's memory usage, because all of the data is loaded before the Scala collection API can be used.
I don't see any harm as long as the data volume is small, but it is better to filter the data at the DB level to avoid potential out-of-memory errors.
TL;DR: How do I SELECT from a MySQL table with Node.js without slurping the results? (If you say SELECT COUNT(id) and then use OFFSET and LIMIT: no, that's silly. I want to hold the query result object and poke at it afterwards, not hammer the database to death by re-SELECTing every few seconds. Some of these queries will take several seconds to run!)
What: I want to run a MySQL query in a Node.js service and basically attach the MYSQL_RES to the client's session context.
Why: The queries in question may yield tens of thousands of results. I'll only have one demanding client at a time, and the UI will only display ~30 results at a time in a scrollable/flickable list view.
How: In Qt it's standard practice to use a QSqlModel in circumstances like this. Basically, if a list view has this type of model attached, it only "reads" the results pertinent to the visible area; as the view is scrolled down, it populates itself with more results.
But: Node.js's asynchronous style is lovely, but I have yet to find a way to get only the result object (sans rows), together with a method for arbitrarily picking out result rows or a span thereof.
I want to compare the test results of two jobs in Jenkins. In my case these jobs are not consecutive, so the usual test results view of the job is not good enough.
Is there any way to get this view? Or is it possible to write such a plugin myself?
I have a similar, although not exactly the same, setup. In your circumstances what I do would work like this: job A stores its test results (say, JUnit XMLs) keyed by its build ID and kicks off job C via the Parameterized Trigger Plugin, passing it the location of the test results. Job C can then either simply publish those tests or do some additional processing on them. Job B does the same thing as job A as far as its tests and the kicking off of job C are concerned. Then all of your results are aggregated in job C.
The additional processing that job C does may include storing the results of job A in a temporary location and then processing them later together with the results of job B. This is not automatic, but still much easier than writing a whole new plugin. You can also customize it in any way you want.
You can do it visually using the Test Results Analyzer Plugin. Under 'Test Results Analyzer' > Options you can select more than the last 10 builds, or all builds. You can expand the red/green/yellow traffic-light table for each test result, so you can see trends in test history as well as check results on individual jobs.
I understand that there are @Before and @BeforeClass, which are used to define fixtures for the @Tests. But what should I use if I need a different fixture for each @Test?
Should I define the fixture in the @Test?
Should I create a test class for each @Test?
I am asking for best practices here, since neither solution seems clean to me. With the first solution I would be testing the initialization code, and with the second solution I would break the "one test class for each class" pattern.
Tips:
1. Forget the one-test-class-per-class pattern; it has little merit. Switch to one test class per usage perspective. Within one perspective you might have multiple cases: upper boundary, lower boundary, etc. Create a different @Test for each of those in the same class.
2. Remember that JUnit will create a new instance of the test class for each @Test, so each test gets a distinct fixture (set up by the same @Before methods). If you need a dissimilar fixture, you need a different test class, because you are in a different perspective (see 1).
3. There is nothing wrong with tweaking the fixture for a particular test, but you should try to keep the test clean so that it tells a story. This story should be particularly clear when the test fails, hence the different, well-named @Test for each case (see 1).
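As a sketch of the "one test class per perspective" tip, here is a stdlib-only illustration with a well-named case per boundary. Clamper is a made-up system under test; with JUnit each method would carry @Test, and the main method would be unnecessary:

```java
// Hypothetical SUT: clamps a value into [0, 100].
class Clamper {
    static int clamp(int v) { return Math.max(0, Math.min(100, v)); }
}

// One test class for the "range limits" perspective. Each boundary case is a
// separate, well-named method, so a failure immediately tells its story.
public class ClamperRangeTest {
    static void upperBoundaryIsInclusive()   { check(Clamper.clamp(100) == 100); }
    static void aboveUpperBoundaryIsClamped(){ check(Clamper.clamp(101) == 100); }
    static void lowerBoundaryIsInclusive()   { check(Clamper.clamp(0) == 0); }
    static void belowLowerBoundaryIsClamped(){ check(Clamper.clamp(-1) == 0); }

    static void check(boolean ok) { if (!ok) throw new AssertionError(); }

    public static void main(String[] args) {
        upperBoundaryIsInclusive();
        aboveUpperBoundaryIsClamped();
        lowerBoundaryIsInclusive();
        belowLowerBoundaryIsClamped();
    }
}
```

A "performance" or "concurrency" perspective on the same Clamper would live in its own test class, with its own fixture, rather than being squeezed into this one.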
I would suggest creating a separate class for each of the different fixtures you need. If you need two different fixtures, just create two different classes (give them convenient names). But I would think twice about that, in particular about how the fixtures differ and why. Maybe you are on your way to a kind of integration test instead of a unit test?
If you are positive that your fixture is unique to a single test, then it belongs in the @Test method. This is not typical, though. It could be that only part of it is unique, or that you didn't parametrize/extract it right; typically you will share a lot of the same data between tests.
Ultimately, the fixture is part of the test. Placing the fixture in @Before was adopted as an xUnit pattern because tests always:
1. prepare test data/mocks
2. perform operations with the SUT
3. validate/assert state/behavior
4. destroy test data/mocks
and steps 1 (@Before) and 4 (@After) are reused a lot (at least partially) across related tests. Since xUnit is very serious about test independence, it offers fixture methods to guarantee that these steps always run and that test data is created and destroyed properly.
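The lifecycle above can be made concrete with a tiny stdlib-only harness that mimics what an xUnit runner does: a fresh test-class instance per test method, with setUp and tearDown guaranteed to run around each one. All names here are made up for illustration; JUnit does this for you via @Before/@After:

```java
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

// A toy test class: the fixture is built in setUp and torn down in tearDown,
// and each test method sees its own fresh copy.
class SampleTest {
    static List<String> log = new ArrayList<>(); // records the lifecycle order
    List<String> fixture;                        // per-instance fixture state

    void setUp()    { fixture = new ArrayList<>(List.of("seed")); log.add("setUp"); }
    void tearDown() { fixture = null; log.add("tearDown"); }

    void testAdd()    { fixture.add("x"); log.add("testAdd:" + fixture.size()); }
    void testRemove() { fixture.remove("seed"); log.add("testRemove:" + fixture.size()); }
}

public class MiniRunner {
    // Mimics the xUnit lifecycle: a NEW instance per test method, with
    // setUp (step 1) and tearDown (step 4) guaranteed around each test.
    public static void main(String[] args) throws Exception {
        for (String name : new String[]{"testAdd", "testRemove"}) {
            SampleTest instance = new SampleTest();       // fresh fixture holder
            instance.setUp();                             // step 1: prepare
            try {
                Method m = SampleTest.class.getDeclaredMethod(name);
                m.invoke(instance);                       // steps 2-3: act + assert
            } finally {
                instance.tearDown();                      // step 4: destroy, always runs
            }
        }
        System.out.println(String.join(",", SampleTest.log));
    }
}
```

Note how testRemove still sees the pristine one-element fixture even though testAdd mutated its own copy first; that per-test isolation is exactly why the fixture lives in @Before rather than in shared state.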