I have test cases defined in an Excel sheet. I am reading a string from this sheet (my expected result) and comparing it to a result I read from a database (my actual result). I then use AssertEquals(expectedResult, actualResult), which prints any errors to a log file (I'm using log4j), e.g. I get java.lang.AssertionError: Different output expected:<10> but was:<7> as a result.
I now need to write that result into the Excel sheet (the one that defines the test cases). If only AssertEquals returned a String with the AssertionError text, that would be great, as I could just write it straight to my Excel sheet. Since it returns void, though, I got stuck.
Is there a way I could read the AssertionError without parsing the log file?
Thanks.
I think you're using JUnit incorrectly here. This is why:
It's assertEquals, not AssertEquals ( ;) )
You shouldn't need to log. You should just let the assertions do their job. If it's all green then you're good and you don't need to check a log. If you get blue or red (Eclipse colours :)) then you have problems to look at. Blue is a failure, which means that your assertions are wrong, for example you get 7 but expect 10. Red means error: you have a null pointer or some other exception being thrown while the test is running.
You shouldn't need to read from an Excel file or database for unit tests. If you really need to coordinate with other systems then you should try to stub or mock them. With a unit test you should focus on testing the method in code.
If you are bootstrapping on JUnit to try and compare an Excel sheet and a database, then I would just export the database table to Excel as well and then do a comparison between columns in Excel.
Reading from/writing to files is not really what tests should be doing. The input for the tests should be defined in the test, not in an external file that can change: this can introduce false negatives or, even worse, false positives (making your tests effectively useless while also giving false confidence that everything is ok because the tests are green).
Given your comment (a loop with 10k different parameters coming from a file), I would recommend converting this Excel file into a JUnit Parameterized test. You may want to put the parameter array definition in another class, because 10k lines is quite a lot.
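A rough sketch of what that could look like with JUnit 4's Parameterized runner; the inline data and class name below are just placeholders for whatever you currently load from the sheet:

import static org.junit.Assert.assertEquals;

import java.util.Arrays;
import java.util.Collection;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class ExpectedVsActualTest {

    // In practice this array (or a separate class providing it) replaces the Excel sheet.
    @Parameters
    public static Collection<Object[]> data() {
        return Arrays.asList(new Object[][] {
                { "10", "10" },
                { "7", "7" }
        });
    }

    private final String expected;
    private final String actual;

    public ExpectedVsActualTest(String expected, String actual) {
        this.expected = expected;
        this.actual = actual;
    }

    @Test
    public void actualMatchesExpected() {
        assertEquals(expected, actual);
    }
}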
If it is some corporate bureaucracy, and you need to have this Excel file, then it makes sense not to write a classic "test". I would recommend just a main method that does the job: reads the file, runs the code, checks the output using a simple if (output.equals(expected)) and then writes the result back to the file.
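If you go that route, something along these lines is enough. The readTestCases() and writeResult() methods here are hypothetical placeholders for the Excel/database plumbing (e.g. Apache POI and JDBC):

import java.util.Arrays;
import java.util.List;

public class ExcelCheckRunner {

    public static void main(String[] args) {
        List<String[]> testCases = readTestCases();   // each entry: { expected, actual }
        for (int i = 0; i < testCases.size(); i++) {
            String expected = testCases.get(i)[0];
            String actual = testCases.get(i)[1];
            String verdict = expected.equals(actual)
                    ? "OK"
                    : "expected:<" + expected + "> but was:<" + actual + ">";
            writeResult(i, verdict);
        }
    }

    // Placeholder: in practice, read the sheet (e.g. with Apache POI) and run the database query.
    private static List<String[]> readTestCases() {
        return Arrays.asList(new String[] { "10", "7" }, new String[] { "5", "5" });
    }

    // Placeholder: in practice, write the verdict back into the result column of the sheet.
    private static void writeResult(int rowIndex, String verdict) {
        System.out.println("row " + rowIndex + ": " + verdict);
    }
}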
Wrap your assertEquals(expectedResult, actualResult) call in a try/catch and handle the error in the catch block:

try {
    assertEquals(expectedResult, actualResult);
} catch (AssertionError e) {
    // deal with e.getMessage(), etc.
}
But it's probably not a good idea, for a few reasons.
Also, try googling something like "soft assert".
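For example, plain JUnit 4 ships an ErrorCollector rule that behaves like a soft assert: it records each failure and keeps going instead of stopping at the first mismatch. A minimal sketch (the hard-coded values and names just stand in for your sheet and database reads):

import static org.hamcrest.CoreMatchers.equalTo;

import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.ErrorCollector;

public class SoftAssertExampleTest {

    @Rule
    public ErrorCollector collector = new ErrorCollector();

    @Test
    public void allRowsMatch() {
        // Each checkThat() records a failure (if any) and continues,
        // so one run reports every mismatching row at once.
        collector.checkThat("row 1", "7", equalTo("10"));   // recorded as a failure
        collector.checkThat("row 2", "5", equalTo("5"));    // passes
    }
}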
The documentation for assertEquals is pretty clear about what the method does:
Asserts that two objects are equal. If they are not, an AssertionError
without a message is thrown.
You need to wrap the assertion in a try-catch block and do your logging in the exception handler. You will need to build your own message using the information from the specific test case, but that is what you asked for.
Note:
If expected and actual are null, they are considered equal.
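A minimal sketch of that idea, assuming log4j; the ResultWriter interface is a hypothetical stand-in for whatever you use to write back into the Excel sheet (e.g. Apache POI):

import static org.junit.Assert.assertEquals;

import org.apache.log4j.Logger;

public class ResultChecker {

    private static final Logger LOG = Logger.getLogger(ResultChecker.class);

    // Hypothetical hook for writing the outcome back into the sheet.
    public interface ResultWriter {
        void write(int rowIndex, String text);
    }

    private final ResultWriter writer;

    public ResultChecker(ResultWriter writer) {
        this.writer = writer;
    }

    public void checkRow(int rowIndex, String expectedResult, String actualResult) {
        try {
            assertEquals("Different output", expectedResult, actualResult);
            writer.write(rowIndex, "PASS");
        } catch (AssertionError e) {
            // e.getMessage() holds the same text that currently ends up in the log file,
            // e.g. "Different output expected:<10> but was:<7>".
            LOG.error(e.getMessage());
            writer.write(rowIndex, e.getMessage());
        }
    }
}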
I am working on migrating 3.0 code to the new 4.2 framework. I am facing a few difficulties:
How do I do CDR-level deduplication in the new 4.2 framework? (Note: table deduplication is already done.)
Where should I implement PostDedupProcessor - context or chainsink custom? In either case, do I need to remove duplicate hashcodes from the list or just reject the tuples? Here I am also updating columns for a few tuples.
My file is not moving into the archive. A temporary output file is generated, but it is empty and outside the load directory. What could be the possible reasons? I have thoroughly checked the config parameters, and after adding logs it seems the correct output is being sent from the transformer custom, so I don't know where it is stuck. I printed the TableRowGenerator stream to the logs (at the end of DataProcessor).
1. and 2.:
You need to select the type of deduplication. There is not a big difference whether you choose "table-" or "cdr-level" deduplication.
The ite.businessLogic.transformation.outputType parameter affects this. There is only one dedup; you cannot have both.
Select recordStream for "cdr-level deduplication" and do the transformation to table-row format (e.g. if you want to use the TableFileWriter) in xxx.chainsink.custom::PostContextDataProcessor.
In xxx.chainsink.custom::PostContextDataProcessor you need to add custom code for duplicate handling: reject (discard) the tuples, set special column values, or write them to different target tables.
3.:
Possible reasons could be:
Missing forwarding of window punctuations or the statistic tuple
An error in the BloomFilter configuration; you would see this easily because the PE is down and the error log gives hints about wrong sha2 functions being used
To troubleshoot your ITE application, I recommend enabling the following debug sinks if checking the StreamsStudio live graph is not sufficient:
ite.businessLogic.transformation.debug=on
ite.businessLogic.group.debug=on
ite.businessLogic.sink.debug=on
Run a test with a single input file only and check the flow of your record and statistic tuples. Debug sinks also write punctuation markers to the debug files.
I'm trying to transfer a parameter from RawRequest using SoapUI, but when I read it, the value changes.
The parameter is a request ID (unique for every test). It is requested by every test case from Custom Properties, where it is stored as follows:
${=((System.currentTimeMillis().toString()).subSequence(4, (System.currentTimeMillis().toString()).length())).replaceFirst("0", "")}
The above generates a number like this, for example: 17879164.
The problem starts when I try to transfer it using either the built-in feature or a Groovy script. Both read the parameter incorrectly.
The parameter displays as one value in the RawRequest window, but both the Transfer window in SoapUI and the Groovy script read it as a different value (screenshots omitted).
Can anyone explain why this value, despite being shown in the SoapUI RawRequest window as 17879164, is then read as 17879178 by two different methods?
I think the clue might be that when I use a "flat" number as the reqId rather than the generated one, both methods work fine and return the correct number. But since this is the RawRequest, I understood it to be set once and for all, so what is shown in the window and what is read should be the same.
What you are seeing is a "feature" in SoapUI. Your transfer step will transfer the code, which will then get evaluated again, resulting in a different value.
What you need to do is:
Create a test case property.
Set the property to a value from the test case setup script. So in your case, something like testCase.setPropertyValue("your_property", ((System.currentTimeMillis().toString()).subSequence(4, (System.currentTimeMillis().toString()).length())).replaceFirst("0", ""))
Anywhere in your test, refer to the test case property as ${#TestCase#your_property}, which is a fixed value at this point, so it will always be the same.
I have a function which is called from different components, .cfm pages, or remotely. It returns the results of a query.
Sometimes the response from this function is manually inspected - a person may want to see the ID of a specific record so they can use it elsewhere.
The provided return formats (wddx, json, plain) aren't very easily readable for a layman.
I'd love to be able to create a new return format, dump, where the result is first writeDump()ed and then returned to the caller.
I know there are more complicated ways of solving this, like writing a dump function and calling it like a proxy, providing the component, function and parameters so it can call that function and return the results.
However I don't think it's worth going that far. I figured it'd be great if I could just write a new return format, because that's just... intuitive and nice, and I may also be able to use that technique to solve different problems or improve various workflows.
Is there a way to create custom function returnFormats in ColdFusion 10 or 11?
(From comments)
AFAIK, you cannot add a custom returntype to a cffunction, but take a look at OnCFCRequest. You might be able to use it to build something more generic that responds differently whenever a custom URL parameter is passed, i.e. url.returnformat=yourType. Same net effect as dumping and/or manipulating the result manually, just a little more automated.
From the comments, the return type of the function is query. That being the case, there is simply no need for a custom return format. If you want to dump the query results, do so.
queryVar = objectName.nameOfFunction(arguments);
writeDump(queryVar);
Here is my specific scenario.
I have a class QueryQueue that wraps the QueryTask class within the ArcGIS API for Flex. This enables me to easily queue up multiple query tasks for execution. Calling QueryQueue.execute() iterates through all the tasks in my queue and calls their execute method.
When all the results have been received and processed, QueryQueue dispatches the completed event. The interface to my class is very simple:
public interface IQueryQueue
{
    function get inProgress():Boolean;
    function get count():int;
    function get completed():ISignal;
    function get canceled():ISignal;
    function add(query:Query, url:String, token:Object = null):void;
    function cancel():void;
    function execute():void;
}
For the QueryQueue.execute method to be considered successful, several things must occur:
1. task.execute must be called on each query task once and only once
2. inProgress = true while the results are pending
3. inProgress = false when the results have been processed
4. completed is dispatched when the results have been processed
5. canceled is never dispatched
6. The processing done within the queue correctly processes and packages the query results
What I am struggling with is breaking these tests into readable, logical, and maintainable tests.
Logically I am testing one state, namely the successful execution state. This suggests one unit test that asserts #1 through #6 above are true:
[Test] public mustReturnQueryQueueEventArgsWithResultsAndNoErrorsWhenAllQueriesAreSuccessful():void
However, the name of the test is not informative as it does not describe all the things that must be true in order to be considered a passing test.
Reading up online (including here and at programmers.stackexchange.com), there is a sizable camp that asserts that unit tests should only have one assertion (as a guideline). As a result, when a test fails you know exactly what failed (e.g. inProgress not set to true, completed dispatched multiple times, etc.). You wind up with potentially a lot more (but in theory simpler and clearer) tests, like so:
[Test] public mustInvokeExecuteForEachQueryTaskWhenQueueIsNotEmpty():void
[Test] public mustBeInProgressWhenResultsArePending():void
[Test] public mustNotInProgressWhenResultsAreProcessedAndSent():void
[Test] public mustDispatchTheCompletedEventWhenAllResultsProcessed():void
[Test] public mustNeverDispatchTheCanceledEventWhenNotCanceled():void
[Test] public mustReturnQueryQueueEventArgsWithResultsAndNoErrorsWhenAllQueriesAreSuccessful():void
// ... and so on
This could wind up with a lot of repeated code in the tests, but that could be minimized with appropriate setup and teardown methods.
While this question is similar to other questions I am looking for an answer for this specific scenario as I think it is a good representation of a complex unit testing scenario exhibiting multiple states and behaviors that need to be verified. Many of the other questions have, unfortunately, no examples or the examples do not demonstrate complex state and behavior.
In my opinion, and there will probably be many opinions, there are a couple of things here:
If you must test so many things for one method, then it could mean your code might be doing too much in one single method (Single Responsibility Principle)
If you disagree with the above, then the next thing I would say is that what you are describing is more of an integration/acceptance test. Which allows for multiple asserts, and you have no problems there. But, keep in mind that this might need to be relegated to a separate section of tests if you are doing automated tests (safe versus unsafe tests)
And/or, yes, the preferred approach is to test each piece separately, as that is what a unit test is. The closest thing I can suggest, and this is about your tolerance for writing code just to have perfect tests, is to check an object against an object (so you would do one assert that essentially tests all of this in one; see the sketch below). However, the argument against this is that, yes, it passes the one-assert-per-test test, but you still lose expressiveness.
Ultimately, your goal should be to strive towards the ideal (one assert per unit test) by focusing on the SOLID principles, but you do need to get things done, or else there is no real point in writing software (my opinion at least :)).
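A hedged sketch of that "object against an object" approach, in Java/JUnit terms rather than Flex, assuming a hypothetical QueueState value object whose equals() compares all the fields the test cares about:

import static org.junit.Assert.assertEquals;

import java.util.Objects;

import org.junit.Test;

public class QueryQueueStateTest {

    // Hypothetical value object capturing everything the test wants to verify.
    static final class QueueState {
        final boolean inProgress;
        final int completedCount;
        final int canceledCount;

        QueueState(boolean inProgress, int completedCount, int canceledCount) {
            this.inProgress = inProgress;
            this.completedCount = completedCount;
            this.canceledCount = canceledCount;
        }

        @Override
        public boolean equals(Object o) {
            if (!(o instanceof QueueState)) return false;
            QueueState other = (QueueState) o;
            return inProgress == other.inProgress
                    && completedCount == other.completedCount
                    && canceledCount == other.canceledCount;
        }

        @Override
        public int hashCode() {
            return Objects.hash(inProgress, completedCount, canceledCount);
        }
    }

    @Test
    public void executeLeavesQueueInExpectedState() {
        // One assert covering several facts, at the cost of a less expressive failure message.
        QueueState expected = new QueueState(false, 1, 0);
        QueueState actual = new QueueState(false, 1, 0);   // would come from the queue under test
        assertEquals(expected, actual);
    }
}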
Let's focus on the tests you have identified first. All except the last one (mustReturnQueryQueueEventArgs...) are good ones, and I could immediately tell what's being tested there (and that's a very good sign, indicating they're descriptive and most likely simple).
The only problem is your last test. Note that extensive use of the words "and", "with" or "or" in a test name usually rings the problem bell. It's not very clear what it's supposed to do. "Returns correct results" comes to mind first, but one might argue that's a vague term. That holds true, it is vague. However, you'll often find that this is indeed a pretty common requirement, described in detail by the method/operation contract.
In your particular case, I'd simplify the last test to verify whether correct results are returned, and that would be all. You've already tested the states, events and the steps that lead to building the results, so there is no need to do that again.
Now, the advice in the links you provided is actually quite good, and generally I suggest sticking to it (a single assertion for one test). The question is, what does a single assertion really stand for? One line of code at the end of the test? Let's consider this simple example:
// a method which updates two fields of our custom entity, MyEntity
public void Update(MyEntity entity)
{
    entity.Name = "some name";
    entity.Value = "some value";
}
This method's contract is to perform those two operations. By success, we understand that the entity has been correctly updated. If one of them fails for some reason, the method as a unit is considered to fail. You can see where this is going: you'll either have two assertions or write a custom comparer purely for testing purposes.
Don't be tricked by "single assertion": it's not about lines of code or the number of asserts (although in the majority of tests you'll write this will indeed map 1:1), but about asserting a single unit (in the example above, the update is considered to be a unit). And a unit might in reality be multiple things that don't make any sense at all without each other.
And this is exactly what one of the questions you linked quotes (from Roy Osherove):
My guideline is usually that you test one logical CONCEPT per test. you can have multiple asserts on the same object. they will usually be the same concept being tested.
It's all about concept/responsibility; not the number of asserts.
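To make that concrete, here is a hedged JUnit-style sketch of the Update example above (a hypothetical Java mirror of MyEntity and the update method): two asserts, one logical concept.

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class UpdateTest {

    // Hypothetical Java mirror of the MyEntity/Update snippet above.
    static class MyEntity {
        String name;
        String value;
    }

    static void update(MyEntity entity) {
        entity.name = "some name";
        entity.value = "some value";
    }

    @Test
    public void updateSetsNameAndValue() {
        MyEntity entity = new MyEntity();
        update(entity);
        // Two asserts, but a single concept under test: "the entity was updated correctly".
        assertEquals("some name", entity.name);
        assertEquals("some value", entity.value);
    }
}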
I am not familiar with Flex, but I think I have good experience in unit testing, so you should know that unit testing is a philosophy. So, as a first answer: yes, you can have multiple asserts, but only if they test the same behavior. The main point in unit testing is always to keep the code very maintainable and simple; otherwise the unit test will need a unit test to test it! So my advice is: if you are new to unit testing, don't use multiple asserts, but if you have good experience with unit testing, you will know when you need to use them.
I'm rewriting a series of PHP functions as a container class. Many of these functions do a bit of processing, but in the end just echo content to STDOUT.
My question is: should I have a return value within these functions? Is there a "best practice" as far as this is concerned?
In systems that report errors primarily through exceptions, don't return a return value if there isn't a natural one.
In systems that use return values to indicate errors, it's useful to have all functions return the error code. That way, a user can simply assume that every single function returns an error code and develop a pattern to check them that they follow everywhere. Even if the function can never fail right now, return a success code. That way if a future change makes it possible to have an error, users will already be checking errors instead of implicitly silently ignoring them (and getting really confused why the system is behaving oddly).
Can the processing fail? If so, should the caller know about that? If either of these is no, then I don't see value in a return. However, if the processing can fail, and that can make a difference to the caller, then I'd suggest returning a status or error code.
Do not return a value if there is no value to return. If you have some value you need to convey to the caller, then return it but that doesn't sound like the case in this instance.
I will often "return true;" in these cases, as it provides a way to check that the function worked. I'm not sure about best practice, though.
Note that in C/C++, the output functions (including printf()) return the number of characters written, or a negative value if this fails. It may be worth investigating this further to see why it's been done like this. I confess that:
I'm not sure that writing to stdout could practically fail (unless you actively close your STDOUT stream)
I've never seen anyone collect this value, let alone do anything with it.
Note that this is distinct from writing to file streams - I'm not counting stream redirection in the shell.
To do the "correct" thing, if the point of the method is only to print the data, then it shouldn't return anything.
In practice, I often find that having such functions return the text that they've just printed can often be useful (sometimes you also want to send an error message via email or feed it to some other function).
In the end, the choice is yours. I'd say it depends on how much of a "purist" you are about such things.
You should just:
return;
In my opinion the SRP (single responsibility principle) is applicable to methods/functions as well, not only to objects. One method should do one thing: if it outputs data it shouldn't do any data processing, and if it doesn't do processing it shouldn't return data.
There is no need to return anything, or indeed to have a return statement. It's effectively a void function, and it's comprehensible enough that these have no return value. Putting in a 'return;' solely to have a return statement is noise for the sake of pedantry.