The model zoo provides a few pre-trained models for several datasets, such as PASCAL VOC 2012, Cityscapes, etc. I am trying to run it locally, and it works with the validation set because they provide code to convert the train/validation sets to TFRecord. However, I couldn't test DeepLabV3+ with the test set.
Is there any way to run with test set?
Use [inference] and loop over all images.
I used MALLET's LabeledLDA class to build a model, which I have saved in a binary file. I want to take my test data and see how well the model predicts the appropriate labels.
I can only find documentation for the unsupervised LDA here, under the "infer topics" section.
This is the command I'm using:
mallet infer-topics --inferencer fold_0_train.bin --input fold_0_test.txt
I'm getting a Java end of file exception.
How do you test the LLDA with test data?
What format should the test data be in?
Student here. In JUnit 5, what is the best way to invoke a @Nested test class multiple times, but with slightly different state each time?
I see that JUnit 5 has an (experimental) @ParameterizedTest feature that is based on the (non-experimental) @TestTemplate feature, but both of those apply to test methods only, rather than to nested test classes.
I have a heavy @Nested test class that needs to be invoked once for each value of an enum (preferably with a distinct @Tag value for each invocation), and I would prefer to avoid the "copy-and-paste" method of parameterization.
It is not currently possible to execute a test class multiple times in JUnit Jupiter.
To participate in the discussion, see the following issue: https://github.com/junit-team/junit5/issues/878
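It is not a per-class workaround, but the closest alternative currently available is a method-level @ParameterizedTest combined with @EnumSource, so each enum value gets its own invocation. A minimal sketch, assuming a hypothetical Mode enum standing in for yours:

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.EnumSource;

class HeavySetupTest {

    enum Mode { FAST, SAFE } // placeholder for your own enum

    @ParameterizedTest
    @EnumSource(Mode.class)
    void runsOnceForEachEnumValue(Mode mode) {
        // the heavy setup and assertions that would have lived in the
        // @Nested class go here, driven by the current enum value
    }
}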
I have test cases defined in an Excel sheet. I am reading a string from this sheet (my expected result) and comparing it to a result I read from a database (my actual result). I then use AssertEquals(expectedResult, actualResult), which prints any errors to a log file (I'm using log4j), e.g. I get java.lang.AssertionError: Different output expected:<10> but was:<7> as a result.
I now need to write that result into the Excel sheet (the one that defines the test cases). If only AssertEquals returned a String with the AssertionError text, that would be great, as I could just write it straight to my Excel sheet. Since it returns void, though, I'm stuck.
Is there a way I could read the AssertionError without parsing the log file?
Thanks.
I think you're using JUnit incorrectly here. This is why:
assertEquals, not AssertEquals ( ;) )
You shouldn't need to log. You should just let the assertions do their job. If it's all green then you're good and you don't need to check a log. If you get blue or red (Eclipse colours :)) then you have problems to look at. Blue is a failure, which means that your assertions are wrong; for example, you get 7 but expect 10. Red means error: you have a null pointer or some other exception that is thrown while you are running.
You shouldn't need to read from an Excel file or a database for unit tests. If you really need to coordinate with other systems then you should try to stub or mock them. With a unit test you should work on testing the method in code.
If you are bootstrapping on JUnit to try and compare an Excel sheet and a database, then I would just export the table into Excel as well and do a comparison in Excel between columns.
Reading from and writing to files is not really what tests should be doing. The input for a test should be defined in the test, not in an external file which can change: this can introduce false negatives or, even worse, false positives (making your tests effectively useless while also giving false confidence that everything is OK because the tests are green).
Given your comment (a loop with 10k different parameters coming from a file), I would recommend converting this Excel file into a JUnit Parameterized test. You may want to put the array definition in another class, because 10k lines is quite a lot.
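A minimal sketch of that conversion with JUnit 4's Parameterized runner; ReportQuery and runQuery are hypothetical placeholders for whatever currently produces the value you read from the database, and in practice the data() array would be generated from the Excel sheet:

import static org.junit.Assert.assertEquals;

import java.util.Arrays;
import java.util.Collection;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class ReportQueryTest {

    @Parameters(name = "{index}: input={0}, expected={1}")
    public static Collection<Object[]> data() {
        // one row per test case; 10k rows would go into a separate class
        return Arrays.asList(new Object[][] {
            { "case1", "10" },
            { "case2", "7" },
        });
    }

    private final String input;
    private final String expected;

    public ReportQueryTest(String input, String expected) {
        this.input = input;
        this.expected = expected;
    }

    @Test
    public void producesExpectedResult() {
        String actual = new ReportQuery().runQuery(input); // hypothetical call under test
        assertEquals(expected, actual);
    }
}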
If it is some corporate bureaucracy and you need to have this Excel file, then it makes sense not to write a classic "test". I would recommend just a main method that does the job: reads the file, runs the code, checks the output using a simple if (output.equals(expected)), and then writes back to the file.
Wrap your assertEquals(expectedResult, actualResult) in a try/catch and handle the failure in the catch block:
try {
    assertEquals(expectedResult, actualResult);
} catch (AssertionError e) {
    // deal with e.getMessage(), etc.
}
But this is not a good idea, for several reasons, I guess.
Also, try googling something like "soft assert".
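For instance, AssertJ's SoftAssertions (assuming AssertJ is on the classpath) collects failures instead of throwing at the first one, so the messages can be inspected or written somewhere before the test is failed; a minimal sketch:

import org.assertj.core.api.SoftAssertions;
import org.junit.Test;

public class SoftAssertExample {

    @Test
    public void comparesWithoutStoppingAtTheFirstFailure() {
        SoftAssertions softly = new SoftAssertions();
        softly.assertThat(7).as("value from database").isEqualTo(10); // collected, not thrown
        softly.assertThat("abc").isEqualTo("abc");                     // still executed
        // the collected failure messages could be written to the Excel sheet here
        softly.errorsCollected().forEach(e -> System.out.println(e.getMessage()));
        softly.assertAll(); // now fail the test if anything was collected
    }
}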
Documentation on assertEquals is pretty clear on what the method does:
Asserts that two objects are equal. If they are not, an AssertionError
without a message is thrown.
You need to wrap the assertion in a try-catch block and do your logging in the exception handling. You will need to build your own message using the information from the specific test case, but that is what you asked for.
Note:
If expected and actual are null, they are considered equal.
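If you go the try-catch route and want to push the message back into the spreadsheet, here is a minimal sketch assuming Apache POI is on the classpath; the file name, row and column indices are placeholders for your own layout:

import java.io.FileInputStream;
import java.io.FileOutputStream;

import org.apache.poi.ss.usermodel.Row;
import org.apache.poi.ss.usermodel.Sheet;
import org.apache.poi.ss.usermodel.Workbook;
import org.apache.poi.ss.usermodel.WorkbookFactory;

public class ExcelResultWriter {

    // writes a message into the given cell of the first sheet
    public static void writeResult(String filePath, int rowIndex, int colIndex, String message) throws Exception {
        Workbook workbook;
        try (FileInputStream in = new FileInputStream(filePath)) {
            workbook = WorkbookFactory.create(in);
        }
        Sheet sheet = workbook.getSheetAt(0);
        Row row = sheet.getRow(rowIndex);
        if (row == null) {
            row = sheet.createRow(rowIndex);
        }
        row.createCell(colIndex).setCellValue(message);
        try (FileOutputStream out = new FileOutputStream(filePath)) {
            workbook.write(out);
        }
    }
}

Used from a test it would look roughly like this:

try {
    assertEquals(expectedResult, actualResult);
    ExcelResultWriter.writeResult("testcases.xlsx", rowIndex, resultColumn, "PASS");
} catch (AssertionError e) {
    ExcelResultWriter.writeResult("testcases.xlsx", rowIndex, resultColumn, e.getMessage());
    throw e; // re-throw so the test still fails
}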
I have seen various articles about the same issue and tried a lot of solutions, but nothing is working. Kindly advise.
I am getting an error in WEKA:
"Problem Evaluating Classifier: Test and Training Set are Not
Compatible".
I am using
J48 as my algorithm
This is my Test set:
Trainset:
https://www.dropbox.com/s/fm0n1vkwc4yj8yn/train.csv
Evalset:
https://www.dropbox.com/s/2j9jgxnoxr8xjdx/Eval.csv
(I am unable to copy and paste due to long code)
I have tried "Batch Filtering" in WEKA (for the training set) but it still does not work.
EDIT: I have even converted my .csv files to .arff, but still the same issue.
EDIT2: I have made sure the headers in both CSVs match. Even then, same issue. Please help!
Please advise.
A common error in converting ".csv" files to ".arff" with Weka is when values for nominal attributes appear in a different order or not at all from dataset to dataset.
Your evaluation ".arff" file probably looks like this (skipping irrelevant data):
@relation Eval
@attribute a321 {TRUE}
Your train ".arff" file probably looks like this (skipping irrelevant data):
@relation train
@attribute a321 {FALSE}
However, both should contain all possible values for that attribute, and in the same order:
@attribute a321 {TRUE, FALSE}
You can remedy this by post-processing your ".arff" files in a text editor and changing the header so that your nominal values appear in the same order (and quantity) from file to file.
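Before editing by hand, you can confirm whether the headers really differ by loading both files through the Weka Java API; a minimal sketch, with file names as placeholders for your own files:

import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class HeaderCompatibilityCheck {

    public static void main(String[] args) throws Exception {
        Instances train = DataSource.read("train.arff");
        Instances eval = DataSource.read("Eval.arff");
        // equalHeaders() is true only when attribute names, types and the
        // order of nominal values match between the two datasets
        System.out.println("Headers compatible: " + train.equalHeaders(eval));
    }
}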
How do I divide a dataset into training and test set?
You can use the RemovePercentage filter (package weka.filters.unsupervised.instance).
In the Explorer just do the following:
training set:
Load the full dataset
select the RemovePercentage filter in the preprocess panel
set the correct percentage for the split
apply the filter
save the generated data as a new file
test set:
Load the full dataset (or just use undo to revert the changes to the dataset)
select the RemovePercentage filter if not yet selected
set the invertSelection property to true
apply the filter
save the generated data as a new file
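The same split can also be done programmatically; a minimal sketch using the Weka Java API, with file names and the 30% split as placeholder values:

import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSink;
import weka.core.converters.ConverterUtils.DataSource;
import weka.filters.Filter;
import weka.filters.unsupervised.instance.RemovePercentage;

public class TrainTestSplit {

    public static void main(String[] args) throws Exception {
        Instances full = DataSource.read("full.arff");

        // training set: remove 30% of the instances, keeping the remaining 70%
        RemovePercentage trainFilter = new RemovePercentage();
        trainFilter.setPercentage(30);
        trainFilter.setInputFormat(full);
        DataSink.write("train.arff", Filter.useFilter(full, trainFilter));

        // test set: invert the selection to keep only the removed 30%
        RemovePercentage testFilter = new RemovePercentage();
        testFilter.setPercentage(30);
        testFilter.setInvertSelection(true);
        testFilter.setInputFormat(full);
        DataSink.write("test.arff", Filter.useFilter(full, testFilter));
    }
}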
I'm new to JUnit. I want to write test cases for if conditions and loops.
Are there any guidelines or a procedure for writing test cases for if and loop conditions?
Can anyone explain with an example?
IF Age < 18 THEN
    WHILE Age <> 18 DO
        Result = Result + 1
        Age = Age + 1
    END DO
    Print “You can start driving in {Result} years”
ELSE
    Print “You can start driving now!”
ENDIF
You want one test case for each major scenario that your code is supposed to be able to handle. With an "if" statement, there are generally two cases, although you might include a third case which is the "boundary" of the two. With a loop, you might want to include a case where the loop is run multiple times, and also a case where the loop is not run at all.
In your particular example, I would write three test cases - one where the age is less than 18, one where the age is exactly 18, and one where the age is over 18. In JUnit, each test case is a separate method inside a test class. Each test method should run the code that you're testing, in the particular scenario, then assert that the result was correct.
Lastly, you need to consider what to call each test method. I strongly recommend using a sentence that indicates which scenario you're testing, and what you expect to happen. Some people like to begin their test method names with the word "test"; but my experience is that this tends to draw attention away from what CONDITION you're trying to test, and draws attention toward which particular method or function it is that you're testing, and you tend to get lower quality tests as a result. For your example, I would call the test methods something like this.
public void canStartDrivingIfAgeOver18()
public void canStartDrivingIfAgeEquals18()
public void numberOfYearsRemainingIsShownIfAgeUnder18()
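A minimal sketch of those three tests, assuming a hypothetical DrivingEligibility class whose messageFor(int age) method returns the text printed by the pseudocode above; adjust the names to your own code:

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class DrivingEligibilityTest {

    @Test
    public void canStartDrivingIfAgeOver18() {
        assertEquals("You can start driving now!", new DrivingEligibility().messageFor(21));
    }

    @Test
    public void canStartDrivingIfAgeEquals18() {
        assertEquals("You can start driving now!", new DrivingEligibility().messageFor(18));
    }

    @Test
    public void numberOfYearsRemainingIsShownIfAgeUnder18() {
        assertEquals("You can start driving in 3 years", new DrivingEligibility().messageFor(15));
    }
}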
From my understanding of writing JUnit tests in Java, the convention is to structure the source code into separate blocks (methods), and to pass values as arguments to the function from the test cases, so that the values step through the block statements and the test cases pass.
For example, assuming you have an age variable in a function like functionName(int age), for testing you should pass the integer from the test case as functionName(18); it will step through the statements and show you the status of the test case. Create a test class for the class under test and write a test case for each of its functions:
UseClass classObj = new UseClass(); // UseClass should be your own class under test

@Test
public void testValidateAge() {
    // assuming validateAge takes an int age and returns whether it is valid
    boolean valid = classObj.validateAge(20);
    assertEquals(true, valid);
}
Correct me if I'm wrong :)