Say I have CI tests running via GitHub Actions. The program I test has a module that checks whether its input parameters are valid. Hence, I run a test where I intentionally provide improper input parameters, so the program catches this and exits with an error (exit code 1).
Problem: I want GitHub Actions to mark this test as a success. I am aware of continue-on-error: true for a step. Still, this marks any failed run as a success, no matter whether my program exits intentionally with an error due to improper input parameters (as described above) or because there is a bug in my code, which should then actually fail the CI test. So far I have been manually inspecting the Actions logs, but there must be an automated way to catch this.
Any ideas?
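One common pattern, as a rough sketch: invert the exit code inside the run step itself, so the step only succeeds when the program fails in the expected way (./myprogram and its flag are placeholder names):

- name: expect exit code 1 on invalid input
  run: |
    set +e                        # GitHub's default bash -e would abort on the expected failure
    ./myprogram --invalid-input
    code=$?
    if [ "$code" -eq 1 ]; then
      echo "rejected invalid input as expected"
    else
      echo "unexpected exit code: $code"
      exit 1
    fi

Checking for exit code 1 specifically (rather than any non-zero code) also distinguishes the intentional validation error from, say, a crash.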
I am working on a pwn challenge, and I want to debug v8 using gdb.
But in the release version, I cannot use the job command.
And in a debug version, I get an abort when I call the function that is at the core of this pwn challenge.
I have tried changing some #define code, but failed.
I also tried passing some compile args, but that failed too.
So, how can I solve this?
For Release mode:
The job GDB macro should be functional if you add v8_enable_object_print = true to your args.gn (using gn args out/x64.release). Obviously, debugging a Release-mode binary will be a somewhat "interesting" experience.
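For reference, the resulting args.gn might look like this (only v8_enable_object_print comes from the advice above; is_debug and target_cpu are assumed values for a typical x64 Release build):

# out/x64.release/args.gn, edited via: gn args out/x64.release
is_debug = false
target_cpu = "x64"
v8_enable_object_print = true  # enables object printing, which the job macro relies on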
For Debug mode:
Bypassing a DCHECK is easy: just comment it out and recompile.
And of course, if you find any bugs, please report them at crbug.com/v8/new :-)
I have written a test-automation script in TCL for ModelSim which in essence runs
vcom -work work -2002 -explicit -source -cover sbce3 something.vhd
# ...
vsim -assertcover -t 10ps -cover -displaymsgmode both -msgmode both "work.something" -quiet
Once the simulation is over, I verify that all assertions passed with
set assertion_count [assertion count -fails -r /]
if {$assertion_count} {
    # failed...
} else {
    # success
}
This worked fine with an older ModelSim version (specifically PE 6.5b), but after switching to PE 10.4, assertion_count is always 0, so my tests always "pass"!
Now, the ModelSim PE Command Reference Manual (modelsim_pe_ref.pdf, unfortunately behind a Mentor login-wall) does not even mention the assertion ... command; the HTML manual (e.g. here) does mention it, though.
Has something in ModelSim changed recently that breaks the above pattern, am I using it wrongly (e.g. with missing parameters to vsim), or is there a better alternative?
I could use coverage report or coverage report -assert -detail for instance, but then I would need to parse the output
# NEVER FAILED: 97.0% ASSERTIONS: 105
I believe that ModelSim has changed the default value of the assertions' log parameter in the newer versions.
In previous versions, it seems that assertions defaulted to having the log option enabled, but in 10.4 assertions are not logged when the testbench is loaded; when an assertion is triggered, it is reported but not registered in the assertions panel (View – Coverage – Assertions).
I fixed this error by invoking the logging function of the assertions:
assertion fail -log on -recursive /
It seems that invoking this command at the start of the sequence is enough to enable the logging process, and it fixes the problem with the assertion count command.
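In other words, the sequence from the question would become something like this (same commands as above; only the assertion fail line is new, and run -all stands in for however the simulation is actually driven):

vsim -assertcover -t 10ps -cover -displaymsgmode both -msgmode both "work.something" -quiet
assertion fail -log on -recursive /  ;# enable assertion logging before running
run -all
set assertion_count [assertion count -fails -r /]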
Official reply from ModelSim Support:
The "assertion count" command is not expected to work on Modelsim PE,
as it is not supposed to have assertion coverage metrics enabled.
Generate and parse a coverage report instead.
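A rough sketch of that parsing approach in Tcl, assuming the summary line keeps the "# NEVER FAILED: ... ASSERTIONS: ..." format quoted in the question, and that coverage report returns its text as the command result (if it only writes to the transcript, write it to a file with -file and read that back instead):

set report [coverage report -assert]
# extract the "NEVER FAILED" percentage and the total assertion count
if {[regexp {NEVER FAILED:\s*([0-9.]+)%\s*ASSERTIONS:\s*([0-9]+)} $report -> pct total]} {
    if {$pct < 100.0} {
        # at least one assertion failed
    }
} else {
    # summary line not found: treat as a failure rather than a silent pass
}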
Whenever an NUnit test fails during its execution (i.e. not when using Assert.*), I want to log additional information (I'm writing web tests and am especially interested in the web page's current DOM).
How do I specify a global exception handler in NUnit which is able to log additional information on NoSuchElementExceptions? The test should still fail, of course.
You could write an NUnit event listener addin that logs the information. See http://www.nunit.org/index.php?p=nunitAddins&r=2.6.3 and http://www.nunit.org/index.php?p=eventListeners&r=2.6.3. For a tutorial, see https://www.simple-talk.com/dotnet/.net-tools/testing-times-ahead-extending-nunit/.
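A rough sketch of such an addin for NUnit 2.6, assuming the EventListener interface from NUnit.Core (the DOM dump itself is a placeholder; compile this into a DLL and drop it into the runner's addins folder):

using System;
using NUnit.Core;
using NUnit.Core.Extensibility;

[NUnitAddin(Description = "Logs extra information when a test fails")]
public class FailureLoggerAddin : IAddin, EventListener
{
    public bool Install(IExtensionHost host)
    {
        // register this object with the runner's EventListeners extension point
        IExtensionPoint listeners = host.GetExtensionPoint("EventListeners");
        if (listeners == null)
            return false;
        listeners.Install(this);
        return true;
    }

    public void TestFinished(TestResult result)
    {
        if (result.IsFailure || result.IsError)
        {
            // placeholder: grab and log the web page's current DOM here
            Console.WriteLine("Test failed: " + result.FullName);
        }
    }

    // the remaining EventListener members are not needed for this purpose
    public void RunStarted(string name, int testCount) { }
    public void RunFinished(TestResult result) { }
    public void RunFinished(Exception exception) { }
    public void TestStarted(TestName testName) { }
    public void SuiteStarted(TestName testName) { }
    public void SuiteFinished(TestResult result) { }
    public void UnhandledException(Exception exception) { }
    public void TestOutput(TestOutput testOutput) { }
}

The listener only logs; it does not swallow the failure, so the test still fails as usual.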
We have a couple of test cases marked as Inconclusive for maintenance. The issue is with our Hudson build, which treats Inconclusive test cases as errors.
We have enabled failonerror = "true" in the build XML. I guess MSTest is making the decision on the error status, not Hudson.
Is there any command-line argument to not treat Inconclusive tests as errors?
Thanks.
MSTest reports Inconclusive as separate from failure, but returns an execution result of 1 if any tests are inconclusive (unlike NUnit, which does not). The build will interpret the 1 result code as a failure.
There is no command line option to turn this off (see http://msdn.microsoft.com/en-us/library/ms182489.aspx )
It may be possible to turn off the failonerror flag, and add a build step to parse for errors, but if you wish to turn off a test for maintenance, it would be better to use an [Ignore] attribute, like this:
[TestMethod, Ignore]
public void my_test() { ... }
Unlike NUnit, you can't add a reason for the ignore, so it's best to leave a comment.
As per the documents, an "assert" will fail the test and abort the currently running test case, whereas a "verify" will fail the test and continue to run the test case.
But verifyTrue(false) is not failing the test case (rather, it continues with the next step and marks the case as passed).
Assuming that's a Selenium call, then according to this, "[verify methods] don't stop the test when they fail. Instead, verification errors are all thrown at once during tearDown."
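If that is the setup here, the usual cause is that nothing ever flushes the collected errors. A minimal sketch of the conventional pattern with the Selenium RC Java client's SeleneseTestBase (verifyTrue and checkForVerificationErrors come from that base class; everything else is placeholder):

import com.thoughtworks.selenium.SeleneseTestBase;
import org.junit.After;
import org.junit.Test;

public class ExampleTest extends SeleneseTestBase {
    @Test
    public void example() {
        verifyTrue(false);  // records the failure but keeps running
    }

    @After
    public void tearDown() {
        // without this call the collected verification errors are
        // silently dropped and the test is reported as passed
        checkForVerificationErrors();
    }
}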