Behaviour of `assertion count` in different ModelSim versions - tcl

I have written a test-automation script in Tcl for ModelSim which in essence runs
vcom -work work -2002 -explicit -source -cover sbce3 something.vhd
# ...
vsim -assertcover -t 10ps -cover -displaymsgmode both -msgmode both "work.something" -quiet
Once the simulation is over, I verify that all assertions passed with:
set assertion_count [assertion count -fails -r /]
if {$assertion_count} {
    # failed...
} else {
    # success
}
This worked fine with an older ModelSim version (specifically PE 6.5b), but after switching to PE 10.4, assertion_count is always 0, so my tests always "pass"!
The ModelSim PE Command Reference Manual (modelsim_pe_ref.pdf, unfortunately behind a Mentor login wall) does not even mention the assertion ... command; the HTML version of the manual does mention it, though.
Has something changed recently in ModelSim that breaks the above pattern, am I using it wrongly (e.g., missing parameters to vsim), or is there a better alternative?
I could use coverage report or coverage report -assert -detail, for instance, but then I would need to parse output like
# NEVER FAILED: 97.0% ASSERTIONS: 105

I believe that ModelSim has changed the default value of the assertions' log parameter in the newer versions.
In previous versions, it seems that assertions defaulted to having the log option enabled, but in 10.4 assertions are not logged when the testbench is loaded; when an assertion is triggered it is reported, but it is not registered in the Assertions panel (View > Coverage > Assertions).
I fixed this by enabling logging for the assertions:
assertion fail -log on -recursive /
It seems that invoking this command at the start of the simulation sequence is enough to enable logging, and it fixes the problem with the assertion count command.
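Putting the pieces together, the adjusted flow looks roughly like this (a minimal sketch; the vsim arguments and the count/check logic are taken from the question, while the run command and the messages are assumptions):
vsim -assertcover -t 10ps -cover -displaymsgmode both -msgmode both "work.something" -quiet
# Re-enable assertion logging (disabled by default since 10.4),
# otherwise "assertion count" has nothing to count.
assertion fail -log on -recursive /
run -all
set assertion_count [assertion count -fails -r /]
if {$assertion_count} {
    puts "FAILED: $assertion_count assertion(s) fired"
} else {
    puts "all assertions passed"
}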

Official reply from ModelSim Support:
The "assertion count" command is not expected to work on Modelsim PE,
as it is not supposed to have assertion coverage metrics enabled.
Generate and parse a coverage report instead.
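Following that advice, the check can be automated by parsing the report rather than calling assertion count; a minimal Tcl sketch, assuming the summary format shown in the question ("# NEVER FAILED: 97.0% ASSERTIONS: 105") and a hypothetical report file name:
# Write the assertion report to a file, then scan the summary line
# and treat anything below 100% as a failure.
coverage report -assert -file assert_report.txt
set fh [open assert_report.txt r]
set all_passed 0
while {[gets $fh line] >= 0} {
    if {[regexp {NEVER FAILED:\s*([0-9.]+)%} $line -> pct]} {
        set all_passed [expr {$pct == 100.0}]
    }
}
close $fh
if {!$all_passed} {
    # failed...
}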

Related

GitHub Actions -- mark intentionally failed runs as success

Say I have CI tests running via GitHub actions. The program I test has a module that checks whether its input parameters are valid. Hence, I run a test where I intentionally provide improper input parameters, so the program catches this and exits with an error (exit 1).
Problem: I want GitHub Actions to mark this test as a success. I am aware of continue-on-error: true for a step. Still, this will mark any failed run as a success, no matter whether my program exits intentionally with an error due to improper input parameters (as described above), or because there is a bug in my code, which should then actually yield a failed CI test. So far I am manually inspecting the Actions logs, but there must be an automated way to catch this.
Any ideas?
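One common approach (a sketch with hypothetical program and step names): invert the exit code inside the step itself, so the step fails only when the program unexpectedly succeeds:
- name: Expect failure on invalid input
  run: |
    if ./myprogram --improper-input; then
      echo "program unexpectedly accepted invalid input" >&2
      exit 1
    fi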

I want to use the job command in v8.release, how can I do it? Or just bypass the DCHECK within v8.debug

I am working on a pwn challenge, and I want to debug V8 using gdb.
But in the release version, I cannot use the job command.
And in a debug version, I get an abort when I call the function that is the main function in this pwn challenge.
I have tried changing some #define code, but failed.
I also tried passing some compile arguments, but that failed too.
So, how can I solve this?
For Release mode:
The job GDB macro should be functional if you add v8_enable_object_print = true to your args.gn (using gn args out/x64.release). Obviously, debugging a Release-mode binary will be a somewhat "interesting" experience.
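For illustration, the args.gn for such a Release build might then contain (is_debug = false is the standard Release setting; only the last line is the addition suggested above):
# out/x64.release/args.gn -- edit via: gn args out/x64.release
is_debug = false
v8_enable_object_print = true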
For Debug mode:
Bypassing a DCHECK is easy: just comment it out and recompile.
And of course, if you find any bugs, please report them at crbug.com/v8/new :-)

Compiling CUDA using CMake works only after calling make twice

I want to use CMake to compile CUDA with '-arch=sm_12', but cmake/make behave strangely.
I have the following CMakeLists.txt:
CMAKE_MINIMUM_REQUIRED(VERSION 2.8)
PROJECT(test)
FIND_PACKAGE(CUDA REQUIRED)
CUDA_ADD_EXECUTABLE(test prog.cu)
SET(CUDA_NVCC_FLAGS "-arch=sm_12")
SET(CUDA_NVCC_FLAGS ${CUDA_NVCC_FLAGS} CACHE STRING "Forced" FORCE)
but 'cmake ../src && make' produces an executable for sm_20.
The flag seems to be ignored.
EDIT: If I call 'make' again (without any modification to CMakeLists.txt) it uses the flag, but only if I force the flag into the cache (last line).
Am I doing anything wrong?
EDIT: After checking again: I have to call 'make' twice for it to work correctly. Does anybody know this behaviour?
inJeans was right; see the FindCUDA docs: https://cmake.org/cmake/help/v3.3/module/FindCUDA.html
This is the essential information:
"Note that any of these flags can be changed multiple times in the same directory before calling CUDA_ADD_EXECUTABLE, CUDA_ADD_LIBRARY, CUDA_COMPILE, CUDA_COMPILE_PTX, CUDA_COMPILE_FATBIN, CUDA_COMPILE_CUBIN or CUDA_WRAP_SRCS:"

How should one deal with a new Tcl assertion (introduced in 8.5.18) that fires upon an IO operation?

Our Tcl-based web application (OpenACS, NaviServer) provides functionality for uploading and extracting ZIP archives. After upgrading to the latest version of Tcl (8.5.18), the server now crashes when processing the contents of an extracted archive and spits out this error:
nsd: /usr/local/src/tcl/tcl8.5.18/unix/../generic/tclIO.c:5395: DoReadChars: Assertion `!((statePtr)->flags & ((1<<9))) || ((statePtr)->flags & ((1<<10))) || Tcl_InputBuffered((Tcl_Channel)chanPtr) == 0' failed.
This assertion has been introduced between Tcl 8.5.17 and 8.5.18. Is the assertion wrong or too rigorous, or does this hint at some form of error at the application level?
It turns out that I was running into a known bug that was fixed in April 2015 (http://core.tcl.tk/tcl/info/879a0747bee593e2). Once Tcl 8.5.19 is released, upgrading to it will make my troubles go away. Until then, one can work from the Tcl development sources, or try the patch in isolation (http://core.tcl.tk/tcl/info/4b964e7afb811898).

Where does JRuby log to?

I see a lot of config options like jit.logging=true, and I want to watch out for things like the JVM's "CodeCache is full. Compiler has been disabled" messages. Where does JRuby log this stuff? Better yet, how can I tell it which file to log to? Is it just STDOUT and STDERR?
By setting the JRuby properties that affect the JIT runtime (such as jruby.jit.logging and jruby.jit.logging.verbose), you get logging to standard error (commonly abbreviated stderr).
You could tell which file to log to by redirecting stderr to a specific file; for example:
jruby -J-Djruby.jit.logging=true myscript.rb 2> myfile.log
Beware, however, that myfile.log also receives any other stderr output; i.e., if myscript.rb executes statements such as:
$stderr.puts "print this in stderr"
you will see "print this in stderr" in myfile.log.