How to make {$DEFINE xxx} visible to other units in FreePascal?

I made a PlatformDetection.pas source file with some {$DEFINE} directives to detect the platform:
{$IF DEFINED(CPUARM)}
  {$DEFINE ARM}
{$ELSE}
  {$IF DEFINED(i386) or DEFINED(cpui386) or DEFINED(cpux86_64)}
    {$DEFINE INTEL}
  {$IFEND}
{$ENDIF}
{$IFDEF INTEL}
// Code here is compiled, as expected with my processor
{$ENDIF}
This "seems" to work in functions inside that file. However, when used in other units, those symbols seems not defined:
uses PlatformDetection;
...
{$IFDEF INTEL}
// Code here is not compiled, even though this same test works inside PlatformDetection.pas.
{$ENDIF}
My question is: how do I make {$DEFINE} symbols visible in other units?

{$DEFINE} symbols are not visible across unit boundaries. Only symbols that are defined globally, in the project settings or via the command-line -d option, are visible to all units.
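For example, a symbol can be defined for the whole build from the command line (MyProgram.pas stands in for your main source file):
fpc -dINTEL MyProgram.pas
If you use Lazarus, the equivalent is adding -dINTEL to the project's custom compiler options.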
But what you can do instead is put the {$DEFINE} directives in a .inc file, and then use a {$I} directive in any unit that needs to see the symbols, e.g.:
PlatformDetection.inc:
{$IF DEFINED(CPUARM)}
  {$DEFINE ARM}
{$ELSE}
  {$IF DEFINED(i386) or DEFINED(cpui386) or DEFINED(cpux86_64)}
    {$DEFINE INTEL}
  {$IFEND}
{$ENDIF}
SomeUnit.pas:
{$I 'PlatformDetection.inc'}
...
{$IFDEF INTEL}
// Code here is compiled, as expected with my processor
{$ENDIF}

Triggering an error on unsetting a traced variable

I'm trying to create some read-only variables to use with code evaluated in a safe interp. Using trace, I can generate an error on attempts to set them, but not when using unset:
% set foo bar
bar
% trace add variable foo {unset write} {apply {{var _ op} { error "$var $op trace triggered" }}}
% set foo bar
can't set "foo": foo write trace triggered
% unset foo
%
Indeed, I eventually noticed the documentation even says in passing:
Any errors in unset traces are ignored.
Playing around with different return codes, including custom numbers, I found they all seem to be ignored. They don't trigger an interp bgerror handler either. Is there any other way to raise an error for an attempt to unset a particular variable?
There really isn't. The key problem is that there are times when Tcl is going to unset a variable when that variable really is going to be deleted because its containing structure (a namespace, stack frame or object, and ultimately an interpreter) is also being deleted. The variable is doomed at that point and user code cannot prevent it (except by the horrible approach of never returning from the trace, of course, which infinitely postpones the death and puts everything in a weird state; don't do that). There's simply nowhere to resurrect the variable to. Command deletion traces have the same issue; they too can be firing because their storage is vanishing. (TclOO destructors are a bit more protected against this; they try to not lose errors — there's even pitching them into interp bgerror as a last resort — but still can in some edge cases.)
What's more, there's currently nothing in the API to allow an error message to bubble out of the process of deleting a namespace or call frame. I think that would be fixable (it would require changing some public APIs) but for good reasons I think the deletion would still have to happen, especially for stack frames. Additionally, I'm not sure what should happen when you delete a namespace containing two unset-traced variables whose traces both report errors. What should the error be? I really don't know. (I know that the end result has to be that the namespace is still gone, FWIW, but the details matter and I have no idea what they should be.)
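To see the first point concretely, here is a small illustration of the quoted documentation (the names are mine): even when an unset trace raises an error, the deletion still goes ahead and the error disappears:
namespace eval ::demo { variable x 1 }
trace add variable ::demo::x unset {apply {{args} { error "boom" }}}
namespace delete ::demo        ;# the unset trace fires, but its error is ignored
puts [info exists ::demo::x]   ;# prints 0: the variable is gone regardless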
I'm trying to create some read-only variables to use with code evaluated
Schelte and Donal have already offered timely and in-depth feedback, so what follows is meant as a humble addition. Now that one knows that variable traces are executed after the fact, below is how I tend to mimic read-only (or rather, keep-resetting-to-a-one-time-value) variables using traces (note: as Donal explains, this does not extend to proc-local variables).
The below implementation allows for the following:
namespace eval ::ns2 {}
namespace eval ::ns1 {
    readOnly foo 1
    readOnly ::ns2::bar 2
    readOnly ::faz 3
}
Inspired by variable, but only for one variable-value pair.
proc ::readOnly {var val} {
    uplevel [list variable $var $val]
    if {![string match "::*" $var]} {
        set var [uplevel [list namespace which -variable $var]]
    }
    # only proceed iff namespace is not under deletion!
    if {[namespace exists [namespace qualifiers $var]]} {
        set readOnlyHandler {{var val _ _ op} {
            if {[namespace exists [namespace qualifiers $var]]} {
                if {$op eq "unset"} {
                    ::readOnly $var $val
                } else {
                    set $var $val
                }
                # optional: use stderr as err-signalling channel?
                puts stderr [list $var is read-only]
            }
        }}
        set handlerScript [list apply $readOnlyHandler $var $val]
        set traces [trace info variable $var]
        set varTrace [list {write unset} $handlerScript]
        if {![llength $traces] || $varTrace ni $traces} {
            trace add variable $var {*}$varTrace
        }
    }
}
Some notes:
This is meant to work only for global or otherwise namespaced variables, not for proc-local ones;
It wraps around variable;
[namespace exists ...]: These guards protect from operations when a given parent namespace is currently under deletion (namespace delete ::ns1, or child interp deletion);
In the unset case, the handler script re-creates the variable and re-adds the trace (otherwise, any subsequent write would no longer be caught);
[trace info variable ...]: Helps avoid adding redundant traces;
[namespace which -variable]: Makes sure to work on a fully-qualified variable name;
Some final remarks:
Ooo, maybe I can substitute the normal unset for a custom version and do the checking in it instead of relying on trace
Certainly one option, but it does not give you coverage of the various (indirect) paths of unsetting a variable.
[...] in a safe interp.
You may want to set up an interp alias from a command in your safe interp to the above readOnly in the parent interp.
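A minimal sketch of the aliasing mechanics (my illustration, not from the answer above; note that the alias target runs in the parent interp, so you still have to decide in which interp the variable should actually live):
set safe [interp create -safe]
# calls to readOnly inside the safe interp are forwarded to
# ::readOnly in the parent interp
interp alias $safe readOnly {} ::readOnly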

Incorrect number of arguments in preprocessor macro when passing CUDA kernel call as argument macro

I have the following macro
#define TIMEIT( variable, body ) \
    variable = omp_get_wtime(); \
    body; \
    variable = omp_get_wtime() - variable;
which I use to very simply time sections of code.
However, macro calls are sensitive to commas, and a CUDA kernel call (using the triple chevron syntax) causes the preprocessor to believe that the macro is being passed more than 2 arguments.
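For instance (kernel, grid, block and data are placeholder names), a call such as:
TIMEIT(t, kernel<<<grid, block>>>(data));
is seen by the preprocessor as passing three arguments: t, kernel<<<grid, and block>>>(data), because the triple chevrons do not protect the comma between them.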
Is there a way around this?
Since C99/C++11, you can use a variadic argument (varargs) macro to solve this problem. You write a varargs macro using ... as the last parameter; in the body of the macro, __VA_ARGS__ will be replaced with the trailing arguments from the macro call, with commas intact:
#define TIMEIT( variable, ... ) \
    variable = omp_get_wtime(); \
    __VA_ARGS__; \
    variable = omp_get_wtime() - variable;
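With the variadic version, a timed launch like the following (again with placeholder names) expands as intended, because the comma inside the triple chevrons simply travels along in __VA_ARGS__ instead of splitting the argument list:
double elapsed;
TIMEIT(elapsed, kernel<<<grid, block>>>(data));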
For compilers without varargs macro support, your only alternative is to try to protect all commas by using them only inside parenthetic expressions. Because parentheses protect commas from being treated as macro argument delimiters, many commas are naturally safe. But there are lots of exceptions, such as C++ template argument lists (<…> doesn't protect commas), declarations of multiple objects, and -- as you say -- triple chevron calls. Some of these may be harder to protect than others.
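To illustrate with hypothetical names:
TIMEIT(t, f(a, b));              /* fine: f's parentheses protect the comma */
TIMEIT(t, std::map<int, int> m); /* broken: the template's comma is unprotected */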
In particular, I don't know if you can put redundant parentheses around a CUDA kernel call, for example. Of course, if nvcc does handle varargs macros, you wouldn't need to. But based on this bug report, I'm not so sure. nvcc is based on the EDG compiler, which is conformant, but it does not seem to have occurred to NVIDIA to document which version of the standard is being used.

capture vsim exit code or current simulator state with script

I'm trying to write a Tcl script which loads a simulation in ModelSim and then does some other stuff, so it needs to determine whether the simulation loaded successfully or not. But the vsim command does not seem to return any value, at least none that I can figure out how to capture. As a test, I did:
set rv [vsim $sim_name]
$rv is always empty, regardless of whether the sim loaded or not, so using catch doesn't work. My current workaround is to try something after loading that only works in a simulation context and that does return a value, and catch that instead. For example:
vsim $sim_name
if {[catch {log *} ...
But that's far from ideal. Is there a better way to detect whether or not vsim ran successfully?
For handling elaboration errors at the startup of simulations you can associate a callback using the onElabError command. Your callback can set a global variable that you examine later:
onElabError {global vsim_init_failure; set vsim_init_failure 1}
...
set vsim_init_failure 0
vsim $sim_name
if {$vsim_init_failure} ...

How to interrupt Tcl_eval

I have a small shell application that embeds Tcl 8.4 to execute some set of Tcl code. The Tcl interpreter is initialized using Tcl_CreateInterp. Everything is very simple:
user types Tcl command
the command gets passed to Tcl_Eval for evaluation
repeat
Q: Is there any way to interrupt a very long Tcl_Eval command? I can process a 'Ctrl+C' signal, but how to interrupt Tcl_Eval?
Tcl doesn't set signal handlers by default (except for SIGPIPE, which you probably don't care about at all) so you need to use an extension to the language to get the functionality you desire.
By far the simplest way to do this is to use the signal command from the TclX package (or from the Expect package, but that's rather more intrusive in other ways):
package require Tclx
# Make Ctrl+C generate an error
signal error SIGINT
Just evaluate a script containing those in the same interpreter before using Tcl_Eval() to start running the code you want to be able to interrupt; a Ctrl+C will cause that Tcl_Eval() to return TCL_ERROR. (There are other things you can do — such as running an arbitrary Tcl command which can trap back into your C code — but that's the simplest.)
If you're on Windows, the TWAPI package can do something equivalent apparently.
Here's a demonstration of it in action in an interactive session!
bash$ tclsh8.6
% package require Tclx
8.4
% signal error SIGINT
% puts [list [catch {
    while 1 {incr i}
} a b] $a $b $errorInfo $errorCode]
^C1 {can't read "i": no such variableSIGINT signal received} {-code 1 -level 0 -errorstack {INNER push1} -errorcode {POSIX SIG SIGINT} -errorinfo {can't read "i": no such variableSIGINT signal received
while executing
"incr i"} -errorline 2} {can't read "i": no such variableSIGINT signal received
while executing
"incr i"} {POSIX SIG SIGINT}
%
Note also that this can leave the interpreter in a somewhat odd state; the error message is a little bit odd (and in fact that would be a bug, though I'm not sure where). It's probably more elegant to do it like this (in 8.6):
% try {
    while 1 {incr i}
} trap {POSIX SIG SIGINT} -> {
    puts "interrupt"
}
^Cinterrupt
%
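If you control the embedding C code, there is also a supported C-level route without any extension package: Tcl_AsyncMark can be called from a signal handler, and the marked handler is then invoked at a safe point inside the running Tcl_Eval(), where it can force a TCL_ERROR return. A minimal sketch (untested; error handling omitted):
#include <signal.h>
#include <tcl.h>

static Tcl_AsyncHandler interruptToken;

/* Invoked by Tcl at a safe point after Tcl_AsyncMark() was called. */
static int
InterruptProc(ClientData clientData, Tcl_Interp *interp, int code)
{
    if (interp != NULL) {
        Tcl_SetObjResult(interp, Tcl_NewStringObj("interrupted", -1));
    }
    return TCL_ERROR;   /* makes the running Tcl_Eval return TCL_ERROR */
}

static void
SigintHandler(int signum)
{
    Tcl_AsyncMark(interruptToken);   /* safe to call from a signal handler */
}

/* During initialization, after Tcl_CreateInterp():
 *     interruptToken = Tcl_AsyncCreate(InterruptProc, NULL);
 *     signal(SIGINT, SigintHandler);
 */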
Another way to solve this problem would be to fork your Tcl interpreter into a separate process and drive the stdin and stdout of that interpreter from your main process. Then, in the main process, you can intercept Ctrl-C and use it to kill the forked interpreter's process and fork a new one.
With this solution the Tcl interpreter will never lock up your main program. However, it's really annoying to add C-function extensions if you need them to run in the main process, because you have to use inter-process communication to invoke them.
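As an illustration of the mechanics (a rough, untested sketch; the same could be driven from C instead of Tcl):
# parent drives a child tclsh through a bidirectional pipe
set child [open |[list tclsh] r+]
fconfigure $child -buffering line
puts $child {puts [expr {6 * 7}]; flush stdout}   ;# send a command
gets $child answer                                ;# read its reply
# on Ctrl-C: kill the child and fork a fresh one
# exec kill [pid $child]; close $child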
I had a similar problem I was trying to solve, where I started the Tcl interpreter in a worker thread. Except there's really no clean way to kill a worker thread, because it leaves allocated memory in an uncleaned-up state, leading to memory leaks. So really the only way to fix this problem is to use a process model instead, or to just keep quitting and restarting your application. Given the amount of time it would take to go with the process solution, I decided to stick with threads and fix this one of these days by moving Ctrl-C handling to a separate process, rather than leaking memory, and potentially destabilizing and crashing my program, every time I kill a thread.
UPDATE:
My conclusion is that Tcl arrays are not normal variables: you can't use Tcl_GetVar2Ex to read a "tmp" array variable after Tcl_Eval, and tmp doesn't show up under "info globals". So to get around this I decided to call the Tcl library API directly, rather than going through the Tcl_Eval shortcut, to build a dictionary object to return.
Tcl_Obj* dict_obj = Tcl_NewDictObj();
if (!dict_obj) {
    return TCL_ERROR;
}
Tcl_DictObjPut(interp, dict_obj,
               Tcl_NewStringObj("key1", -1),
               Tcl_NewStringObj("value1", -1));
Tcl_DictObjPut(interp, dict_obj,
               Tcl_NewStringObj("key2", -1),
               Tcl_NewStringObj("value2", -1));
Tcl_SetObjResult(interp, dict_obj);

Weird behavior of tcl comment inside if/else block. Is it a bug of the tcl interpreter?

I caught a very strange bug(?) which took me almost a whole day to find in a real application. In the code there was an elseif block which was commented out, and it led to the execution of code which (as I thought) could never be executed.
I simplified it to a test case which reproduces this odd Tcl behavior.
proc funnyProc {value} {
    if {$value} {
        return "TRUE"
    # } elseif {[puts "COMMENT :)"] == ""} {
    #     return "COMMENT"
    } else {
        return "FALSE"
    }
    return "IT'S IMPOSSIBLE!!!"
}
puts [funnyProc false]
What do you think this program will output?
The puts in the comment line is executed. It's impossible from any programming language POV.
The line after the block if {...} {return} else {return} is executed as well. It's impossible from true/false logic.
I know that a Tcl comment behaves like a command with the name # and consumes all arguments until EOL, and that the Tcl parser does not like unbalanced curly brackets in comments. But this case is beyond my understanding.
Maybe I missed something important? How do I correctly comment out such elseif blocks so as not to get these weird side effects?
This is because # is only a comment when Tcl is looking for the start of a command, and the first time it sees it above (when parsing the if), it's looking for a } in order to close the earlier {. This is a consequence of the Tcl parsing rules; if is just a command, not a special construct.
The effect that Ernest noted is because it increases the nesting level of the braces on that line, which makes it part of the argument that runs from the end of the if {$value} { line to the start of the } else { line. Then the # becomes special when if evaluates the script. (Well, except it's all bytecode compiled, but that's an implementation detail: the observed semantics are the same except for some really nasty edge cases.)
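As for how to comment such clauses out safely: one option (my illustration, not part of the answer above) is to keep the elseif clause syntactically intact and simply make its condition false, so brace counting is undisturbed and the disabled branch is never evaluated:
proc funnyProc {value} {
    if {$value} {
        return "TRUE"
    } elseif {0} {
        # disabled: was [puts "COMMENT :)"] == ""
        # return "COMMENT"
    } else {
        return "FALSE"
    }
    return "IT'S IMPOSSIBLE!!!"
}
puts [funnyProc false]   ;# now prints FALSE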