Why does my tracing of Tcl execution not stop at the compile functions?

I have the following script, ~/tmp/2.tcl:
proc p1 {a} {
    if {[expr $a + 10] > 100} {
        puts hi
    }
}

set a [p1 200]
I have a debug build of Tcl v8.6.1, and I would like to trace what is going on inside the Tcl execution when I issue "./tclsh8.6 ~/tmp/2.tcl", so I use gdb to trace the execution (inside gdb: set args ~/tmp/2.tcl).
What puzzles me is:
1. In `TclEvalEx()`, parsing and execution happen command by command; I do not see any script/command compilation.
2. I set breakpoints at `TclAttemptCompileProc()`, `TclCompileObj()` and `TclCompileExpr()`, but they are not triggered.
What am I missing here? Why isn't there any script compilation?
Here is the backtrace of running TclEvalEx:
#0 TclEvalEx (interp=0x613680, script=0x674950 "proc p1 {a} {\n if {[expr $a + 10] > 100} {\n puts hi\n }\n}\n\nset a [p1 200]\n\n", numBytes=87, flags=0, line=1, clNextOuter=0x0,
outerScript=0x674950 "proc p1 {a} {\n if {[expr $a + 10] > 100} {\n puts hi\n }\n}\n\nset a [p1 200]\n\n") at ~/tcl8.6.1/source/generic/tclBasic.c:4935
#1 0x00007ffff7af0812 in Tcl_FSEvalFileEx (interp=0x613680, pathPtr=0x65beb0, encodingName=0x0) at ~/tcl8.6.1/source/generic/tclIOUtil.c:1809
#2 0x00007ffff7afb88f in Tcl_MainEx (argc=-1, argv=0x7fffffffde08, appInitProc=0x400963 <Tcl_AppInit>, interp=0x613680) at ~/tcl8.6.1/source/generic/tclMain.c:417
#3 0x000000000040095c in main (argc=2, argv=0x7fffffffddf8) at ~/tcl8.6.1/source/unix/tclAppInit.c:84
[UPDATE] I am not sure what was going wrong; now the breakpoints do get triggered.

The compiler has quite a few internal entry points — it's not in any way a public API, and is subject to alteration without anyone announcing it — and TclSetByteCodeFromAny and TclCompileScript appear to be among the ones that you've missed. There are others too; it's actually awkward to list them all. You probably instead ought to set a breakpoint on TclInitCompileEnv which is the standard internal function used to set up the structure used by the compiler; anything that calls it is going to be of interest to you.
FWIW, the call to proc doesn't compile the body of the procedure. That's postponed until the code is needed, i.e., until the procedure is called. The call to TclEvalEx that you were seeing won't do much meaningful compilation directly.
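For instance, a gdb session along these lines (using the paths from your setup; the breakpoint may also fire earlier, while the start-up scripts such as init.tcl are processed) should stop inside the compiler once p1 is actually called, rather than while TclEvalEx is walking the outer script:
(gdb) file ./tclsh8.6
(gdb) set args ~/tmp/2.tcl
(gdb) break TclInitCompileEnv
(gdb) run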
Also, the non-recursive execution engine used in Tcl 8.6.* makes it much harder to debug with a tool like gdb. The C stack does not reflect the Tcl stack at all.
Good luck.

Related

TCL equivalent to Python's `if __name__ == "__main__"`

In one executable TCL script I'm defining a variable that I'd like to import in another executable TCL script. In Python one can make a combined library and executable by using the following idiom at the bottom of one's script:
# Library

if __name__ == "__main__":
    # Executable that depends on library
    pass
Is there something equivalent for TCL? There is for Perl.
The equivalent for Tcl is to compare the ::argv0 global variable to the result of the info script command.
if {$::argv0 eq [info script]} {
    # Do the things for when the script is run as a program...
}
The ::argv0 global (technically a feature of the standard tclsh and wish shells, or anything else that calls Tcl_Main or Tk_Main at the C level) holds the name of the main script, or is the empty string if there is no main script. The info script command returns the name of the file currently being evaluated, whether that's via source or because the main shell is running it as a script. They'll be the same thing when the current script is the main script.
As mrcalvin notes in the comments below, if your library script is sometimes used in contexts where argv0 is not set (custom shells, child interpreters, embedded interpreters, some application servers, etc.) then you should add a bit more of a check first:
if {[info exists ::argv0] && $::argv0 eq [info script]} {
    # Do the things for when the script is run as a program...
}
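As a toy illustration (the file name mylib.tcl and the greet procedure are made up for the example), a library that also runs a small demo when executed directly might look like this:
# mylib.tcl -- works both with "source mylib.tcl" and with "tclsh mylib.tcl"
proc greet {who} {
    return "hello, $who"
}

if {[info exists ::argv0] && $::argv0 eq [info script]} {
    # Only reached when this file is the main script
    puts [greet world]
}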
I recently wanted this functionality to set up some unit tests for my HDL build-script suite. This is what I ended up with for Vivado:
proc is_main_script {} { ;# +1 frame
    set frame [info frame [expr {[info frame] - 3}]]
    if {![dict exists $frame file]} {
        set command [file normalize [lindex [dict get $frame cmd] 1]]
        set script [file normalize [info script]]
        if {$script eq $command} {
            return 1
        } else {
            return 0
        }
    } else {
        return 0
    }
}

if {[is_main_script]} { ;# +1 frame
    puts "do your thing"
}
As I use this for tests/demos, I consider the main use case to be something along the lines of if {[is_main_script]} {puts "do something"}, un-nested at the end of the file.
If there is a need to make it more general, a dynamic handle for the frame reference -3 could probably be developed, although this has covered all my use cases so far.
Frame -3 is used because proc and if each create an extra frame, and to evaluate this we want to inspect the call before them.
dict exists is used to check whether file exists within the frame; its presence would indicate the call came from a higher-level script, which would therefore not be the main script.
The solution if {[info exists ::argv0] && $::argv0 eq [info script]} works great if the script is run as vivado -source TCLSCRIPT.tcl, but the solution above also covers source TCLSCRIPT.tcl in GUI or Tcl mode (something I often find myself doing when debugging an automation Tcl script).
I guess this is a niche case, but since I couldn't find any other solution for this problem I wanted to leave this here.

Procedure tracking in log with proc renaming

I have a custom test tool written in Tcl and bash (mainly in Tcl; some initial configuration and checks are done by bash). It doesn't have an exact starting point: the outside bash (and sometimes the application under test) calls specific functions, which it finds via a "tclIndex" file created by auto_mkindex.
This tool creates a log file with many "puts" calls, whose output is redirected to the log file location.
Most of the functions have a "trackBegin" call at the beginning and a "trackEnd" call at the end. These two take the function's name as a parameter. This helps us track where a problem is.
Sadly, this tracking was forgotten in some recent modifications, and it's not very reliable anyway because it doesn't track abnormal exits from a function. So I tried to remove all of these calls and instead create a renamed _proc that overrides the original and places the two tracker calls before and after the execution of the function itself.
But I get a lot of errors (some I solved, though I don't know if it was the best way; some are not solved at all, so I'm stuck). These are the main ones:
Because there is no exact entry point, where and how should I define this overridden proc so that it works for all of the procedures in this execution? Some of my files had to be manually modified to use _proc to work (mostly the ones where there is code outside the procedures and which are called as scripts, not as functions through the tclIndex; the function-called ones are all in a utils folder, and only there, if that helps).
In the tracker line I placed a "clock" call with a format, and it always causes an abnormal exit.
I had problems with the returned values (when there was one, and sometimes when there wasn't), even when there was a return or exit.
So my question, in short, is:
How can I get an overridden proc working that writes a "begin" and "end" block into a logfile before and after the procedure itself (the log file location comes from the bash side of this tool), when there is no clear entry point on the Tcl side of this tool and it uses an auto_mkindex-generated procedure index file?
Thanks,
Roland.
Untested
Assuming your bash script does something like
tclsh file.tcl
You could do
tclsh instrumented.tcl file.tcl
where instrumented.tcl would contain
proc trackBegin {name} {...}
proc trackEnd {name output info} {...}
rename proc _proc
_proc proc {name args body} {
    set new_body [format {
        trackBegin %s
        catch {%s} output info
        trackEnd %s $output $info
    } $name $body $name]
    _proc $name $args $new_body
}
source [lindex $argv 0]
See the return and catch pages for what to do with the info dictionary.
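For instance, an untested sketch of one way to make the wrapper transparent to return values and error codes is to capture the options dictionary from catch and re-raise it with return -options (note that, as with the original, a body containing % characters would confuse format):
_proc proc {name arglist body} {
    set new_body [format {
        trackBegin %s
        catch {%s} __output __info
        trackEnd %s $__output $__info
        # Re-raise whatever the body produced: normal result, error, return, break, ...
        return -options $__info $__output
    } [list $name] $body [list $name]]
    _proc $name $arglist $new_body
}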
You'll have to show us some of your code to provide more specific help, particularly for your clock error.
I'd be tempted to use execution tracing for this, with the addition of the execution tracing being done in an execution trace on proc (after all, it's just a regular Tcl command). In particular, we can do this:
proc addTracking {cmd args} {
    set procName [lindex $cmd 1]
    uplevel 1 [list trace add execution $procName enter [list trackBegin $procName]]
    uplevel 1 [list trace add execution $procName leave [list trackEnd $procName]]
}
proc trackBegin {name arguments operation} {
    # ignore operation, arguments might be interesting
    ...
}
proc trackEnd {name arguments code output operation} {
    # ignore operation, arguments might be interesting
    ...
}
trace add execution proc leave addTracking
It doesn't give you quite the same information, but it does allow you to staple code around the outside non-invasively.
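As a rough sketch of what the two callbacks might do (the LOGFILE environment variable stands in for however the bash side actually passes the log location; that part is an assumption):
proc trackBegin {name arguments operation} {
    set f [open $::env(LOGFILE) a]
    puts $f "[clock format [clock seconds]] BEGIN $name"
    close $f
}
proc trackEnd {name arguments code output operation} {
    set f [open $::env(LOGFILE) a]
    puts $f "[clock format [clock seconds]] END   $name (code $code)"
    close $f
}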

Is there any cscope-like utility for Tcl

In software development it is often very useful to be able to find the callers of a function because this is the way to understand how code works and what other parts of the program expect from a function. cscope can find the callers and callees of functions, but it is not a compiler and it does that by searching the text for keywords.
I am wondering if there is any such utility in tcl?
Because you can do generation of code at runtime very easily in Tcl, and many APIs use callbacks, it's rather hard to determine statically where a command is called from. Simple searching of the code is probably the simplest way (with a recursive grep on Unixes, and findstr /s on Windows).
However, determining where a command is called from at runtime is much easier, as you can use an execution trace on the command of interest and introspect the call stack at that point (with info level and info frame).
proc foo args {bar $args $args}
proc bar args {puts ">>$args<<"}
proc caller args {
    puts "caller-call: [info level -1]"
    puts "caller-info: [info frame -1]"
}
trace add execution bar enter caller
foo [expr 1+3] [llength {s p q r}]
Running that interactively gives the output:
caller-call: foo 4 4
caller-info: type eval line 1 cmd {caller {bar {4 4} {4 4}} enter} proc ::foo level 1
>>{4 4} {4 4}<<
You'll get even more if you put it in a file.

How to interrupt Tcl_eval

I have a small shell application that embeds Tcl 8.4 to execute some set of Tcl code. The Tcl interpreter is initialized using Tcl_CreateInterp. Everything is very simple:
user types Tcl command
the command gets passed to Tcl_Eval for evaluation
repeat
Q: Is there any way to interrupt a very long Tcl_Eval command? I can process a 'Ctrl+C' signal, but how to interrupt Tcl_Eval?
Tcl doesn't set signal handlers by default (except for SIGPIPE, which you probably don't care about at all) so you need to use an extension to the language to get the functionality you desire.
By far the simplest way to do this is to use the signal command from the TclX package (or from the Expect package, but that's rather more intrusive in other ways):
package require Tclx
# Make Ctrl+C generate an error
signal error SIGINT
Just evaluate a script containing those in the same interpreter before using Tcl_Eval() to start running the code you want to be able to interrupt; a Ctrl+C will cause that Tcl_Eval() to return TCL_ERROR. (There are other things you can do — such as running an arbitrary Tcl command which can trap back into your C code — but that's the simplest.)
If you're on Windows, the TWAPI package can do something equivalent apparently.
Here's a demonstration of it in action in an interactive session!
bash$ tclsh8.6
% package require Tclx
8.4
% signal error SIGINT
% puts [list [catch {
while 1 {incr i}
} a b] $a $b $errorInfo $errorCode]
^C1 {can't read "i": no such variableSIGINT signal received} {-code 1 -level 0 -errorstack {INNER push1} -errorcode {POSIX SIG SIGINT} -errorinfo {can't read "i": no such variableSIGINT signal received
while executing
"incr i"} -errorline 2} {can't read "i": no such variableSIGINT signal received
while executing
"incr i"} {POSIX SIG SIGINT}
%
Note also that this can leave the interpreter in a somewhat-odd state; the error message is a little bit odd (and in fact that would be a bug, but I'm not sure what in). It's probably more elegant to do it like this (in 8.6):
% try {
while 1 {incr i}
} trap {POSIX SIG SIGINT} -> {
puts "interrupt"
}
^Cinterrupt
%
Another way to solve this problem would be to fork your Tcl interpreter into a separate process and drive the stdin and stdout of that interpreter from your main process. Then, in the main process, you can intercept Ctrl+C, use it to kill the forked interpreter's process, and fork a fresh interpreter.
With this solution the Tcl interpreter can never lock up your main program. However, it's really annoying to add C-function extensions if they need to run in the main process, because you then have to use inter-process communication to invoke those functions.
I had a similar problem to solve, where I start the Tcl interpreter in a worker thread. Except there's really no clean way to kill a worker thread, because it leaves allocated memory in an uncleaned-up state, leading to memory leaks. So really the only way to fix this is to use a process model instead, or to just keep quitting and restarting your application. Given the amount of time it takes to go with the process solution, I decided to stick with threads for now and fix this one of these days by getting Ctrl+C to work via a separate process, rather than leaking memory (and potentially destabilizing and crashing my program) every time I kill a thread.
UPDATE:
My conclusion is that Tcl arrays are not normal variables: you can't use Tcl_GetVar2Ex to read the "tmp" variable after the Eval, and tmp doesn't show up under "info globals". So to get around this I decided to call the Tcl library API directly, rather than using the Eval shortcut, to build a dictionary object to return.
Tcl_Obj *dict_obj = Tcl_NewDictObj();
if (!dict_obj) {
    return TCL_ERROR;
}
Tcl_DictObjPut(interp, dict_obj,
        Tcl_NewStringObj("key1", -1),
        Tcl_NewStringObj("value1", -1));
Tcl_DictObjPut(interp, dict_obj,
        Tcl_NewStringObj("key2", -1),
        Tcl_NewStringObj("value2", -1));
Tcl_SetObjResult(interp, dict_obj);
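Assuming that C code forms the body of a command registered (purely for the sake of the example) as getconfig, the script side then sees an ordinary dict value:
set d [getconfig]
puts [dict get $d key1]   ;# prints value1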

Prevent the call stack from being output

Is it possible to prevent the call stack from being output when an error occurs? For example, suppose:
set error [catch { [exec $interpName $tmpFileName] } err]
if { $error ne 0 } {
    puts "err = $err"   ;# <---- Here the call stack is also output
}
So output now looks like:
error: some error message
while executing
[stack trace]
Tcl automatically builds up the call stack in the global variable errorInfo (and, since 8.5, in the -errorinfo member of the interpreter result options dictionary) but it is up to the calling code to decide what to do with it. The default behavior of tclsh is to print it out; other Tcl-hosting environments can do different things (it's usually recommended to print it out as it helps hunt down bugs; on the other hand, some programs — specifically Eggdrop — don't and it's a cause of much trouble when debugging scripts).
You take control of this for yourself by using catch in the script that's getting the original error. The easiest way to do this is to put the real code in a procedure (e.g., called main by analogy with C and C++) and then to use a little bit of driver code around the outside:
if {[catch {eval main $argv} msg]} {
    puts "ERROR: $msg"
    # What you're not doing at this point is:
    # puts $errorInfo
    exit 1
} else {
    # Non-error case; adjust to taste
    puts "OK: $msg"
    exit 0
}
Note that in your code, this would go inside the script you write to $tmpFileName and not in the outer driver code that you showed (which is absolutely fine and needs no adjustment that I can think of).