In software development it is often very useful to find the callers of a function, because that is how you understand how the code works and what other parts of the program expect from it. cscope can find the callers and callees of functions, but it is not a compiler: it does that by searching the text for keywords.
I am wondering: is there any such utility for Tcl?
Because Tcl makes it very easy to generate code at runtime, and many APIs use callbacks, it's rather hard to determine statically where a command is called from. Searching the source text is probably the simplest approach (with a recursive grep on Unix, or findstr /s on Windows).
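For example, a rough sketch of such a textual scan in Tcl itself (assuming the sources live under a src directory and the command of interest is called myCommand; both names are placeholders):
foreach path [glob -nocomplain -directory src -types f *.tcl] {
    set f [open $path]
    set lineno 0
    while {[gets $f line] >= 0} {
        incr lineno
        # Purely textual: this also matches comments and strings
        if {[string match "*myCommand*" $line]} {
            puts "$path:$lineno: $line"
        }
    }
    close $f
}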
However, determining where a command is called from at runtime is much easier, as you can use an execution trace on the command of interest and introspect the call stack at that point (with info level and info frame).
proc foo args {bar $args $args}
proc bar args {puts ">>$args<<"}
proc caller args {
puts "caller-call: [info level -1]"
puts "caller-info: [info frame -1]"
}
trace add execution bar enter caller
foo [expr 1+3] [llength {s p q r}]
Running that interactively gives the output:
caller-call: foo 4 4
caller-info: type eval line 1 cmd {caller {bar {4 4} {4 4}} enter} proc ::foo level 1
>>{4 4} {4 4}<<
You'll get even more if you put it in a file.
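For instance, a sketch of a variant of the caller callback: when the call site is inside a sourced file, the frame dictionary gains file and line keys describing it.
proc caller args {
    set frame [info frame -1]
    if {[dict exists $frame file]} {
        # Only present when the calling code came from a file
        puts "called from [dict get $frame file]:[dict get $frame line]"
    }
    puts "caller-call: [info level -1]"
}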
In one executable TCL script I'm defining a variable that I'd like to import in another executable TCL script. In Python one can make a combined library and executable by using the following idiom at the bottom of one's script:
# Library
if __name__ == "__main__":
    # Executable that depends on library
    pass
Is there something equivalent for TCL? There is for Perl.
The equivalent for Tcl is to compare the ::argv0 global variable to the result of the info script command.
if {$::argv0 eq [info script]} {
    # Do the things for if the script is run as a program...
}
The ::argv0 global (technically a feature of the standard tclsh and wish shells, or anything else that calls Tcl_Main or Tk_Main at the C level) holds the name of the main script, or is the empty string if there is no main script. The info script command returns the name of the file currently being evaluated, whether that is through source or because the main shell is running it as a script. The two are the same exactly when the current script is the main script.
As mrcalvin notes in the comments below, if your library script is sometimes used in contexts where argv0 is not set (custom shells, child interpreters, embedded interpreters, some application servers, etc.) then you should add a bit more of a check first:
if {[info exists ::argv0] && $::argv0 eq [info script]} {
    # Do the things for if the script is run as a program...
}
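Putting it together, a minimal sketch of a file that works both as a library and as a program (the file name mylib.tcl and the greet proc are just placeholders):
# mylib.tcl -- library part: just defines commands
proc greet {who} {
    return "Hello, $who!"
}

# Executable part: only runs under "tclsh mylib.tcl", not when the file is sourced
if {[info exists ::argv0] && $::argv0 eq [info script]} {
    puts [greet World]
}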
I recently wanted this functionality to set up some unit tests for my HDL build script suite. This is what I ended up with for Vivado:
proc is_main_script {} { ;# +1 frame
    set frame [info frame [expr {[info frame] - 3}]]
    if {![dict exists $frame file]} {
        set command [file normalize [lindex [dict get $frame cmd] 1]]
        set script [file normalize [info script]]
        if {$script eq $command} {
            return 1
        } else {
            return 0
        }
    } else {
        return 0
    }
}
if {[is_main_script]} { ;# +1 frame
    puts "do your thing"
}
As I consider this for test/demo use, I take the main use case to be something along the lines of if {[is_main_script]} {puts "do something"}, un-nested, at the end of the file.
If there were a need to make it more general, a dynamic handle for the frame reference -3 could probably be developed, although this has covered all my use cases so far.
Frame -3 is used because proc and if each create an extra frame, and to evaluate this we want to inspect the call that came before them.
dict exists is used to check whether a file key exists within the frame. Its presence would indicate that the call came from a higher-level script being sourced, which would therefore not be the "main" script.
The solution if {[info exists ::argv0] && $::argv0 eq [info script]} works great if run as vivado -source TCLSCRIPT.tcl, but the solution above also covers source TCLSCRIPT.tcl in GUI or Tcl mode (something I often see myself doing when debugging an automation Tcl script).
I guess this is a niche case, but since I couldn't find any other solution to this problem I wanted to leave this here.
Is it possible to pass commands created in C (i.e. with Tcl_CreateObjCommand) between Tcl threads (created with the Tcl command thread::create), and if so, how?
Thanks.
All Tcl commands are always coupled to a specific interpreter, the one passed to Tcl_CreateObjCommand as its first parameter, and Tcl interpreters are strictly bound to threads (because the Tcl implementation uses quite a few thread-specific variables internally to reduce the number of global locks). Instead, the implementation coordinates between threads by means of messages; the most common sorts of message are “here is a Tcl script to run for me” and “here are the results of running that script”, though there are others.
So no, Tcl commands can't be shared between threads. If you've written their code right (often by avoiding globals or adding appropriate locks) you can use the same command implementation in multiple interpreters in multiple threads, but they're not technically the same command; they merely look the same at first glance. For example, if you put a trace on the command in one thread, its callbacks will only be invoked in that one interpreter, not in any other interpreter that has a command with the same implementation and the same name.
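As a small illustration of that last point (the demo proc here is purely illustrative):
package require Thread

# The "same" proc defined in two threads is really two distinct commands,
# so a trace added in this interpreter never fires in the other one.
proc demo {} { return demo-result }
trace add execution demo enter {apply {args {puts "trace fired in [thread::id]"}}}

set tid [thread::create]
thread::send $tid {proc demo {} { return demo-result }}
thread::send $tid demo   ;# runs the other thread's demo: no trace output
demo                     ;# runs ours: the trace fires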
You can make a delegate command in the other threads that asks the main thread to run the command and send you the results back.
package require Thread
# This procedure makes delegates; this is a little messy...
proc threadDelegateCommand {thread_id command_name} {
    # Relies on thread IDs always being “nice” words, which they are
    thread::send $thread_id [list proc $command_name args "
        thread::send [thread::id] \[list [list $command_name] {*}\$args\]
    "]
}
# A very silly example; use your code here instead
proc theExampleCommand {args} {
    puts "This is in [thread::id] and has [llength $args] arguments: [join $args ,]"
    return [tcl::mathop::+ {*}$args]
}
# Make the thread
set tid [thread::create]
puts "This is [thread::id] and $tid has just been created"
# Make the delegate for our example
threadDelegateCommand $tid theExampleCommand
# Show normal execution in the other thread
puts [thread::send $tid {format "This is %s" [thread::id]}]
# Show that our delegate can call back. IMPORTANT! Note that we're using an asynchronous
# send here to avoid a deadlock due to the callbacks involved.
thread::send -async $tid {
    after 5000
    theExampleCommand 5 4 3 2 1
} foo
vwait foo
I have a custom test tool written in Tcl and bash (mainly in Tcl; some initial configuration and checks are done by bash). It doesn't have an exact starting point: the outside bash script (and sometimes the application being tested) calls specific functions, which it finds through a "tclIndex" file created by auto_mkindex.
This tool creates a log file with many puts calls, which are redirected to the log file location.
Most of the functions have a trackBegin call at the beginning and a trackEnd call at the end. These two get the function's name as a parameter, which helps us track where a problem is.
Sadly, this tracking was forgotten in some recent modifications, and it's not even very reliable, because it won't track an abnormal exit from a function. So I tried to remove all of those calls and instead create a renamed _proc that overrides the original and places the two trackers before and after the execution of the function itself.
But I got a lot of errors (some I solved, though I don't know if it was the best way; some are not solved at all, so I'm stuck). These are the main ones:
Because there is no exact entry point, where and how should I define this overriding proc so that it applies to all of the procedures in this execution? Some of my files had to be manually modified to use _proc to work (mostly the ones where there is code outside the procedures and those files are called as scripts rather than as functions through the tclIndex; the function-only ones are all in a utils folder, and only there, if that helps).
In the tracker line I placed a clock format call, and it always causes an abnormal exit.
I had problems with the returned values (when there was one, and sometimes when there wasn't), even when it was a return or an exit.
So my question is in short:
How can I implement an overridden proc that writes a "begin" and "end" block into a log file before and after the procedure itself (the log file location is obtained from the bash side of this tool), when there is no clear entry point on the Tcl side and the tool uses an auto_mkindex-generated procedure index file?
Thanks,
Roland.
Untested
Assuming your bash script does something like
tclsh file.tcl
You could do
tclsh instrumented.tcl file.tcl
where instrumented.tcl would contain
proc trackBegin {name} {...}
proc trackEnd {name output info} {...}

rename proc _proc
_proc proc {name args body} {
    set new_body [format {
        trackBegin %s
        catch {%s} output info
        trackEnd %s $output $info
        # Propagate the body's original result (or error) to the caller
        return -options $info $output
    } $name $body $name]
    _proc $name $args $new_body
}

source [lindex $argv 0]
See the return and catch pages for what to do with the info dictionary.
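For instance, a minimal sketch of the two hooks, assuming the log file path is handed over from the bash side in a LOGFILE environment variable (that variable name is purely an assumption):
proc trackBegin {name} {
    set f [open $::env(LOGFILE) a]
    puts $f "[clock format [clock seconds] -format %T] BEGIN $name"
    close $f
}
proc trackEnd {name output info} {
    # -code is 0 for ok, 1 for error, 2 for an explicit return, and so on
    set f [open $::env(LOGFILE) a]
    puts $f "[clock format [clock seconds] -format %T] END $name code=[dict get $info -code]"
    close $f
}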
You'll have to show us some of your code to provide more specific help, particularly for your clock error.
I'd be tempted to use execution tracing for this, with the traces being added from an execution trace on proc itself (after all, proc is just a regular Tcl command). In particular, we can do this:
proc addTracking {cmd args} {
    set procName [lindex $cmd 1]
    uplevel 1 [list trace add execution $procName enter [list trackBegin $procName]]
    uplevel 1 [list trace add execution $procName leave [list trackEnd $procName]]
}
proc trackBegin {name arguments operation} {
    # ignore operation, arguments might be interesting
    ...
}
proc trackEnd {name arguments code output operation} {
    # ignore operation, arguments might be interesting
    ...
}
trace add execution proc leave addTracking
It doesn't give you quite the same information, but it does allow you to staple code around the outside non-invasively.
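For example, the ... bodies above could be filled in along these lines (a sketch that assumes the log channel is already open in a global logChannel variable; adapt it to however your tool obtains the log location from bash):
proc trackBegin {name arguments operation} {
    puts $::logChannel "[clock format [clock seconds] -format %T] BEGIN $name"
}
proc trackEnd {name arguments code output operation} {
    # code is the numeric return code: 0 = ok, 1 = error, 2 = return, ...
    puts $::logChannel "[clock format [clock seconds] -format %T] END $name code=$code"
}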
What I want to do is parse an argument to a Tcl proc as a string, without any evaluation.
For example, say I had a trivial proc that just prints out its arguments:
proc test { args } {
puts "the args are $args"
}
What I'd like to do is call it with:
test [list [expr 1+1] [expr 2+2]]
And NOT have Tcl evaluate the [list [expr 1+1] [expr 2+2]]. Or even if it evaluated it, I'd still like to have the original command line. Thus with the trivial "test" proc above I'd like to be able to return:
the args are [list [expr 1+1] [expr 2+2]]
Is this possible in tcl 8.4?
You cannot do this with Tcl 8.4 (and before); the language design makes this impossible. The fix is to pass in arguments unevaluated (and enclosed in braces). You can then print them however you like. To get their evaluated form, you need to do this inside your procedure:
set evaluated_x [uplevel 1 [list subst $unevaluated_x]]
That's more than a bit messy!
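Putting that together, a small sketch of the 8.4-friendly pattern (the proc and variable names are only illustrative):
proc test {script} {
    # The caller passes the code in braces, so we see it unevaluated
    puts "the args are $script"
    # Evaluate the substitutions in the caller's context when the values are needed
    set evaluated [uplevel 1 [list subst $script]]
    puts "the values are $evaluated"
}
test {[list [expr 1+1] [expr 2+2]]}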
If you were using Tcl 8.5, you'd have another alternative:
set calling_code [dict get [info frame -1] cmd]
The info frame -1 gets a dictionary holding a description of the current command in the context that called the current procedure, and its cmd key is the actual command string prior to substitution rules being applied. That should be about what you want (though be aware that it includes the command name itself).
This is not available for 8.4, nor will it ever be backported. You might want to upgrade!
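With 8.5, a sketch of that approach could look like this:
proc test {args} {
    # The cmd key holds the caller's command string before substitution,
    # including the command name itself
    set calling_code [dict get [info frame -1] cmd]
    puts "the args are $calling_code"
}
test [list [expr 1+1] [expr 2+2]]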
When passing the arguments into test, enclose them in braces, e.g.:
test {[list [expr 1+1] [expr 2+2]]}
Is there a way to manipulate non-global variables from a fileevent handler? Consider the following minimal server:
proc initState {stateName} {
    upvar $stateName state
    set state(foo) bar
    set state(baz) bla
    # ...
    return
}

proc handleConnection {stateName newsock clientAddress clientPort} {
    upvar $stateName state
    fconfigure $newsock -blocking 0
    fconfigure $newsock -buffering line
    fileevent $newsock readable [list handleData $newsock]
    return
}

proc handleData {f} {
    if {[eof $f]} {
        fileevent $f readable {}
        close $f
        return
    }
    gets $f line
    puts $f ok
    # need to modify state here...
    return
}

proc runServer {port} {
    array set state {}
    initState state
    socket -server {handleConnection state} $port
    vwait forever
}

runServer 1234
Is there any way to manipulate the state array created in the scope of runServer, or is the only option to make state a global variable?
I'm pretty new to Tcl; if I were using C I would simply pass a pointer to state into the event handler, but unfortunately Tcl does not allow that. Am I doing anything weird here? Is there a more Tcl-ish way?
That's simply not going to work. The issue is that Tcl's stack frames do not persist in the way that what you want would require.
The standard options to work around this are:
Keep the state in a global array that is indexed by a "connection token" (e.g., the name of the channel). Remember that arrays are indexed by strings; composite keys like “sock42,hostname” are quite legal. (A sketch of this option follows after this list.)
Keep the state in a namespace named after the connection token. If you're using Tcl 8.5, the namespace upvar command makes this much easier.
Keep the state in a TclOO object (requires Tcl 8.6 or the separate TclOO package for 8.5) or use a different object system (e.g., [incr Tcl], XOTcl; these are available for many Tcl versions).
Keep the state in a coroutine (requires Tcl 8.6). This effectively gives you a named stack (and lets you write your code so it is apparently “straight line” instead of driven by callback) but its version requirement is strict.
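A rough sketch of the first option, using a global state array keyed by the channel name (the exact key layout is just one possible choice):
proc handleConnection {newsock clientAddress clientPort} {
    global state
    # Per-connection state lives under keys prefixed with the channel name
    set state($newsock,foo) bar
    set state($newsock,baz) bla
    fconfigure $newsock -blocking 0 -buffering line
    fileevent $newsock readable [list handleData $newsock]
}

proc handleData {f} {
    global state
    if {[eof $f]} {
        fileevent $f readable {}
        close $f
        array unset state $f,*   ;# drop this connection's state
        return
    }
    gets $f line
    puts $f ok
    set state($f,lastLine) $line   ;# modify per-connection state here
}

socket -server handleConnection 1234
vwait forever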