Is there any way to list all the procedures (procs) in myFile.tcl, either from another Tcl file or from within the same file?
You can use [info procs] before and after sourcing the file in question and compare the results to determine which procs were added. For example:
proc diff {before after} {
    set result [list]
    foreach name $before {
        set procs($name) 1
    }
    foreach name $after {
        if { ![info exists procs($name)] } {
            lappend result $name
        }
    }
    return [lsort $result]
}
set __before [info procs]
source myFile.tcl
set __after [info procs]
puts "Added procs: [diff $__before $__after]"
One thing I like about this solution is that the diff procedure is really just a generic set differencing utility -- it's not specific to comparing lists of defined procedures.
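If you are on Tcl 8.6, the same set difference can be written more compactly with lmap; this is just a sketch, not part of the original answer:

proc diff {before after} {
    lsort [lmap name $after {
        if {$name in $before} continue
        set name
    }]
}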
The cheapest way is to just open the file and use regexp to pick out the names. It's not perfectly accurate, but it does a reasonably good job.
set f [open "sourcefile.tcl"]
set data [read $f]
close $f
foreach {dummy procName} [regexp -all -inline -line {^[\s:]*proc (\S+)} $data] {
puts "Found procedure $procName"
}
Does it deal with all cases? No. Does it deal with a useful subset? Yes. Is the subset large enough for you? Quite possibly.
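Wrapped up as a reusable helper, the same scan returns the names as a list. This is a sketch; scanProcNames is just an illustrative name:

proc scanProcNames {fileName} {
    set f [open $fileName]
    set data [read $f]
    close $f
    set names [list]
    foreach {dummy procName} [regexp -all -inline -line {^[\s:]*proc (\S+)} $data] {
        lappend names $procName
    }
    return $names
}
puts "Found procedures: [scanProcNames sourcefile.tcl]"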
Yes it is, although not that easy. The basic idea is to source the file in a modified slave interp that only executes some commands:
proc proc_handler {name arguments body} {
    puts $name
}
set i [interp create -safe]
interp eval $i {proc unknown args {}}
interp alias $i proc {} proc_handler
interp invokehidden $i source yourfile.tcl
This approach will fail if the file requires other packages (package require will not work), relies on the result of some usually auto_load'ed commands, etc.
It also does not take namespaces into account: namespace eval ::foo {proc bar a {}} creates a proc with the name ::foo::bar.
For a more complex implementation you could look into auto.tcl's auto_mkindex, which has a similar goal.
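For reference, here is a minimal self-contained sketch of the slave-interp idea that collects the names in a list instead of printing them; ::collectedProcs and yourfile.tcl are illustrative names, and the caveats above still apply:

set ::collectedProcs [list]
proc proc_handler {name arguments body} {
    lappend ::collectedProcs $name
}
set i [interp create -safe]
interp eval $i {proc unknown args {}}   ;# swallow anything else the file tries to run
interp alias $i proc {} proc_handler    ;# every proc definition reports back to us
interp invokehidden $i source yourfile.tcl
interp delete $i
puts "Found procs: $::collectedProcs"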
Here is a different approach:
Create a temporary namespace
Source (include) the script in question, then
Use the info procs command to get a list of procs
Delete the temporary namespace upon finish
Here is my script, list_procs.tcl:
#!/usr/bin/env tclsh
# Script to scan a Tcl script and list all the procs
proc listProcsFromFile {fileName} {
    namespace eval TempNamespace {
        source $fileName
        set procsList [info procs]
    }
    set result $::TempNamespace::procsList
    namespace delete TempNamespace
    return $result
}
set fileName [lindex $::argv 0]
set procsList [listProcsFromFile $fileName]
puts "File $fileName contains the following procs: $procsList"
For example, if you have the following script, procs.tcl:
proc foo {a b c} {}
proc bar {a} {}
Then running the script will produce:
$ tclsh list_procs.tcl procs.tcl
File procs.tcl contains the following procs: foo bar
I was wondering how you would find the name of the test you're running in Tcl from within the test itself? I couldn't find this on Google.
I'm calling another proc and passing it the name of the calling test as an argument, so I would like to know which Tcl command can do that for me.
This isn't an encouraged use case… but you can use info frame 1 to get the information if you use it directly inside the test.
proc example {contextTest} {
    puts "Called from $contextTest"
    return ok
}
tcltest::test foo-1.1 {testing the foo} {
    example [lindex [dict get [info frame 1] cmd] 1]
} ok
This assumes that you're using Tcl 8.5 or later, but Tcl 8.5 is the oldest currently-supported Tcl version so that's a reasonable restriction.
I read your comments ("source ... instead of my test name") as follows: you seem to source the Tcl script file containing the tests (and Donal's instrumented tcltest) rather than batch-running the script from the command line (tclsh /path/to/your/file.tcl). In this setting, there will be an extra ("eval") stack frame which distorts the introspection.
To make Donal's instrumentation more robust, I suggest actually walking the Tcl stack and watching out for a valid tcltest frame. This could look as follows:
package require tcltest
proc example {} {
    for {set i 1} {$i <= [info frame]} {incr i} {
        set frameInfo [info frame $i]
        set frameType [dict get $frameInfo type]
        set cmd [dict get $frameInfo cmd]
        if {$frameType eq "source" && [lindex $cmd 0] eq "tcltest::test"} {
            puts "Called from [lindex $cmd 1]"
            return ok
        }
    }
    return notok
}
tcltest::test foo-1.1 {testing the foo} {
    example
} ok
This will print "Called from foo-1.1" both when called as:
$ tclsh test.tcl
Called from foo-1.1
and
$ tclsh
% source test.tcl
Called from foo-1.1
% exit
The Tcl version used (8.5 or 8.6) is not relevant. However, you are advised to upgrade to 8.6; 8.5 has reached its end of life.
So I have the following situation:
$ ls -l
-r--r----- 1.tcl
-rw-rw---- 2.tcl
$ cat 1.tcl
proc foo {args} {
    puts "$bar"
}
and I need to make 1.tcl print something other than "can't read \"bar\"". In a good programming language, the obvious solution would be
$ cat > 2.tcl
set -global bar "hello, world"
foo
What would be a reasonable workaround in Tcl? Unfortunately, the real foo is a long function that I can't really make a copy of or sed into a temporary file at runtime.
You can do this for your specific example
$ cat 2.tcl
source 1.tcl
set bar "Hello, bar!"
# add a "global bar" command to the foo procedure
proc foo [info args foo] "global bar; [info body foo]"
foo
$ tclsh 2.tcl
Hello, bar!
Clearly this doesn't scale very well.
If the variable is simply undefined, the easiest way would be to patch the procedure with a definition:
proc foo [info args foo] "set bar \"hello, world\" ; [info body foo]"
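For example, as an interactive sketch:
% source 1.tcl
% proc foo [info args foo] "set bar \"hello, world\" ; [info body foo]"
% foo
hello, world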
You can also accomplish this using a read trace and a helper command. This removes the problem I mentioned above, where local assignments destroy the value you wanted to inject.
The original procedure, with an added command that sets the local variable to a value which is later printed.
proc foo args {
    set bar foobar
    puts "$bar"
}
% foo
foobar
Create a global variable (it doesn't matter if the name is the same or not).
set bar "hello, world"
Create a helper command that gets the name of the local variable, links to it, and assigns the value of the global variable to it. Since we already know the name we could hardcode it in the procedure, but this is more flexible.
proc readbar {name args} {
    upvar 1 $name var
    global bar
    set var $bar
}
Add the trace to the body of the foo procedure. The trace will fire whenever the local variable bar is read, i.e. something attempts to retrieve its value. When the trace fires, the command readbar is called: it overwrites the current value of the variable with the globally set value.
proc foo [info args foo] "trace add variable bar read readbar; [info body foo]"
% foo
hello, world
If one doesn't want to pollute the namespace with the helper command, one can use an anonymous function instead:
proc foo [info args foo] [format {trace add variable bar read {apply {{name args} {
    upvar 1 $name var
    global bar
    set var $bar
}}} ; %s} [info body foo]]
Documentation: apply, format, global, info, proc, puts, set, trace, upvar, Syntax of Tcl regular expressions
Another option is to simply catch the error when foo is called from 2.tcl:
source 1.tcl
try {
    foo
} on error {err res} {
    set einfo [dict get $res -errorinfo]
    if { [regexp {no such variable} $einfo] } {
        puts "hello, world"
        return -code 0
    } else {
        puts $einfo
        return -code [dict get $res -code]
    }
}
Tcl's procedures do not resolve variables to anything other than local variables by default. You have to explicitly ask for them to refer to something else (e.g., with global, variable or upvar). This means that it's always possible to see at a glance when non-local things are happening, and it is also why your script doesn't work as written.
It's possible to override this behaviour with a variable resolver, but Tcl doesn't really expose that API at the script level. Some extensions do more, though. For example, it might work to use [incr Tcl] (i.e., itcl), as it does that sort of thing for variables in its objects. I can't remember whether Expect also does this, or whether it uses special-cased code for handling its variables.
Of course, you could get really sneaky and override the behaviour of proc.
rename proc real_proc
real_proc proc {name arguments body} {
    uplevel 1 [list real_proc $name $arguments "global bar;$body"]
}
That's rather nasty though.
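A usage sketch: with the override installed before sourcing, every procedure defined afterwards sees the global bar.

set bar "hello, world"
source 1.tcl
foo   ;# prints: hello, world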
I have a file like this:
set position {0.50 0.50}
set visibility false
set text {ID: {entity.id}\n Value: {entity.contour_val}}
And I want to do something similar to source, but I want to use a file handle only.
My current attempt looks like this:
proc readArray {fileHandle arrayName} {
    upvar $arrayName arr
    set cl 0
    while {! [eof $fileHandle]} {
        set cl [expr "$cl + 1"]
        set line [gets $fileHandle]
        if [$line eq {}] continue
        puts $line
        namespace eval ::__esg_priv "
            uplevel 1 {*}$line
        "
        info vars ::__esg_priv::*
        foreach varPath [info vars ::__esg_priv::*] {
            set varName [string map { ::__esg_priv:: "" } $varPath]
            puts "Setting arr($varName) -> [set $varPath]"
            set arr($varName) [set $varPath]
        }
        namespace delete __esg_priv
    }
    puts "$cl number of lines read"
}
In place of uplevel I tried many combinations of eval and quoting.
My problem is that it either fails on the lines containing lists or it does not actually set the variables.
What is the right way to do it, if the executed commands are expected to be arbitrary valid code?
An extra question would be how to properly apply error checking, which I haven't tried yet.
After a call to
readArray [open "myFile.tcl" r] arr
I expect that
parray arr
issues something like:
arr(position) = 0.50 0.50
arr(text) = ID: {entity.id}\n Value: {entity.contour_val}
arr(visibility) = false
BTW: The last line contains internal {}, which are supposed to make it into the string variables. And there is no intent to make this a dict.
This code works, but there are still some problems with it:
proc readArray {fileHandle arrayName} {
    upvar $arrayName arr
    set cl 0
    while {! [eof $fileHandle]} {
        incr cl ;# !
        set line [gets $fileHandle]
        if {$line eq {}} continue ;# !
        puts $line
        namespace eval ::__esg_priv $line ;# !
        foreach varPath [info vars ::__esg_priv::*] {
            set varName [string map { ::__esg_priv:: "" } $varPath]
            puts "Setting arr($varName) -> [set $varPath]"
            set arr($varName) [set $varPath]
        }
        namespace delete __esg_priv
    }
    puts "$cl number of lines read"
}
I've taken out a couple of lines that didn't seem necessary, and changed some lines a bit.
You don't need set cl [expr "$cl + 1"]: incr cl will do.
if [$line eq {}] continue will fail because the [...] is a command substitution. if {$line eq {}} continue (braces instead of brackets) does what you intend.
Unless you are accessing variables in another scope, you won't need uplevel. namespace eval ::__esg_priv $line will evaluate one line in the designated namespace.
I didn't change the following, but maybe you should:
set varName [string map { ::__esg_priv:: "" } $varPath] works as intended, but set varName [namespace tail $varPath] is cleaner.
Be aware that if there exists a global variable with the same name as one of the variables in your file, no namespace variable will be created; the global variable will be updated instead.
If you intend to use the value in the text variable as a dictionary, you need to remove either the \n or the braces.
According to your question title, you want to evaluate the file line by line. If that requirement can be lifted, your code could be simplified by reading the whole script in one operation and then evaluating it with a single namespace eval.
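For example, a minimal sketch of that whole-file variant (the caveat about clashing global variables still applies):

proc readArray {fileHandle arrayName} {
    upvar 1 $arrayName arr
    # Evaluate the whole script at once inside the scratch namespace
    namespace eval ::__esg_priv [read $fileHandle]
    foreach varPath [info vars ::__esg_priv::*] {
        set arr([namespace tail $varPath]) [set $varPath]
    }
    namespace delete ::__esg_priv
}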
ETA
This solution is a lot more robust in that it reads the script in a sandbox (always a good idea when writing code that will execute arbitrary external code) and redefines (within that sandbox) the set command to create members in your array instead of regular variables.
proc readArray {fileHandle arrayName} {
    upvar 1 $arrayName arr
    set int [interp create -safe]
    $int alias set apply {{name value} {
        uplevel 1 [list set arr($name) $value]
    }}
    $int eval [read $fileHandle]
    interp delete $int
}
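Usage is the same as before (a sketch; the file name comes from the question):

set fh [open "myFile.tcl" r]
readArray $fh arr
close $fh
parray arr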
To make it even safer against unexpected interaction with global variables etc., look at the interp package in Tcllib. It lets you create an interpreter that is completely empty.
Documentation: apply, continue, eof, foreach, gets, if, incr, info, interp package, interp, list, namespace, proc, puts, set, string, uplevel, upvar, while
The script sources N files:
source file 1
source file 2
.
.
source file N
When a particular procedure A is called, it is actually present in most of the sourced files; the definition from the last sourced file that contains proc A is the one that takes effect.
How can I find out which file's copy of the proc is used when I call it?
Is there any code I can use to achieve this?
The simplest way (assuming Tcl 8.5 or 8.6) is to use an execution trace to call info frame to get the details of the call stack.
trace add execution A enter callingA
proc callingA args {
    set ctxt [info frame -1]
    if {[dict exists $ctxt file] && [dict exists $ctxt proc]} {
        puts "Called [lindex $args 0 0] from [dict get $ctxt proc] in [dict get $ctxt file]"
    } elseif {[dict exists $ctxt proc]} {
        puts "Called [lindex $args 0 0] from [dict get $ctxt proc] (unknown location)"
    } else {
        # Fallback
        puts "Called [lindex $args 0 0] from within [file normalize [info script]]"
    }
}
There's quite a bit of other information in the dictionary returned by info frame.
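For example, this small sketch (showFrame is just an illustrative name) dumps every key in the caller's frame dictionary; typical keys include type, line, file, cmd, proc and level:

proc showFrame {} {
    dict for {key value} [info frame -1] {
        puts [format "%-6s = %s" $key $value]
    }
}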
For Tcl 8.4
In Tcl 8.4, you don't have info frame, and Tcl doesn't remember where procedures are defined by default. You still have execution traces, though (they were a new feature in Tcl 8.4), so that's OK. (We have to be a bit careful with info script, as it is only valid during the source and not after it finishes; procedures tend to be called later.)
To record where every procedure is defined, you have to intercept proc itself, and do so early in your script's execution! (Procedures defined before you set up the interceptor aren't noticed; Tcl's semantics are purely operational.) Fortunately, you can use an execution trace for this.
proc procCalled {cmd code args} {
    if {$code == 0} {
        global procInFile
        set procName [uplevel 1 [list namespace which [lindex $cmd 1]]]
        set procInFile($procName) [file normalize [info script]]
    }
}
# We use a leave trace for maximum correctness
trace add execution proc leave procCalled
Then you use another execution trace on the command whose callers you want to know about; it looks up the calling procedure, and hence where that procedure was defined.
proc callingA args {
    # Wrap in a catch so a lookup failure doesn't cause problems
    if {[catch {
        set caller [lindex [info level -1] 0]
        global procInFile
        set file $procInFile($caller)
        puts "Calling [lindex $args 0 0] from $caller in $file"
    }]} {
        # Not called from procedure!
        puts "Calling [lindex $args 0 0] from within [file normalize [info script]]"
    }
}
trace add execution A enter callingA
In an Ant script we access a properties file as below:
<property file="input.properties"/>
In a Perl script we access a properties file as below:
do "config.cfg";
In the same way, how can I access a properties file in a Tcl script?
Can anyone help me out, please?
Thanks in advance...
Okay, if you want it as dumb as in Perl, just source the file in Tcl.
Configuration file sample (named config.tcl):
# Set "foo" variable:
set foo bar
To load this configuration file:
source config.tcl
After source-ing, you can access your variable foo in your script.
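For example:
source config.tcl
puts "foo is $foo"   ;# prints: foo is bar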
As with Perl, a malicious user might put something like
exec rm -rf ~
in your "config file" and wish you good luck.
The equivalent of Perl's
$var = "test";
is in Tcl
set var "test"
So if you want it as easy as in Perl, I suggest kostix's answer.
But you could also try to use a dict as the config file.
This would look like:
var {hello world}
other_var {Some data}
foo {bar baz}
I personally love using this; it even allows nesting:
nestedvar {
    subvar  {value1}
    subvar2 {value2}
}
And comments (kind of a hack: the comment text is in fact stored under the key #):
# {This is a comment}
Parsing:
set fd [open config.file]
set config [read $fd]
close $fd
dict unset config #; # Remove comments.
Access:
puts [dict get $config var]
puts [dict get $config nestedvar subvar]
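If a key might be absent, a guarded lookup avoids the lookup error; a small sketch:

if {[dict exists $config nestedvar subvar]} {
    puts [dict get $config nestedvar subvar]
} else {
    puts "nestedvar/subvar is not configured"
}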
But if you really want something like $var = "foo"; (which is valid Perl code but not Tcl), then you have to parse the file yourself.
An example:
proc parseConfig {file} {
    set fd [open $file]
    while {[gets $fd line] != -1} {
        if {[regexp {^\s*\$([^\s=]+)\s*=\s*(.*?)\s*;?\s*$} $line -> var value]} {
            # expr parses funny stuff like 1 + 2, \001 inside strings, etc.
            # But this is NOT Perl, so "foo" . "bar" will fail.
            set ::$var [expr $value]
        }
    }
    close $fd
}
Downside: it does not allow multi-line settings, will throw an error if there is an invalid value, and allows command injection (but your Perl solution does that too).
The simplest mechanism is to either make it a script or to make it the contents of an array. Here's how to do the latter while still supporting comments:
proc loadProperties {arrayName fileName} {
    # Put array in context
    upvar 1 $arrayName ary
    # Load the file contents
    set f [open $fileName]
    set data [read $f]
    close $f
    # Magic RE substitution to remove comment lines
    regsub -all -line {^\s*#.*$} $data {} data
    # Flesh out the array from the (now clean) file contents
    array set ary $data
}
Then you'd use it like this:
loadProperties myProps ~/myapp.props
if {[info exists myProps(debug)] && $myProps(debug)} {
    parray myProps
}
With a file in your home directory (called myapp.props) like this:
# Turn on debug mode
debug true
# Set the foos and the bars
foo "abc"
bar "Harry's place downtown"
You can do a lot more complicated things than that, but this gives you an easy format to get going with.
If you prefer to use an executable configuration, just do:
# Define an abstraction that we want users to use
proc setProperty {key value} {
    # Store in a global associative array, but could be anything you want
    set ::props($key) $value
}
source ~/myapp_config.tcl
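A hypothetical ~/myapp_config.tcl written against that abstraction could look like this; afterwards the values are available in the ::props array (for example $::props(greeting)):

# ~/myapp_config.tcl (illustrative content)
setProperty debug    true
setProperty greeting "hello, world"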
If you want to restrict the operations to ones that won't cause (much) trouble, you need a slightly more complex approach:
interp create -safe parser
proc SetProp {key value} {
    set ::props($key) $value
}
# Make a callback in the safe context to our main context property setter
interp alias parser setProperty {} SetProp
# Do the loading of the file. Note that this can't be invoked directly from
# within the safe context.
interp invokehidden parser source [file normalize ~/myapp_config.tcl]
# Get rid of the safe context; it's now surplus to requirements and contaminated
interp delete parser
Safety has pretty low overhead.
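Reading the values back afterwards works just like in the earlier array example (a sketch; the debug key comes from the sample properties file above):

if {[info exists ::props(debug)] && $::props(debug)} {
    parray ::props
}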