How can I use an input.properties file-like concept in a Tcl script - tcl

In an Ant script we access a properties file as below:
<property file="input.properties"/>
In a Perl script we access a properties file as below:
do "config.cfg";
In the same way, how can I access a properties file in a Tcl script?
Can anyone help me out, please?
Thanks in advance.

Okay, if you want it as dumb as in Perl, just source the file in Tcl.
Configuration file sample (named config.tcl):
# Set "foo" variable:
set foo bar
To load this configuration file:
source config.tcl
After source-ing, you can access your variable foo in your script.
As with Perl, a malicious user might put something like
exec rm -rf ~
in your "config file" and wish you good luck.
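If you go this route, a common pattern (just a sketch; config.tcl and foo are the names from the example above, and the default value is made up) is to set defaults first and only then source the file, so absent settings still have sane values:
set foo "default-foo"            ;# default, used when the config file doesn't set it
if {[file exists config.tcl]} {
    source config.tcl            ;# user settings override the default above
}
puts "foo is $foo"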

The equivalent of Perl's
$var = "test";
is in Tcl
set var "test"
So if you want it as easy as in Perl, I suggest kostix's answer.
But you could also try to use a dict as the config file format.
It will look like:
var {hello world}
other_var {Some data}
foo {bar baz}
I personally love using this; it even allows nesting:
nestedvar {
subvar {value1}
subvar2 {value2}
}
And comments (kind of a hack: a comment is really just an entry with the key #):
# {This is a comment}
Parsing:
set fd [open config.file]
set config [read $fd]
close $fd
dict unset config #; # Remove comments.
Access:
puts [dict get $config var]
puts [dict get $config nestedvar subvar]
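If you prefer not to rely on every key being present, you can guard with dict exists and walk the whole configuration with dict for. A small sketch, assuming $config has been loaded as above:
if {[dict exists $config var]} {
    puts [dict get $config var]
} else {
    puts "var not set, falling back to a default"
}
# Walk over every configured key/value pair.
dict for {key value} $config {
    puts "$key = $value"
}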
But if you really want something like $var = "foo"; (which is valid Perl code but not Tcl), then you have to parse the file yourself.
An example:
proc parseConfig {file} {
set fd [open $file]
while {[gets $fd line] != -1} {
# Capture up to (but not including) an optional trailing semicolon.
if {[regexp {^\s*\$([^\s=]+)\s*=\s*([^;]*);?\s*$} $line -> var value]} {
# The expr parses funny stuff like 1 + 2, \001 inside strings etc.
# But this is NOT Perl, so "foo" . "bar" will fail.
set ::$var [expr $value]
}
}
close $fd
}
Downsides: it does not allow multi-line settings, it will throw an error on an invalid value, and it allows command injection (but your Perl solution does that too).
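For example (a sketch; the file name config.cfg comes from the question, its contents are hypothetical), given a Perl-style file containing the two lines $name = "myproject"; and $timeout = 42;, you would load and use it like this:
parseConfig config.cfg
puts "name = $::name, timeout = $::timeout"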

The simplest mechanism is to either make it a script or to make it the contents of an array. Here's how to do the latter while still supporting comments:
proc loadProperties {arrayName fileName} {
# Put array in context
upvar 1 $arrayName ary
# Load the file contents
set f [open $fileName]
set data [read $f]
close $f
# Magic RE substitution to remove comment lines
regsub -all -line {^\s*#.*$} $data {} data
# Flesh out the array from the (now clean) file contents
array set ary $data
}
Then you'd use it like this:
loadProperties myProps ~/myapp.props
if {[info exists myProps(debug)] && $myProps(debug)} {
parray myProps
}
With a file in your home directory (called myapp.props) like this:
# Turn on debug mode
debug true
# Set the foos and the bars
foo "abc"
bar "Harry's place downtown"
You can get a lot more complicated than that, but it gives you an easy format to get going with.
If you prefer to use an executable configuration, just do:
# Define an abstraction that we want users to use
proc setProperty {key value} {
# Store in a global associative array, but could be anything you want
set ::props($key) $value
}
source ~/myapp_config.tcl
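A user's myapp_config.tcl would then just be a sequence of setProperty calls; the keys below are made up for illustration:
# myapp_config.tcl
setProperty debug   true
setProperty logfile ~/myapp.log
setProperty retries 3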
If you want to restrict the operations to ones that won't cause (much) trouble, you need a slightly more complex approach:
interp create -safe parser
proc SetProp {key value} {
set ::props($key) $value
}
# Make a callback in the safe context to our main context property setter
interp alias parser setProperty {} SetProp
# Do the loading of the file. Note that this can't be invoked directly from
# within the safe context.
interp invokehidden parser source [file normalize ~/myapp_config.tcl]
# Get rid of the safe context; it's now surplus to requirements and contaminated
interp delete parser
Safety has pretty low overhead.
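With either of the executable-configuration styles above, the settings end up in the global props array, so afterwards you can inspect and use them like this (a sketch reusing the hypothetical keys from the sample config):
parray props
if {[info exists props(debug)] && $props(debug)} {
    puts "debug mode is on"
}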

Related

Assigning a value to a variable only if argv is specified in Tcl

I am new to Tcl scripting. I have a script called "Sample.tcl". In Sample.tcl I have a variable called $name. How can I assign a value to the variable only if a specific argv is present, i.e.
Sample.tcl -step xyz
Only if I specify -step should $name be xyz.
I'm not sure what $name might be in this context (it's a really unusual name for a variable, and using variable variable names is typically a bad idea) but under the guess that you're trying to set step to xyz in this case, you can put this in your script:
apply {{} {
# For each pair of values in the arguments (after the script name)
global argv
foreach {key value} $argv {
# Safety-check: if the key starts with a hyphen...
if {[string match -* $key]} {
# ... strip the leading hyphen(s)
set varname [string trimleft $key "-"]
# ... bind that global var name to a local name
upvar 1 $varname var
# ... and set the variable to the value we've got.
set var $value
}
}
}}
It's done in an apply so that we don't pollute the global namespace with all our working variables (key, value, varname and var) and because we don't really need to make a procedure for something we're only going to do once.
This isn't a safe piece of code, not by any means, but it is a useful and flexible way to get something working.
In general, parsing command line arguments can take quite a bit of thought to get perfectly right and there's various packages to help out, but that's only really important when writing code for other people to run. When it's just for yourself, you can be a lot sloppier and get the job done in a few minutes.
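If all you actually need is that single -step option, a minimal sketch without any package (using lsearch; the variable name name comes from the question) would be:
# Set name only when -step is present and is followed by a value.
set idx [lsearch -exact $argv -step]
if {$idx >= 0 && $idx + 1 < [llength $argv]} {
    set name [lindex $argv [expr {$idx + 1}]]
}
if {[info exists name]} {
    puts "name = $name"
}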
Using the cmdline package from tcllib you could write:
#!/usr/bin/env tclsh
package require cmdline
set options {
{step.arg "" "Set the step value"}
}
try {
array set params [cmdline::getoptions argv $options]
} on error e {
puts stderr $e
exit 1
}
if {$params(step) ne ""} {
set name $params(step)
}
if {[info exists name]} {
puts "name = $name"
} else {
puts "name is not set"
}

Execute Tcl commands line by line

I have a file like this:
set position {0.50 0.50}
set visibility false
set text {ID: {entity.id}\n Value: {entity.contour_val}}
And I want to do something similar to source, but I want to use a file handle only.
My current attempt looks like this:
proc readArray {fileHandle arrayName} {
upvar $arrayName arr
set cl 0
while {! [eof $fileHandle]} {
set cl [expr "$cl + 1"]
set line [gets $fileHandle]
if [$line eq {}] continue
puts $line
namespace eval ::__esg_priv "
uplevel 1 {*}$line
"
info vars ::__esg_priv::*
foreach varPath [info vars ::__esg_priv::*] {
set varName [string map { ::__esg_priv:: "" } $varPath]
puts "Setting arr($varName) -> [set $varPath]"
set arr($varName) [set $varPath]
}
namespace delete __esg_priv
}
puts "$cl number of lines read"
}
In place of uplevel I tried many combinations of eval and quoting.
My problem is, it either fails on the lines with lists or it does not actually set the variables.
What is the right way to do it, if the executed commands are expected to be any valid code?
An extra question would be how to properly apply error checking, which I haven't tried yet.
After a call to
readArray [open "myFile.tcl" r] arr
I expect that
parray arr
issues something like:
arr(position) = 0.50 0.50
arr(text) = ID: {entity.id}\n Value: {entity.contour_val}
arr(visibility) = false
BTW: The last line contains internal {}, which are supposed to make it into the string variables. And there is no intent to make this a dict.
This code works, but there are still some problems with it:
proc readArray {fileHandle arrayName} {
upvar $arrayName arr
set cl 0
while {! [eof $fileHandle]} {
incr cl ;# !
set line [gets $fileHandle]
if {$line eq {}} continue ;# !
puts $line
namespace eval ::__esg_priv $line ;# !
foreach varPath [info vars ::__esg_priv::*] {
set varName [string map { ::__esg_priv:: "" } $varPath]
puts "Setting arr($varName) -> [set $varPath]"
set arr($varName) [set $varPath]
}
namespace delete __esg_priv
}
puts "$cl number of lines read"
}
I've taken out a couple of lines that didn't seem necessary, and changed some lines a bit.
You don't need set cl [expr "$cl + 1"]: incr cl will do.
if [$line eq {}] continue will fail because the [...] is a command substitution. if {$line eq {}} continue (braces instead of brackets) does what you intend.
Unless you are accessing variables in another scope, you won't need uplevel. namespace eval ::__esg_priv $line will evaluate one line in the designated namespace.
I didn't change the following, but maybe you should:
set varName [string map { ::__esg_priv:: "" } $varPath] works as intended, but set varName [namespace tail $varPath] is cleaner.
Be aware that if there exists a global variable with the same name as one of the variables in your file, no namespace variable will be created; the global variable will be updated instead.
If you intend to use the value in the text variable as a dictionary, you need to remove either the \n or the braces.
According to your question title, you want to evaluate the file line by line. If that requirement can be lifted, your code could be simplified by reading the whole script in one operation and then evaluating it with a single namespace eval.
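For instance, a sketch of that simplification (readArrayWholeFile is a made-up name; the caveat about pre-existing global variables still applies):
proc readArrayWholeFile {fileHandle arrayName} {
    upvar 1 $arrayName arr
    # Evaluate the whole script at once inside the scratch namespace.
    namespace eval ::__esg_priv [read $fileHandle]
    foreach varPath [info vars ::__esg_priv::*] {
        set arr([namespace tail $varPath]) [set $varPath]
    }
    namespace delete ::__esg_priv
}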
ETA
This solution is a lot more robust in that it reads the script in a sandbox (always a good idea when writing code that will execute arbitrary external code) and redefines (within that sandbox) the set command to create members in your array instead of regular variables.
proc readArray {fileHandle arrayName} {
upvar 1 $arrayName arr
set int [interp create -safe]
$int alias set apply {{name value} {
uplevel 1 [list set arr($name) $value]
}}
$int eval [read $fileHandle]
interp delete $int
}
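A quick usage sketch, called from the global level with the myFile.tcl from the question:
set fh [open "myFile.tcl" r]
readArray $fh arr
close $fh
parray arr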
To make it even more safe against unexpected interaction with global variables etc, look at the interp package in the Tcllib. It lets you create an interpreter that is completely empty.
Documentation: apply, continue, eof, foreach, gets, if, incr, info, interp package, interp, list, namespace, proc, puts, set, string, uplevel, upvar, while

Tcl Output redirection from stdout to a file

I know this question has been asked several times here. I have looked at the responses, but couldn't figure out the way it worked. Can you please help me understand?
Here it goes:
I'm trying to source a Tcl script on the tclsh command line, and I want to redirect the output of that script into a file.
$ source my_script.tcl
The script my_script.tcl is something like this:
set output_file final_result
set OUT [open $output_file w]
proc calculate{} {
<commands>
return $result
}
foreach value [calculate] {
puts $output_file "$value"
}
This script still throws the output onto stdout, while I expected it to redirect the output into the file specified as "final_result".
Can you please help me understand where I went wrong ?
As you've described here, your program looks OK (apart from minor obvious issues). Your problem is that calculate must not write to stdout, but rather needs to return a value. Or a list of values in your case, actually (since you're stuffing them through foreach).
Thus, if you're doing:
proc calculate {} {
set result {}
puts [expr {1 + 2 * 3}]
return $result
}
Then you're going to get output written to stdout and an empty final_result (since it's an empty list). If you change that to:
proc calculate {} {
set result {}
lappend result [expr {1 + 2 * 3}]
return $result
}
then your code will do as expected. That is, change the puts to lappend result. This is what I recommend you do.
You can capture “stdout” by overriding puts. This is a hack!
rename puts _puts
proc puts {args} {
# Detect where we're running. IMPORTANT!
if {[info level] > 1 && [lindex [info level -1] 0] eq "calculate"} {
upvar 1 result r
lappend r [lindex $args end]
} else {
_puts {*}$args
}
return
}
I'm not convinced that the code to detect whether to capture the value is what it ought to be, but it works in informal testing. (It's also possible to capture stdout itself by a few tricks, but the least horrible — a stacked “transformation” that intercepts the channel — takes a lot more code… and the other alternatives are worse in terms of subtleties.)
Assuming that calculate doesn't write to stdout, and all the other good stuff pointed out and suggested by @DonalFellows has been done...
You need to change the puts in the main script to
puts $OUT "$value"
The script as posted writes to a channel named final_result which almost certainly doesn't exist. I'd expect an error from the puts statement inside the foreach loop.
Don't forget to close the output file, either by exiting from the tclsh interpreter, or preferably by executing
close $OUT
before you check for anything in it.
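Putting both fixes together, a corrected sketch of my_script.tcl (the expr line is only a stand-in for your real computation):
set output_file final_result
set OUT [open $output_file w]
proc calculate {} {
    set result {}
    # ... real work goes here, appending each value to $result ...
    lappend result [expr {1 + 2 * 3}]
    return $result
}
foreach value [calculate] {
    puts $OUT $value
}
close $OUT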

Determine if design element exists in library with script

I would like to determine whether or not a design element exists (has been compiled) in a given library in ModelSim (I'm using 10.3c PE) using Tcl, but I can't seem to find an appropriate function. Something like this theoretical code:
if {[design_object exists $lib.$entity]} {
...
While not ideal, I can check for certain custom libraries with:
if {[file exists $lib_path]} {
...
This uses a filesystem access, of course, and while ideally I would have liked to check for a logical name, this workaround is good enough for my limited purposes for now.
Unfortunately, there does not seem to be an exact equivalent for design entities, as ModelSim doesn't create individual files for compiled entities. I've considered parsing the library's _info file for the entity name, but that could be a relatively long operation. Is there a built-in way of doing this? Do ModelSim's Tcl extensions even provide access to logical names (outside of a simulation context)?
It looks like the vdir command is what you need to inspect the contents of a Modelsim library programmatically. It returns a multi-line string where each line has an object type followed by the name of the object. Entities can be extracted with the following:
proc get_vdir_entities {lib_name} {
set contents [split [vdir -lib $lib_name] "\n"]
set rval {}
foreach c $contents {
# Entity lines look like "ENTITY <name>".
if {[regexp {^ENTITY} $c]} {
lappend rval [lindex $c 1]
}
}
return $rval
}
set entities [get_vdir_entities "work"]
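That gives a direct way to answer the original question; a sketch using the $lib and $entity names from the question:
if {$entity in [get_vdir_entities $lib]} {
    puts "$entity has been compiled into $lib"
}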
Earlier solution
Looking at the _info file shows that all compiled entities are recorded as strings with an "E" prefixed to their name. A quick test in a shell produced a comprehensive list for me:
strings _info | sed -n -e "/^E/ p"
It looks like there is a consistent prefix before these strings of "62 20 31 0a" hex and they are terminated with 0a hex. You can do the extraction in pure Tcl with the following:
proc get_modelsim_entities {info_file} {
set fh [open $info_file r]
fconfigure $fh -translation binary
set fields [split [read $fh] "\n"]
close $fh
set rval {}
foreach f $fields {
if {[regexp -nocase {^E[a-z0-9_]+$} $f]} {
lappend rval [string range $f 1 end]
}
}
return $rval
}
set entities [get_modelsim_entities "path/to/your/_info"]

Way to list all procedures in a Tcl file

Is there any way to list all the procedures (procs) in myFile.tcl, using another Tcl file or from within the same file?
You can use [info procs] before and after sourcing the file in question and compare the results to determine which procs were added. For example:
proc diff {before after} {
set result [list]
foreach name $before {
set procs($name) 1
}
foreach name $after {
if { ![info exists procs($name)] } {
lappend result $name
}
}
return [lsort $result]
}
set __before [info procs]
source myFile.tcl
set __after [info procs]
puts "Added procs: [diff $__before $__after]"
One thing I like about this solution is that the diff procedure is really just a generic set differencing utility -- it's not specific to comparing lists of defined procedures.
The cheapest way is to just open the file and use regexp to pick out the names. It's not perfectly accurate, but it does a reasonably good job.
set f [open "sourcefile.tcl"]
set data [read $f]
close $f
foreach {dummy procName} [regexp -all -inline -line {^[\s:]*proc (\S+)} $data] {
puts "Found procedure $procName"
}
Does it deal with all cases? No. Does it deal with a useful subset? Yes. Is the subset large enough for you? Quite possibly.
Yes it is, although not that easy. The basic idea is to source the file in a modified slave interp that only executes some commands:
proc proc_handler {name arguments body} {
puts $name
}
set i [interp create -safe]
interp eval $i {proc unknown args {}}
interp alias $i proc {} proc_handler
interp invokehidden $i source yourfile.tcl
This approach will fail if the file requires other packages (package require will not work), relies on the result of some usually auto_load'ed commands, etc.
It also does not take namespaces into account (namespace eval ::foo {proc bar a {}} creates a proc with the name ::foo::bar).
For a more complex implementation you could look into auto.tcl's auto_mkindex, which has a similar goal.
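A rough sketch of that route (note that auto_mkindex writes a tclIndex file into the directory as a side effect, and the file names here are assumptions):
# Generate a tclIndex describing the procs defined in myFile.tcl.
auto_mkindex . myFile.tcl
# Read the generated index in a scratch interpreter so we don't disturb
# this interpreter's own auto_index array.
set child [interp create]
set procs [$child eval {
    set dir .
    source tclIndex
    array names auto_index
}]
interp delete $child
puts "Procs found: $procs"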
Here is a different approach:
Create a temporary namespace
Source (include) the script in question, then
Use the info procs command to get a list of procs
Delete the temporary namespace upon finish
Here is my script, list_procs.tcl:
#!/usr/bin/env tclsh
# Script to scan a Tcl script and list all the procs
proc listProcsFromFile {fileName} {
# Build the source command with [list] so the proc-local $fileName is
# substituted here; it would not be visible inside a braced namespace eval body.
namespace eval TempNamespace [list source $fileName]
set result [namespace eval TempNamespace {info procs}]
namespace delete TempNamespace
return $result
}
set fileName [lindex $::argv 0]
set procsList [listProcsFromFile $fileName]
puts "File $fileName contains the following procs: $procsList"
For example, if you have the following script, procs.tcl:
proc foo {a b c} {}
proc bar {a} {}
Then running the script will produce:
$ tclsh list_procs.tcl procs.tcl
File procs.tcl contains the following procs: foo bar