Why does the following code:
#!/usr/bin/env tclsh
package require cmdline;
set options {{d.arg "" "destination directory"}}
set usage ": $::argv0 \[options] filename ...\noptions:"
set params [::cmdline::getoptions ::argv $options $usage]
throw the following error upon execution of ./main.tcl -help?
main : ./main.tcl [options] filename ...
options:
-d value destination directory <>
-help Print this message
-? Print this message
while executing
"error [usage $optlist $usage]"
(procedure "::cmdline::getoptions" line 15)
invoked from within
"::cmdline::getoptions ::argv $options $usage"
invoked from within
"set params [::cmdline::getoptions ::argv $options $usage]"
(file "./main.tcl" line 8)
I expected it to display the usage information, but not to throw an error afterwards. Did I do something wrong?
From what I understand from the docs (emphasis mine):
The options -?, -help, and -- are implicitly understood. The first two abort option processing by throwing an error and force the generation of the usage message, whereas the last aborts option processing without an error, leaving all arguments coming after for regular processing, even if starting with a dash.
using -help or -? will always throw an error.
Further down in the docs you can see an example where try { ... } trap { ... } is used in conjunction with ::cmdline::getoptions, which is probably how you want to handle it:
try {
array set params [::cmdline::getoptions ::argv $options $usage]
} trap {CMDLINE USAGE} {msg o} {
# Trap the usage signal, print the message, and exit the application.
# Note: Other errors are not caught and passed through to higher levels!
puts $msg
exit 1
}
In my Tcl script I am trying to provide input arguments that will be parsed using the 'cmdline' package. I have defined the following in my script:
set run_options {
{input_1.arg 8 "first input argument"}
{input_2.arg 1 "second input argument"}
{msb "choose it if input data has msb first"}
{lsb "choose it if input data has lsb first"}
}
set my_usage ": \n tclsh my_src \[options] ...\noptions are:"
if {[catch {array set params [::cmdline::getoptions argv $run_options $my_usage]} msg]} {
puts "Error while parsing user options of $msg"
exit 1
}
Everything looks fine, except that while running my script I noticed that the 'cmdline' package defines a "help" option by default. When I run my script with the "-help" option, this is the output:
Error while parsing user options of my_src :
tclsh my_src [options] ...
options are:
-input_1 value first input argument <8>
-input_2 value second input argument <1>
-msb choose it if input data has msb first
-lsb choose it if input data has lsb first
-- Forcibly stop option processing
-help Print this message
-? Print this message
So, does anyone know how to use this automatically added "help" option without it producing an error message?
cmdline::getoptions doesn't differentiate between -help and an unknown option (an unrecognized option is treated as if it were -?). If you want different messages for the two cases, you need to use one of the lower-level functions directly in a loop, or something like the following enhanced version of cmdline::getoptions:
#!/usr/bin/env tclsh
package require try ;# In case you're still using tcl 8.5 for some reason
package require cmdline 1.5
proc better_getoptions {arglistVar optlist usage} {
upvar 1 $arglistVar argv
# Warning: Internal cmdline function
set opts [::cmdline::GetOptionDefaults $optlist result]
while {[set err [::cmdline::getopt argv $opts opt arg]]} {
if {$err < 0} {
return -code error -errorcode {CMDLINE ERROR} $arg
}
set result($opt) $arg
}
if {[info exists result(?)] || [info exists result(help)]} {
return -code error -errorcode {CMDLINE USAGE} \
[::cmdline::usage $optlist $usage]
}
return [array get result]
}
set run_options {
{input_1.arg 8 "first input argument"}
{input_2.arg 1 "second input argument"}
{msb "choose it if input data has msb first"}
{lsb "choose it if input data has lsb first"}
}
set my_usage ": \n tclsh my_src \[options] ...\noptions are:"
try {
array set params [better_getoptions argv $run_options $my_usage]
puts "All options valid"
} trap {CMDLINE USAGE} {msg} {
puts stderr $msg
exit 0
} trap {CMDLINE ERROR} {msg} {
puts stderr "Error: $msg"
exit 1
}
Example usage:
$ ./example.tcl --help
cmd :
tclsh my_src [options] ...
options are:
-input_1 value first input argument <8>
-input_2 value second input argument <1>
-msb choose it if input data has msb first
-lsb choose it if input data has lsb first
-- Forcibly stop option processing
-help Print this message
-? Print this message
$ ./example.tcl -foo
Error: Illegal option "-foo"
$ ./example.tcl -input_1
Error: Option "input_1" requires an argument
$ ./example.tcl -msb
All options valid
In my Tcl/Tk script, there is one step that removes a txt file.
I use:
exec rm file1.txt
But if the file does not exist, an error message comes up and blocks the rest of the script.
What I want to do is remove the file if it exists, and skip the error if it does not.
Is there a good way of doing this?
OK, I found the answer: file exists filename works well for this case.
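For instance, a minimal sketch of that approach (the file name is just a placeholder):
# only attempt the removal when the file is actually there
if {[file exists file1.txt]} {
    exec rm file1.txt
}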
You could use
file delete file1.txt
where trying to delete a non-existent file is not considered an error.
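A short sketch with hypothetical file names; names that don't exist are silently skipped, and -force is only needed to remove directories that still have contents:
file delete file1.txt file2.txt         ;# missing files are simply ignored
file delete -force old_results_dir      ;# -force removes a non-empty directory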
How to avoid having an error stop your program.
The "0th" solution is to use commands that don't raise errors. such as glob -nocomplain instead of just glob, or in this case file delete file1.txt as suggested by timrau.
In some cases it's impossible to prevent errors from being raised. In those cases you can choose from several strategies. Assume you need to call mycmd, and it might raise errors.
# Tcl 8.6
try mycmd on error {} {}
# Tcl 8.4 or later
catch mycmd
This invocation quietly intercepts the error and lets your program continue. This is perfectly acceptable if the error isn't important, e.g. when you attempt to discard a variable that might not exist (catch {unset myvar}).
You might want to take some action when an error is raised, such as reporting it to yourself (as an error message on stderr or in a message box, or in a log of some kind) or by dealing with the error somehow.
try mycmd on error msg {puts stderr "There was a problem: $msg"}
if {[catch mycmd msg]} {
puts stderr "There was a problem: $msg"
}
You might want to take some action only if there was no error:
try {
mycmd
} on ok res {
puts "mycmd returned $res"
} on error msg {
puts stderr "There was a problem: $msg"
}
if {[catch mycmd res]} {
puts stderr "There was a problem: $res"
} else {
puts "mycmd returned $res"
}
For instance, this invocation returns the contents of a file, or the empty string if the file doesn't exist. It makes sure that the channel is closed and that the variable holding the channel identifier is destroyed in either case:
set txt [try {
open $filename
} on ok f {
chan read $f
} on error msg {
puts stderr $msg
} finally {
catch {chan close $f}
catch {unset f}
}]
Documentation: catch, chan, file, glob, if, open, puts, set, try
Consider this code:
set status [catch {eval exec $Executable $options | grep "(ERROR_|WARNING)*" >@ stdout} errorMessage]
if { $status != 0 } {
return -code error ""
}
In case of errors in the child process, they are output on stdout. But even if there are no errors in the child process, the status value is still non-zero. How do I avoid this?
Also, is there a way to use fileutil::grep instead of the external grep?
In case of errors in the child process, they are output on stdout. But even if there are no errors in the child process, the status value is still non-zero. How do I avoid this?
There's no connection between writing something to any file descriptor (including the one connected to the standard error stream) and returning a non-zero exit code; these concepts are completely separate as far as the OS is concerned. A process is free to perform no I/O at all and return a non-zero exit code (a somewhat common case for Unix daemons, which log everything, including errors, through syslog), or to write something to its standard error stream and still exit with zero, which is common for programs that write their valuable data to stdout and their diagnostic messages to stderr.
So, first verify that your process writes nothing to its standard error stream and still exits with a non-zero exit code, using a plain shell:
$ that_process --its --command-line-options and arguments if any >/dev/null
$ echo $?
(the process should print nothing, and echo $? should print a non-zero number).
If that is the case, and you're sure the process itself does not think something is wrong, you'll have to work around it by using catch and processing the extended error information it returns: ignore the case of the process exiting with the known exit code and propagate every other error.
Basically:
set rc [catch {exec ...} out]
if {$rc != 0} {
global errorCode errorInfo
if {[lindex $errorCode 0] ne "CHILDSTATUS"} {
# The error has nothing to do with non-zero process exit code
# (for instance, the program wasn't found or the current user
# did not have the necessary permissions etc), so pass it up:
return -code $rc -errorcode $errorCode -errorinfo $errorInfo $out
}
set exitcode [lindex $errorCode 2]
if {$exitcode != $known_exit_code} {
# Unknown exit code, propagate the error:
return -code $rc -errorcode $errorCode -errorinfo $errorInfo $out
}
# OK, do nothing as it has been a known exit code...
}
CHILDSTATUS (and the errorCode global variable in general) is described in the tclvars manual page.
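As for the second part of the question: Tcllib's fileutil::grep works on files (or stdin) rather than on a pipeline, so one way to use it, sketched here with hypothetical variable and file names, is to capture the child's output into a file first:
package require fileutil
# Capture the child's stdout (and stderr, via 2>@1) into a temporary file.
# $Executable and $options are assumed to be set as in the question,
# with $options being a proper Tcl list.
set logfile child_output.log
catch {exec $Executable {*}$options > $logfile 2>@1}
# fileutil::grep takes a regular expression and a list of files, and
# returns the matching lines formatted as "file:line:text".
foreach match [::fileutil::grep {ERROR_|WARNING} [list $logfile]] {
    puts $match
}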
What is actually the difference between raising an exception in Tcl via return -code error ... and error ...? When would one be used instead of the other?
The error command produces an error right at the current point; it's great for the cases where you're throwing a problem due to a procedure's internal state.
The return -code error command makes the procedure it is placed in produce an error (as if the procedure was error); it's great for the case where there's a problem with the arguments passed to the procedure (i.e., the caller did something wrong).
The difference really comes when you look at the stack trace.
Here's a (contrived!) example:
proc getNumberFromFile {filename} {
if {![file readable $filename]} {
return -code error "could not read $filename"
}
set f [open $filename]
set content [read $f]
close $f
if {![regexp -- {-?\d+} $content number]} {
error "no number present in $filename"
}
return $number
}
catch {getNumberFromFile no.such.file}
puts $::errorInfo
#could not read no.such.file
# while executing
#"getNumberFromFile no.such.file"
catch {getNumberFromFile /dev/null}
puts $::errorInfo
#no number present in /dev/null
# while executing
#"error "no number present in $filename""
# (procedure "getNumberFromFile" line 9)
# invoked from within
#"getNumberFromFile /dev/null"
In my code I am using environment variables, but if an environment variable doesn't exist, I get the error message NAME_ENV_VAR: no such variable, and my script stops executing.
For example, in the line
myeval $env($File)
I receive an error:
can't read "env(NIKE_TECH_DIR)": no such variable
while executing
"myeval $env($File)"
(procedure "chooseRelevantFiles" line 39)
invoked from within
"chooseRelevantFiles $::GlobalVars::reqStage"
(file "/vobs/tavor/src/Scripts/ReproduceBug.tcl" line 575)
How can I avoid this error and go on to execute my script?
You could test with info exists and use a default if the environment variable is not set, e.g.:
if {[info exists env($File)]} {
set filename $env($File)
} else {
set filename /some/default/path
}
myeval $filename
Catch the error; then you can do something with it (e.g. log it for later, or use a fallback value) and proceed with your script. For example:
if {[catch {myeval $env($File)} result]} {
lappend log $result
}
#other stuff
To check for the existence of an element of an array such as the global env array, don't use [info exists $env(VAR)]: the $ substitutes the element's value, so the command itself raises a "no such variable" error when the element is missing.
Instead, you should use:
if { [ array names env VAR ] != "" } {
puts "\nVAR exists and its value is $env(VAR)\n"
}