Suppose I have the following glob command wrapped in try/trap/finally:
proc generateSubmissionFolder {cover_letter_resume submission_path} {
    set submission_parent [file dirname $submission_path]
    set latest_submission_folder [lindex [lsort [glob -directory $submission_parent -type d *]] end]
    set latest_submission_file [lindex [glob -directory $latest_submission_folder *[file extension $cover_letter_resume]] end]
    createSubmissionFolder $latest_submission_file $submission_path
}
proc createSubmissionFolder {source destination} {
    puts "Creating $destination folder."
    file mkdir $destination
    puts "Copying $source to $destination"
    file copy $source $destination
}
try {
    # I gathered user input and stored it in the variables $company_name and $position.
    set submission_path [file join $company_name $position $yearmonthday]
    if {[file exists [file dirname $submission_path]]} {
        generateSubmissionFolder $coverletterresume $submission_path
    } else {
        createSubmissionFolder $coverletterresume $submission_path
    }
} trap {Value Empty} {errormessage} {
    puts "$errormessage"
} finally {
    puts "$argv0 exiting."
}
If no folder is found, I would like to provide a human readable error message, but I'm unsure of what error to trap. According to the answer to my previous question:
Tcl doesn't have a pre-defined hierarchy of exceptions.
The only workaround I tried was to use the -nocomplain switch and afterward check whether latest_submission_folder was blank.
Is there a way to trap a FileNotFound or FolderNotFound error?
For trivial cases like your example, use an on error handler, not trap. Or use catch instead of try.
Example tclsh session:
% try { glob *.bar } on error {what} { puts "Ooops: $what" }
Ooops: no files matched glob pattern "*.bar"
% if {[catch { glob *.bar } result] == 1} { puts "Ooops: $result" }
Ooops: no files matched glob pattern "*.bar"
Or if you do want to use trap because you also want to handle a bunch of other possible specific errors from more complicated code, glob raises a TCL OPERATION GLOB NOMATCH on failure:
% try { glob *.bar } trap {TCL OPERATION GLOB NOMATCH} {msg} { puts "Ooops: $msg" }
Ooops: no files matched glob pattern "*.bar"
You can discover what to use with trap for any given command's particular error with something like:
% catch { glob *.bar } result errdict
1
% dict get $errdict -errorcode
TCL OPERATION GLOB NOMATCH
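Applied to your own code, a sketch of how it could look (reusing the variable names from your question; adapt as needed):
try {
    generateSubmissionFolder $coverletterresume $submission_path
} trap {TCL OPERATION GLOB NOMATCH} {msg} {
    # A human-readable message instead of the raw glob error
    puts "No existing submission folder was found under [file dirname $submission_path]: $msg"
}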
In this specific case, glob has an option that helps: -nocomplain. It turns off the error on no match — which was only ever really intended for interactive use — as a lot of use cases can cope with an empty returned list just fine. (It's the way it is for historical reasons, and is maintained that way so we don't break the large number of existing scripts that use it. As language warts go, it's not too terrible.)
set latest_submission_folder [glob -nocomplain -directory $submission_parent -type d *]
if {![llength $latest_submission_folder]} {
    # Make your own handling right here. Whatever you want.
    puts "I didn't find a directory inside $submission_parent"
} elseif {[llength $latest_submission_folder] > 1} {
    # Many subdirectories found. Is this an error given your single-thing var name?
} else {
    # Strip a layer of list; good idea in case directory name has a space in it!
    set latest_submission_folder [lindex $latest_submission_folder 0]
}
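If you still want your original "pick the latest" behaviour when several subdirectories exist, that middle branch could be filled in along these lines (a sketch reusing the lsort/end idea from your own proc):
# One possible way to fill in the "more than one" branch above
set latest_submission_folder [lindex [lsort $latest_submission_folder] end]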
Related
I'm modifying the code below, but I have no idea how it works - enlightenment welcome. The issue is that there is a proc in it (cygwin_prefix) which is meant to create a command, by either
leaving a filename unmodified, or
prepending a string to the filename
The problem is that the proc returns nothing, but the script magically still works. How? Specifically, how does the line set command [cygwin_prefix filter_g] actually manage to correctly set command?
For background, the script simply execs filter_g < foo.txt > foo.txt.temp. However, historically (this no longer seems to be the case) this didn't work on Cygwin, so it instead ran /usr/bin/env tclsh filter_g < foo.txt > foo.txt.temp. The script as shown 'works' on both Linux (Tcl 8.5) and Cygwin (Tcl 8.6).
Thanks.
#!/usr/bin/env tclsh
proc cygwin_prefix { file } {
    global cygwin
    if {$cygwin} {
        set status [catch { set fpath [eval exec which $file] } result]
        if { $status != 0 } {
            puts "which error: '$result'"
            exit 1
        }
        set file "/usr/bin/env tclsh $fpath"
    }
    set file
}
set cygwin 1
set filein foo.txt
set command [cygwin_prefix filter_g]
set command "$command < $filein > $filein.temp"
set status [catch { eval exec $command } result]
if { $status != 0 } {
    puts "filter error: '$result'"
    exit 1
}
exit 0
The key to your question is two-fold.
If a procedure doesn't finish with return (or error, of course) the result of the procedure is the result of the last command executed in that procedure's body.
(Or the empty string, if no commands were executed. Doesn't apply in this case.)
This is useful for things like procedures that just wrap commands:
proc randomPick {list} {
    lindex $list [expr { int(rand() * [llength $list]) }]
}
Yes, you could add in return […] but it just adds clutter for something so short.
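A quick interactive check (which element you get will of course vary from run to run):
% randomPick {red green blue}
green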
The set command, with one argument, reads the named variable and produces the value inside the var as its result.
A very long time ago (around 30 years now) this was how all variables were read. Fortunately for us, the $… syntax was added which is much more convenient in 99.99% of all cases. The only place left where it's sometimes sensible is with computed variable names, but most of the time there's a better option even there too.
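For instance, the one-argument form of set is one way to read a variable whose name is computed at runtime (a small illustrative sketch; most of the time upvar or an array is nicer):
set colour1 red
set colour2 blue
set i 1
puts [set colour$i]   ;# prints "red"; plain $colour$i would not parse the way you want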
The form you see with set file at the end of a procedure instead of return $file had currency for a while because it produced slightly shorter bytecode. By one unreachable opcode. The difference in bytecode is gone now. There's also no performance difference, and never was (especially by comparison with the weight of exec which launches subprocesses and generally does a lot of system calls!)
It's not required to use eval for exec. Building up a command as a list will protect you from, for example, path items that contain a space. Here's a quick rewrite to demonstrate:
proc cygwin_prefix { file } {
    if {$::cygwin} {
        set status [catch { set fpath [exec which $file] } result]
        if { $status != 0 } {
            error "which error: '$result'"
        }
        set file [list /usr/bin/env tclsh $fpath]
    }
    return $file
}
set cygwin 1
set filein foo.txt
set command [cygwin_prefix filter_g]
lappend command "<" $filein ">" $filein.temp
set status [catch { exec {*}$command } result]
if { $status != 0 } {
    error "filter error: '$result'"
}
This uses {*} to explode the list into individual words to pass to exec.
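As a small illustration of what the expansion does (the echo command here is just a stand-in):
set cmd [list echo "hello world" done]
exec {*}$cmd   ;# runs echo with two arguments: "hello world" and "done"
# exec $cmd    ;# would try to run a single program whose name is the whole list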
I am executing a Python script and have made sure that PYTHONPATH is set properly so that its dependency modules can be found. Within that Python code I call a Tcl script, which in turn calls another Python script like below:
if {[catch {exec {*}[auto_execok python] [file nativename [file join [file dirname [info script]] my.py]]} result] == 0} {
    puts "Executed successfully $result"
} else {
    puts "Error $result"
    return error
}
I am able to execute the Python script my.py successfully on its own, but when it is executed from the Tcl script it fails. I have found that this is because PYTHONPATH is not being passed properly when the Python script is called, since my.py refers to dependency Python modules.
How can I pass the PYTHONPATH in exec command?
PYTHONPATH is an environment variable. Environment variables are manipulated through the global env array:
# You might be able to set this once for your whole script
set python_path {C:/Python/3.6/wherever C:/Users/me/Python/3.6/wherever}
# Transform a Tcl list into the right format that Python expects
set ::env(PYTHONPATH) [join [lmap p $python_path {file nativename $p}] \
        $::tcl_platform(pathSeparator)]
# Split this out for a shorter line length. ;-)
set my_py [file join [file dirname [info script]] my.py]
if {[catch {exec {*}[auto_execok python] [file nativename $my_py]} result] == 0} {
    puts "Executed successfully $result"
} else {
    puts "Error $result"
    return error
}
In Tcl 8.5, you don't have lmap or the pathSeparator element of tcl_platform and instead would do something like this:
foreach p $python_path {
    if {[info exists ::env(PYTHONPATH)]} {
        # Assume Windows
        append ::env(PYTHONPATH) ";" [file nativename $p]
    } else {
        set ::env(PYTHONPATH) [file nativename $p]
    }
}
You could also hardcode the values if they're just one or two elements. Remember that backslashes (\) are significant to Tcl, so put the string in {…} if you're doing that.
set ::env(PYTHONPATH) {C:\Python\3.6\wherever;C:\Users\me\Python\3.6\wherever}
That's not particularly viable for anything redistributable… but works for one's own scripts.
I have a variable, let's say xx, containing a list whose elements at index 0 and index 1 are themselves lists of values. I want to modify a script (not mine) which already defines a procedure, pptable, i.e.,
proc pptable {l1 l2} {
    foreach i1 $l1 i2 $l2 {
        puts " [format %6.2f $i1]\t[format %6.2f $i2]"
    }
}
so that it displays the output into two columns using
pptable [lindex $xx 1] [lindex $xx 0]
However, I want to write the output directly to a file. Could you tell me how I can send the data to a file instead to the display?
One of the neatest ways of doing this is to stack on a channel transform that redirects stdout to where you want it to go. This works even if the write to stdout happens from C code or in a different thread as it plugs into the channel machinery. The code is a little bit long (and requires Tcl 8.6) but is reliable and actually mostly very simple.
package require Tcl 8.6; # *REQUIRED* for [chan push] and [chan pop]
proc RedirectorCallback {targetHandle op args} {
    # The switch/lassign pattern is the simplest way of doing this in one procedure
    switch $op {
        initialize {
            lassign $args handle mode
            # Sanity check
            if {$mode ne "write"} {
                close $targetHandle
                error "this is just a write transform"
            }
            # List of supported subcommands
            return {initialize finalize write}
        }
        finalize {
            lassign $args handle
            # All we need to do here is close the target file handle
            close $targetHandle
        }
        write {
            lassign $args handle buffer
            # Write the data to the *real* destination; this does the redirect
            puts -nonewline $targetHandle $buffer
            # Stop the data going to the *true* stdout by returning the empty string
            return ""
            # If we returned the data instead, this would do a 'tee'
        }
        default {
            error "unsupported subcommand"
        }
    }
}
# Here's a wrapper to make the transform easy to use
proc redirectStdout {file script} {
    # Stack the transform onto stdout with the file handle to write to
    # (which is going to be $targetHandle in [RedirectorCallback])
    chan push stdout [list RedirectorCallback [open $file "wb"]]
    # Run the script and *definitely* pop the transform after it finishes
    try {
        uplevel 1 $script
    } finally {
        chan pop stdout
    }
}
How would we actually use this? It's really very easy in practice:
# Exactly the code you started with
proc pptable {l1 l2} {
    foreach i1 $l1 i2 $l2 {
        puts " [format %6.2f $i1]\t[format %6.2f $i2]"
    }
}
# Demonstrate that stdout is working as normal
puts "before"
# Our wrapped call that we're capturing the output from; pick your own filename!
redirectStdout "foo.txt" {
    pptable {1.2 1.3 1.4} {6.9 6.8 6.7}
}
# Demonstrate that stdout is working as normal again
puts "after"
When I run that code, I get this:
bash$ tclsh8.6 stdout-redirect-example.tcl
before
after
bash$ cat foo.txt
1.20 6.90
1.30 6.80
1.40 6.70
I believe that's precisely what you are looking for.
You can do this with less code if you use Tcllib and TclOO to help deal with the machinery:
package require Tcl 8.6
package require tcl::transform::core
oo::class create WriteRedirector {
    superclass tcl::transform::core
    variable targetHandle
    constructor {targetFile} {
        set targetHandle [open $targetFile "wb"]
    }
    destructor {
        close $targetHandle
    }
    method write {handle buffer} {
        puts -nonewline $targetHandle $buffer
        return ""
    }
    # This is the wrapper, as a class method
    self method redirectDuring {channel targetFile script} {
        chan push $channel [my new $targetFile]
        try {
            uplevel 1 $script
        } finally {
            chan pop $channel
        }
    }
}
Usage example:
proc pptable {l1 l2} {
    foreach i1 $l1 i2 $l2 {
        puts " [format %6.2f $i1]\t[format %6.2f $i2]"
    }
}
puts "before"
WriteRedirector redirectDuring stdout "foo.txt" {
    pptable {1.2 1.3 1.4 1.5} {6.9 6.8 6.7 6.6}
}
puts "after"
I assume you don't want to, or can't, modify the existing script and its pptable proc, correct?
If so, there are different options, depending on your exact situation:
Redirect stdout: tclsh yourscript.tcl > your.out
Redefine puts (for a clearly defined scope):
rename ::puts ::puts.orig
proc puts args {
    # Open in append mode so each call adds to the file rather than truncating it,
    # and write the last argument (the message itself)
    set fh [open your.out a]
    ::puts.orig $fh [lindex $args end]
    close $fh
}
# run pptable, source the script
This theme has been covered before, e.g., tcl stop all output going to stdout channel?
Rewire Tcl's stdout channel (not necessarily recommended): when a standard channel is closed, the next channel opened takes its place, so after these two lines a plain puts writes to your.out:
close stdout
open your.out w
# run pptable, source the script
This has also been elaborated on before, e.g. Tracing stdout and stderr in Tcl
I have a Tcl utility that makes it easy to ensure a snippet of code is run when control flow leaves the current scope (the proc). It crashes in Tcl 8.6.6, so I'm wondering if there is a "better" way to implement the functionality in Tcl 8.6.
An example usage is:
proc test {file} {
set fh [open $file]
::Util::Defer [list close $fh]
# ... do a bunch of stuff
# and even if we hit an error
# [close $fh] will be evaluated as we return
# from the proc
}
It's worked great in Tcl 8.4, and I use it all over my code.
As I'm still coming up to speed on all the functionality available in Tcl 8.6, I'm asking how should the ::Util::Defer proc be written to best take advantage of Tcl 8.6?
Here is the 8.4 implementation:
namespace eval ::Util {}
proc ::Util::Defer_impl {cmd args} {
    uplevel 1 $cmd
}
proc ::Util::Defer {cmd} {
    set vname _u_defer_var
    # look for a unique variable name
    while {[uplevel 1 [list info vars $vname]] != ""} {
        set vname ${vname}_
    }
    uplevel 1 [list set $vname $cmd]
    # when the variable is unset, trigger a call to the command
    uplevel 1 [list trace add variable $vname unset [list ::Util::Defer_impl $cmd]]
    # return a chunk of code enabling the user to cancel this if desired
    return [list variable $vname unset [list ::Util::Defer_impl $cmd]]
}
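For reference, cancelling a deferred command later in the same proc looks something like this (untested sketch; on 8.4 the last line would be spelled eval trace remove $token):
proc test {file} {
    set fh [open $file]
    set token [::Util::Defer [list close $fh]]
    # ... do stuff ...
    # if we decide we want to keep $fh open past the end of the proc after all:
    trace remove {*}$token
}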
Edited to add:
I appreciate the answers. To be honest, I already have other syntactic sugar for a file handle, this:
proc test {file} {
    set fh [::Util::LocalFileHandle $file]
    # do stuff
}
I was just hoping more for a generic solution to the ::Util::Defer, because I occasionally have two or three uses (at different locations) in the same proc. Yes, I'm leaving out the error handling for when the file doesn't exist or isn't readable.
Note: I have reported the bug to ActiveState and filed a bug at core.tcl.tk.
Edited to add buggy code: This is the Tcl code that causes a crash for me, it is slightly pared down to the essence (as opposed to being the full-blown ::Util::Defer).
# ---------------begin script-------------------
package require Itcl
proc ::try_uplevel {} {
    return [uplevel 1 [list ::info vars _u_defer_var]]
}
itcl::class ::test_class {
    constructor {} {}
    public proc test_via_proc {} {
        ::try_uplevel
    }
}
::test_class::test_via_proc
# ---------------end script-------------------
The pattern you describe is a supported one; it shouldn't crash (and indeed I can't reproduce the crash with 8.6.3 or the tip of the 8.6 support branch). The only problem it has is that if you have an error during the close (or any other deferred script) it won't report it, as you can see from this snippet (% is prompt):
% apply {{} {
    ::Util::Defer [list error boo]
    puts hi
}}
hi
%
This is part of why I went to quite a bit of effort to provide a try command in 8.6. With that, you can do this:
proc test {filename} {
    set f [open $filename]
    try {
        # Do stuff with $f
    } finally {
        close $f
    }
}
It also takes care of tricky things like stitching errors thrown inside the body and the finally clause together (the body exception info is in the -during option of the finally clause's error exception info) so that if both places error you can find out about both.
% catch {
    try {
        error a
    } finally {
        error b
    }
} x y
1
% puts $x
b
% puts $y
-errorstack {INNER {returnImm b {}}} -errorcode NONE -errorinfo {b
while executing
"error b"} -errorline 5 -during {-code 1 -level 0 -errorstack {INNER {returnImm a {}}} -errorcode NONE -errorinfo {a
while executing
"error a"} -errorline 3} -code 1 -level 0
Personally, I'd be more inclined to write this:
proc withreadfile {varName filename body} {
    upvar 1 $varName f
    set f [open $filename]
    try {
        return [uplevel 1 $body]
    } finally {
        close $f
    }
}
proc test {file} {
    withreadfile fh $file {
        # Do stuff with $fh
    }
}
Your mileage may vary.
Untested code (this exact snippet, I've used this pattern many times):
proc test file {
    try {
        open $file
    } on ok fh {
        # do stuff with fh
        # more stuff
    } finally {
        catch {close $fh}
    }
}
should be about the same. Regardless of whether you handle errors with the try structure or not, (or whether you get errors or not) the code in the finally clause is run when it ends. If you want to be able to cancel the action, use a simple if inside the clause.
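For example, a minimal sketch of a cancellable cleanup (the keepOpen flag is hypothetical, just to show the shape):
set keepOpen 0
try {
    set fh [open $file]
    # ... somewhere in here you might set keepOpen 1 to hand $fh to a caller
} finally {
    if {!$keepOpen && [info exists fh]} {
        close $fh
    }
}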
Edit
In case you want to see any errors generated when the channel is closed, it's a bad idea to just wrap the close in a catch (even though the catch is needed in case the file couldn't be opened and the channel-id variable was never created). Alternatives include:
Checking for existence: if {[info exists fh]} {close $fh}
Propagate the closing error: using the result and options variable name arguments to catch.
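A sketch of that second option (re-raising whatever close reports, while still coping with $fh never having been created):
if {[info exists fh] && [catch {close $fh} msg opts]} {
    # Rethrow the close error with its original options dictionary
    return -options $opts $msg
}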
Over the weekend this heavyweight solution came to mind. It leverages the itcl::local functionality to achieve the same effect. It does depend on Itcl - but since the problem is an interaction with Itcl, that seems a reasonable solution, even though it is not purely Tcl.
itcl::class Defer_impl {
    constructor {cmd} {} {
        set _to_eval $cmd
    }
    destructor {
        uplevel 1 $_to_eval
    }
    private variable _to_eval {}
}
proc ::Util::Defer {cmd} {
    uplevel 1 [list itcl::local ::Defer_impl #auto $cmd]
}
I know this question has been asked several times here. I have looked at the responses, but couldn't figure out how it works. Can you please help me understand?
Here it goes:
I'm trying to source a tcl script on the tclsh command line, and I want to redirect the output of that script into a file.
$ source my_script.tcl
The script my_script.tcl is something like this:
set output_file final_result
set OUT [open $output_file w]
proc calculate{} {
    <commands>
    return $result
}
foreach value [calculate] {
    puts $output_file "$value"
}
This script still writes its output to stdout, while I expected it to redirect the output into the file specified as "final_result".
Can you please help me understand where I went wrong ?
As you've described here, your program looks OK (apart from minor obvious issues). Your problem is that calculate must not write to stdout, but rather needs to return a value. Or list of values in your case, actually (since you're stuffing them through foreach).
Thus, if you're doing:
proc calculate {} {
    set result {}
    puts [expr {1 + 2 * 3}]
    return $result
}
Then you're going to get the output written to stdout and an empty final_result (since result is an empty list). If you change that to:
proc calculate {} {
    set result {}
    lappend result [expr {1 + 2 * 3}]
    return $result
}
then your code will do as expected. That is, change the puts to lappend result. This is what I recommend you do.
You can capture “stdout” by overriding puts. This is a hack!
rename puts _puts
proc puts {args} {
    # Detect where we're running. IMPORTANT!
    if {[info level] > 1 && [lindex [info level -1] 0] eq "calculate"} {
        upvar 1 result r
        lappend r [lindex $args end]
    } else {
        _puts {*}$args
    }
    return
}
I'm not convinced that the code to detect whether to capture the value is what it ought to be, but it works in informal testing. (It's also possible to capture stdout itself by a few tricks, but the least horrible — a stacked “transformation” that intercepts the channel — takes a lot more code… and the other alternatives are worse in terms of subtleties.)
Assuming that calculate doesn't write to stdout, and all the other good stuff pointed out and suggested by @DonalFellows has been done...
You need to change the puts in the main script to
puts $OUT "$value"
The script as posted writes to a channel named final_result which almost certainly doesn't exist. I'd expect an error from the puts statement inside the foreach loop.
Don't forget to close the output file, either by exiting from the tclsh interpreter or preferably by executing
close $OUT
before you check for anything in it.
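Putting the pieces together, a corrected version of the script might look like this (an illustrative sketch; the lappend stands in for your real calculation):
set output_file final_result
set OUT [open $output_file w]
proc calculate {} {
    set result {}
    lappend result [expr {1 + 2 * 3}]
    return $result
}
foreach value [calculate] {
    puts $OUT $value
}
close $OUT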