I have a shell environment variable PATH_TO_DIR and I want to check, in a Tcl script, whether the file $PATH_TO_DIR/target.txt exists.
My current solution is:
catch {exec /usr/local/bin/tcsh -c "echo $PATH_TO_DIR/target.txt" } result
if {![file exists $result]} {
puts "ERROR: the file $result is not exists"
}
I'm sure there is a more elegant way.
How can I solve it only with TCL commands?
set path_to_dir $::env(PATH_TO_DIR)
set file_name [file join $path_to_dir "target.txt"]
set native_file_name [file nativename $file_name]
if {![file exists $native_file_name]} {
    puts "ERROR: the file $native_file_name does not exist"
}
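If there is any chance that PATH_TO_DIR is not set at all, it may also be worth guarding the env lookup; a minimal sketch of that, using the same names as above:
if {![info exists ::env(PATH_TO_DIR)]} {
    puts "ERROR: PATH_TO_DIR is not set in the environment"
} else {
    set file_name [file join $::env(PATH_TO_DIR) "target.txt"]
    if {![file exists $file_name]} {
        puts "ERROR: the file $file_name does not exist"
    }
}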
I'm modifying the code below, but I have no idea how it works - enlightenment welcome. The issue is that there is a proc in it (cygwin_prefix) which is meant to create a command, by either
leaving a filename unmodified, or
prepending a string to the filename
The problem is that the proc returns nothing, but the script magically still works. How? Specifically, how does the line set command [cygwin_prefix filter_g] actually manage to correctly set command?
For background, the script simply execs filter_g < foo.txt > foo.txt.temp. However, historically (this no longer seems to be the case) this didn't work on Cygwin, so it instead ran /usr/bin/env tclsh filter_g < foo.txt > foo.txt.temp. The script as shown 'works' on both Linux (Tcl 8.5) and Cygwin (Tcl 8.6).
Thanks.
#!/usr/bin/env tclsh
proc cygwin_prefix { file } {
    global cygwin
    if {$cygwin} {
        set status [catch { set fpath [eval exec which $file] } result ]
        if { $status != 0 } {
            puts "which error: '$result'"
            exit 1
        }
        set file "/usr/bin/env tclsh $fpath"
    }
    set file
}
set cygwin 1
set filein foo.txt
set command [cygwin_prefix filter_g]
set command "$command < $filein > $filein.temp"
set status [catch { eval exec $command } result ]
if { $status != 0 } {
    puts "filter error: '$result'"
    exit 1
}
exit 0
The key to your question is two-fold.
If a procedure doesn't finish with return (or error, of course) the result of the procedure is the result of the last command executed in that procedure's body.
(Or the empty string, if no commands were executed. Doesn't apply in this case.)
This is useful for things like procedures that just wrap commands:
proc randomPick {list} {
    lindex $list [expr { int(rand() * [llength $list]) }]
}
Yes, you could add in return […] but it just adds clutter for something so short.
The set command, with one argument, reads the named variable and produces the value inside the var as its result.
A very long time ago (around 30 years now) this was how all variables were read. Fortunately for us, the $… syntax was added which is much more convenient in 99.99% of all cases. The only place left where it's sometimes sensible is with computed variable names, but most of the time there's a better option even there too.
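A small illustration of both forms (hypothetical variable names, purely for demonstration):
set color blue
puts $color          ;# the usual way to read a variable
puts [set color]     ;# the same read, via one-argument set
set varName color
puts [set $varName]  ;# computed variable name: reads whatever variable $varName names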
The form you see with set file at the end of a procedure instead of return $file had currency for a while because it produced slightly shorter bytecode. By one unreachable opcode. The difference in bytecode is gone now. There's also no performance difference, and never was (especially by comparison with the weight of exec which launches subprocesses and generally does a lot of system calls!)
It's not required to use eval for exec. Building up a command as a list will protect you from, for example, path items that contain a space. Here's a quick rewrite to demonstrate:
proc cygwin_prefix { file } {
    if {$::cygwin} {
        set status [catch { set fpath [exec which $file] } result]
        if { $status != 0 } {
            error "which error: '$result'"
        }
        set file [list /usr/bin/env tclsh $fpath]
    }
    return $file
}
set cygwin 1
set filein foo.txt
set command [cygwin_prefix filter_g]
lappend command "<" $filein ">" $filein.temp
set status [catch { exec {*}$command } result]
if { $status != 0 } {
    error "filter error: '$result'"
}
This uses {*} to explode the list into individual words to pass to exec.
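If the difference is not obvious, a tiny sketch (assuming an ordinary ls on the PATH):
set cmd [list ls -l /tmp]
exec {*}$cmd    ;# runs ls with two separate arguments: -l and /tmp
exec $cmd       ;# error: tries to run a program literally named "ls -l /tmp"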
I am executing a Python script and ensure that PYTHONPATH is properly set to refer to dependency modules. Within the Python code I call a Tcl script, which in turn calls another Python script like below:
if {[catch {exec {*}[auto_execok python] [file nativename [file join [file dirname [info script]] my.py ]] } result] == 0 } {
    puts "Executed successfully $result"
} else {
    puts "Error $result"
    return error
}
I can execute the Python script my.py successfully on its own, but when it is executed from the Tcl script it fails. I found that this is because PYTHONPATH is not being passed properly when calling the Python script, since my.py refers to dependency Python modules.
How can I pass the PYTHONPATH in exec command?
PYTHONPATH is an environment variable. Environment variables are manipulated through the env global array:
# You might be able to set this once for your whole script
set python_path {C:/Python/3.6/wherever C:/Users/me/Python/3.6/wherever}
# Transform a Tcl list into the right format that Python expects
set ::env(PYTHONPATH) [join [lmap p $python_path {file nativename $p}] \
        $::tcl_platform(pathSeparator)]
# Split this out for a shorter line length. ;-)
set my_py [file join [file dirname [info script]] my.py]
if {[catch {exec {*}[auto_execok python] [file nativename $my_py]} result] == 0 } {
    puts "Executed successfully $result"
} else {
    puts "Error $result"
    return error
}
In Tcl 8.5, you don't have lmap or the pathSeparator element of tcl_platform, so instead you would do something like this:
foreach p $python_path {
    if {[info exists ::env(PYTHONPATH)]} {
        # Assume Windows
        append ::env(PYTHONPATH) ";" [file nativename $p]
    } else {
        set ::env(PYTHONPATH) [file nativename $p]
    }
}
You could also hardcode the values if they're just one or two elements. Remember that backslashes (\) are significant to Tcl, so put the string in {…} if you're doing that.
set ::env(PYTHONPATH) {C:\Python\3.6\wherever;C:\Users\me\Python\3.6\wherever}
That's not particularly viable for anything redistributable… but works for one's own scripts.
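If you only want the change visible to that one call, you can save and restore the previous value around the exec; a sketch, where $newValue stands for whatever value you computed above:
set hadOld [info exists ::env(PYTHONPATH)]
if {$hadOld} { set oldPP $::env(PYTHONPATH) }
set ::env(PYTHONPATH) $newValue
# ... run the exec from the snippet above ...
if {$hadOld} { set ::env(PYTHONPATH) $oldPP } else { unset ::env(PYTHONPATH) }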
I am trying to put a large file onto a WebSphere MQ queue through a TCL script. Following is what I have at the moment:
exec sh -c "echo $msg | qmqsput targetQueue queueManager"
However I run into the following error:
Couldn't execute "sh": argument list too long
My message is very large and is larger than the max argument length. How can I tackle this problem?
There's no need to go via sh. The << redirection in exec feeds $msg to the command's standard input, so the message never appears on the command line (which is what hits the argument-length limit). Just try this:
exec qmqsput targetQueue queueManager << $msg
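If you want the script to keep going when qmqsput exits with an error, wrap it in catch as elsewhere in this thread (queue and manager names taken from the question):
if {[catch {exec qmqsput targetQueue queueManager << $msg} result]} {
    puts stderr "qmqsput failed: $result"
}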
You could write the message to a file first.
set fn tmsg[pid].txt
set fh [open $fn w]
puts $fh $msg
close $fh
exec sh -c "cat $fn | qmqsput ..."
catch { file delete $fn }
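Even with the temporary file, the sh/cat step isn't needed; exec can redirect the file straight to the command's standard input (a sketch, assuming qmqsput reads the message from stdin):
exec qmqsput targetQueue queueManager < $fn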
I am programming Tcl for Cisco TCL/IVR inside a voice gateway.
I have some text files with public holiday dates, which I need to open from the primary server. If an error occurs while opening a file, I want to fall back to a secondary backup server.
code:
if [catch {set fd [open $filename]} errmsg] {
    error "Unable to open the file: $filename on Primary Server \n $errmsg"
    set filename $httpsvr2$textfile
    if [catch {set fd [open $filename]} errmsg] {
        error "Unable to open the file: $filename on Secondary Server \n $errmsg"
        set Read 0
    } else {
        set Read 1
    }
} else {
    set Read 1
}
I was trying to use the Read flag; if it is 1, then I will search inside the file. If it is 0, it's because the file couldn't be opened on any of the servers, so I will just treat the call as if it's a working (non-holiday) day.
However, in my current code when the first attempt to open the file fails, it automatically stops executing the script.
How could I continue executing after the first error? Should I make a procedure and return values like -1? If so, how could I do that?
The error command raises an exception; if nothing catches it, the script stops right there, which is why execution never reaches your fallback code. You would probably be better off putsing the error message to stderr or a more suitable channel:
puts stderr "Unable to open the file: $filename on Primary Server \n $errmsg"
I would make it a procedure, as you were already thinking:
proc openFile { filename httpsvr2 textfile } {
    set fd -1
    foreach f [list $filename $httpsvr2$textfile] {
        if { ![file exists $f] } {
            puts stderr "File $f not on system"
        }
        if {[catch {set fd [open $f]} errmsg]} {
            puts stderr "Unable to open the file: $f \n $errmsg"
        } else {
            break
        }
    }
    return $fd
}
Now you can perform operations on the file handle returned in fd. Note: the script above opens and returns the first file it can, not both.
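A possible call site using the Read flag from your question (hypothetical; the variable names are the ones in your snippet):
set fd [openFile $filename $httpsvr2 $textfile]
if {$fd eq "-1"} {
    set Read 0
} else {
    set Read 1
    # ... search inside the file for the holiday date ...
    close $fd
}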
I am trying to execute a program that has some options and takes a txt file as input. So I have tried this:
set myExecutable [file join $::env(path_to_the_program) bin executable_name]
if { ![file exists $myExecutable] } {
    puts "error"
}
if { ![file executable $myExecutable] } {
    puts "error"
}
set arguments [list -option1 -option2]
set status [catch { exec $myExecutable $arguments $txtFileName } output]
if { $status != 0 } {
    puts "output = $output"
}
So it prints:
output = Usage: executable_name -option1 -option2 <txt_file_name>
child process exited abnormally
You didn't actually pass the options as separate arguments to your executable; the whole $arguments list went through as a single word, followed by $txtFileName. Try:
set status [catch {exec $myExecutable -option1 -option2 $txtFileName} output]
or if you prefer to keep the arguments in a list:
set status [catch {exec $myExecutable {*}$arguments $txtFileName} output]
where the {*} syntax causes the list to be expanded into individual words in place. In Tcl versions before 8.5, when {*} was added, you would use:
set status [catch {eval exec [list $myExecutable] $arguments [list $txtFileName]} output]
where the eval command unwraps the lists so that exec sees a single flat set of arguments. The extra [list] around $myExecutable protects its contents from being treated as a list during that eval pass.