How to check if the value of file handle is not null in tcl - tcl

I have this snippet in my script:
puts "Enter Filename:"
set file_name [gets stdin]
set fh [open $file_name r]
#Read from the file ....
close $fh
Now, this snippet asks the user for a file name, which is then used as an input file and read. But when no file named $file_name exists, it shows an error saying
illegal file character
How do I check that fh is not null (I don't think there is a concept of NULL in Tcl, it being an "everything is a string" language!), so that if an invalid file_name is given, I can print a message saying the file doesn't exist?

Short answer:
try {
    open $file_name
} on ok f {
    # do stuff with the open channel using the handle $f
} on error {} {
    error {file doesn't exist!}
}
This solution attempts to open the file inside an exception handler. If open succeeds, the handler runs the code you give it in the on ok clause. If open fails, the handler deals with that error by raising a new error with the message you wanted (note that open might actually fail for other reasons as well, such as insufficient permissions).
The try command is Tcl 8.6+, for Tcl 8.5 or earlier see the long answer.
Long answer:
Opening a file can fail for several reasons, including missing files or insufficient privileges. Some languages let the file-opening function return a special value indicating failure. Others, including Tcl, signal failure by not letting open return at all, instead raising an exception. In the simplest case, this means that a script can be written without caring about this eventuality:
set f [open nosuchf.ile]
# do stuff with the open channel using the handle $f
# run other code
This script will simply terminate with an error message while executing the open command.
The script doesn't have to terminate because of this. The exception can be intercepted and the code using the file handle be made to execute only if the open command was successful:
if {![catch {open nosuchf.ile} f]} {
    # do stuff with the open channel using the handle $f
}
# run other code
(The catch command is a less sophisticated exception handler used in Tcl 8.5 and earlier.)
This script will not terminate prematurely even if open fails, but it will not attempt to use $f either in that case. The "other code" will be run no matter what.
If one wants the "other code" to be aware of whether the open operation failed or succeeded, this construct can be used:
if {![catch {open nosuchf.ile} f]} {
    # do stuff with the open channel using the handle $f
    # run other code in the knowledge that open succeeded
} else {
    # run other code in the knowledge that open failed
}
# run code that doesn't care whether open succeeded or failed
or the state of the variable f can be examined:
catch {open nosuchf.ile} f
if {$f in [file channels $f]} {
    # do stuff with the open channel using the handle $f
    # run other code in the knowledge that open succeeded
} else {
    # run other code in the knowledge that open failed
}
# run code that doesn't care whether open succeeded or failed
(The in operator is in Tcl 8.5+; if you have an earlier version you will need to write the test in another manner. You shouldn't be using earlier versions anyway, since they're not supported.)
This code checks if the value of f is one of the open channels that the interpreter knows about (if it isn't, the value is probably an error message). This is not an elegant solution.
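For Tcl 8.4 and earlier, where the in operator is unavailable, the same membership test can be sketched with lsearch instead (same caveats about elegance apply):

```tcl
catch {open nosuchf.ile} f
if {[lsearch -exact [file channels] $f] >= 0} {
    # do stuff with the open channel using the handle $f
} else {
    # run other code in the knowledge that open failed
}
```

lsearch -exact returns the index of the first exact match, or -1 if the value is not in the list of open channels.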
Ensuring the channel is closed
This isn't really related to the question, but a good practice.
try {
    open nosuchf.ile
} on ok f {
    # do stuff with the open channel using the handle $f
    # run other code in the knowledge that open succeeded
} on error {} {
    # run other code in the knowledge that open failed
} finally {
    catch {chan close $f}
}
# run code that doesn't care whether open succeeded or failed
(The chan command was added in Tcl 8.5 to group several channel-related commands as subcommands. If you're using earlier versions of Tcl, you can just call close without the chan but you will have to roll your own replacement for try ... finally.)
The finally clause ensures that, whether or not the file was opened and whether or not any error occurred during the execution of the on ok or on error clauses, the channel is guaranteed to be non-existent (destroyed or never created) when we leave the try construct. The variable f will remain with an unusable value unless we unset it; since we don't know for sure that it exists, we need to prevent the unset operation from raising errors by using catch {unset f} or unset -nocomplain f. I usually don't bother: if I use the name f again I just set it to a fresh value.
Documentation: catch, chan, close, error, in operator, file, if, open, set, try, unset
Old answer:
(This answer has its heart in the right place but I'm not satisfied with it these months later. Since it was accepted and even marked as useful by three people I am loath to delete it, but the answer above is IMHO better.)
If you attempt to open a non-existing file and assign the channel identifier to a variable, an error is raised and the contents of the variable are unchanged. If the variable didn't exist, it won't be created by the set command. So while there is no null value, you can either 1) set the variable to a value you know isn't a channel identifier before opening the file:
set fh {} ;# no channel identifier is the empty string
catch {set fh [open foo.bar]}
if {$fh eq {}} {
    puts "Nope, file wasn't opened."
}
or 2) unset the variable and test it for existence afterwards (use catch to handle the error that is raised if the variable didn't exist):
catch {unset fh}
catch {set fh [open foo.bar]}
if {![info exists fh]} {
    puts "Nope, file wasn't opened."
}
If you want to test if a file exists, the easiest way is to use the file exists command:
file exists $file_name
if {![file exists $file_name]} {
    puts "No such file"
}
Documentation: catch, file, if, open, puts, set, unset

Related

TCL : how to wait a flow till the present flow completes?

I have a log which keeps on updating.
I am running a flow that generates a file. This flow runs in the background and updates the log, saying "[12:23:12:1] \m successfully completed (data_01)".
As soon as I see this message, I use this file for the next flow.
I created a popup saying "wait till the log says successfully completed", to avoid the script going to the next flow and getting aborted.
But the problem is that each and every time I need to check the log for that message and press OK in the popup.
Is there any way to capture the message from the updating log?
I tried
set flag 0
while {$flag == 0} {
    set fp [open "|tail code.log" r]
    set data [read $fp]
    close $fp
    set data [split $data]
    if {[regexp {.*successfully completed.*} $data]} {
        set line $data
        set flag 1
    } else {
        continue
    }
}
I will pass this $line to the popup variable so that instead of saying "wait until successfully completed" it will say "Successfully completed".
But this throws an error saying too many files are open, and it also doesn't wait.
There's a limit on the number of files that can be opened at once by a process, imposed by the OS. Usually, if you are getting close to that limit then you're doing something rather wrong!
So let's back up a little bit.
The simplest way to read a log file continuously is to open a pipe from the program tail with the -f option passed in, so it only reports things added to the file instead of reporting the end each time it is run. Like this:
set myPipeline [open "|tail -f code.log"]
You can then read from this pipeline and, as long as you don't close it, you will only ever read a line once. Exiting the Tcl process will close the pipe. You can either use a blocking gets to read each line, or a fileevent so that you get a callback when a line is available. This latter form is ideal for a GUI.
Blocking form
while {[gets $myPipeline line] >= 0} {
    if {[regexp {successfully completed \(([^()]+)\)} $line -> theFlowName]} {
        processFlow $theFlowName
    }
}
close $myPipeline
close $myPipeline
Callback form
Assuming that the pipeline is kept in blocking mode. Full non-blocking is a little more complex but follows a similar pattern.
fileevent $myPipeline readable [list GetOneLine $myPipeline]
proc GetOneLine {pipe} {
    if {[gets $pipe line] < 0} {
        # IMPORTANT! Close upon EOF to remove the callback!
        close $pipe
    } elseif {[regexp {successfully completed \(([^()]+)\)} $line -> theFlowName]} {
        processFlow $theFlowName
    }
}
Both of these forms call processFlow with the part of the line extracted from within the parentheses when it appears in the log. That's the part where it becomes not generic Tcl any more…
It appears that what you want to do is monitor a file and, without hanging your UI, wait for a particular line to be added to it. To do this you cannot use asynchronous I/O on the file, because in Tcl plain files are always readable. Instead you need to poll the file on a timer, which in Tcl means using the after command. Create a command that checks the time the file was last modified; if it has changed since you last checked, open the file and look for your specific data. If the data is present, set some state variable to allow your program to continue to the next step. If not, schedule another call to your check function using after and a suitable interval.
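A minimal sketch of that polling approach (the file name code.log, the 500 ms interval, and the match pattern are assumptions carried over from the examples above):

```tcl
# Poll code.log every 500 ms; stop once the target line appears.
set ::lastMtime 0

proc checkLog {} {
    set fname code.log
    if {[file exists $fname] && [file mtime $fname] > $::lastMtime} {
        set ::lastMtime [file mtime $fname]
        set f [open $fname]
        set contents [read $f]
        close $f
        if {[regexp {successfully completed \(([^()]+)\)} $contents -> theFlowName]} {
            set ::done $theFlowName
            return              ;# found it: do not reschedule
        }
    }
    after 500 checkLog          ;# not yet: poll again later
}

checkLog
vwait ::done    ;# in a Tk GUI the event loop already runs, so vwait is unnecessary
```

The key point is that after keeps the event loop free between checks, so a GUI stays responsive while waiting.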
You could use a pipe as you have above, but you should use asynchronous I/O to read data from the channel when it becomes available; that means using fileevent.

prevent call stack to be outputted

Is it possible to prevent the call stack from being output when an error occurs? For example, suppose:
set error [catch { [exec $interpName $tmpFileName] } err]
if { $error ne 0 } {
    puts "err = $err"  ;# <---- Here the call stack is also output
}
So output now looks like:
error: some error message
while executing
[stack trace]
Tcl automatically builds up the call stack in the global variable errorInfo (and, since 8.5, in the -errorinfo member of the interpreter result options dictionary) but it is up to the calling code to decide what to do with it. The default behavior of tclsh is to print it out; other Tcl-hosting environments can do different things (it's usually recommended to print it out as it helps hunt down bugs; on the other hand, some programs — specifically Eggdrop — don't and it's a cause of much trouble when debugging scripts).
You take control of this for yourself by using catch in the script that's getting the original error. The easiest way to do this is to put the real code in a procedure (e.g., called main by analogy with C and C++) and then to use a little bit of driver code around the outside:
if {[catch {eval main $argv} msg]} {
    puts "ERROR: $msg"
    # What you're not doing at this point is:
    # puts $errorInfo
    exit 1
} else {
    # Non-error case; adjust to taste
    puts "OK: $msg"
    exit 0
}
Note that in your code, this would go inside the script you write to $tmpFileName and not in the outer driver code that you showed (which is absolutely fine and needs no adjustment that I can think of).

In Tcl, what is the equivalent of "set -e" in bash?

Is there a convenient way to specify in a Tcl script to immediately exit in case any error happens? Anything similar to set -e in bash?
EDIT I'm using software that implements Tcl as its scripting language. If, for example, I run the package command parseSomeFile fname and the file fname doesn't exist, it reports the problem but script execution continues. Is there a way to stop the script there?
It's usually not needed; a command fails by throwing an error which makes the script exit with an informative message if not caught (well, depending on the host program: that's tclsh's behavior). Still, if you need to really exit immediately, you can hurry things along by putting a trace on the global variable that collects error traces:
trace add variable ::errorInfo write {puts stderr $::errorInfo;exit 1;list}
(The list at the end just traps the trace arguments so that they get ignored.)
Doing this is not recommended. Existing Tcl code, including all packages you might be using, assumes that it can catch errors and do something to handle them.
In Tcl, if you run into an error, the script exits immediately unless you catch the error. That means you don't need an equivalent of set -e.
Update
Ideally, parseSomeFile should have raised an error, but it looks like it does not. If you have control over it, fix it to raise an error:
proc parseSomeFile {filename} {
    if {![file exists $filename]} {
        return -code error "ERROR: $filename does not exist"
    }
    # Do the parsing
    return 1
}
# Demo 1: parse existing file
parseSomeFile foo
# Demo 2: parse non-existing file
parseSomeFile bar
The second option is to check for file existence before calling parseSomeFile.
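That second option can be sketched like this (parseSomeFile and the file name bar are placeholders taken from the demos above):

```tcl
set fname bar
if {[file exists $fname]} {
    parseSomeFile $fname
} else {
    # Report and stop, mimicking set -e for this one failure mode
    puts stderr "ERROR: $fname does not exist"
    exit 1
}
```

Note that this only guards against a missing file; the in-proc check above also catches callers you don't control.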

Tcl flush command returns error

I've gotten this error from Tcl flush command:
error flushing "stdout": I/O error
Any ideas why it can happen? Also, the man page doesn't say anything about flush returning errors. Can the same error come from puts?
And ultimately: what to do about it?
Thank you.
By default Tcl uses line buffering on stdout (more on this later). This means whatever you puts there gets buffered and is only output when the newline is seen in the buffer or when you flush the channel. So, yes, you can get the same error from puts directly, if a call to it is done on an unbuffered channel or if it managed to hit that "buffer full" condition so that the underlying medium is really accessed.
As to "I/O error", we do really need more details. Supposedly the stdout of your program has been redirected (or reopened), and the underlying media is inaccessible for some reason.
You could try to infer more details about the cause by inspecting the POSIX errno of the actual failing syscall — it's accessible via the global errorCode variable after a Tcl command involving such a syscall signalized error condition. So you could go like this:
set rc [catch {flush stdout} err]
if {$rc != 0} {
    global errorCode
    set fd [open error.log w]
    puts $fd $err\n\n$errorCode
    close $fd
}
(Since Tcl 8.5 you can directly ask catch to return all the relevant info in an options dictionary instead of inspecting magic variables; see the manual.)
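A sketch of the 8.5+ form, which captures that options dictionary in catch's third variable:

```tcl
if {[catch {flush stdout} err opts]} {
    # -errorcode is the machine-readable cause, e.g. {POSIX EPIPE {...}}
    puts stderr "flush failed: $err"
    puts stderr "errorcode: [dict get $opts -errorcode]"
}
```

The dictionary also carries -errorinfo (the stack trace) and -code, so no global variables need to be consulted.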
Providing more details on how you run your program or whether you reopen stdout is strongly suggested.
A note on buffering. Output buffering can be controlled using fconfigure (or chan configure since 8.5 onwards) on a channel by manipulating its -buffering option. Usually it's set to line on stdout but it can be set to full or none. When set to full, the -buffersize option can be used to control the buffer size explicitly.
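For instance (the channel and the buffer size here are only illustrative):

```tcl
# Fully buffer stdout with a 64 KB buffer; output is written only
# when the buffer fills or the channel is flushed explicitly.
chan configure stdout -buffering full -buffersize 65536
puts stdout "queued, not yet written"
chan flush stdout

# Or disable buffering entirely so every puts hits the OS immediately.
chan configure stdout -buffering none
```

With -buffering none, any I/O error surfaces directly from puts rather than from a later flush.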

how to check that file is closed

How can I check that file is closed?
For example:
set fh [open "some_test_file" "w"]
puts $fh "Something"
close $fh
Now I want to check that channel $fh is closed
The command:
file channels $fh
returns nothing, so I cannot use it in any condition.
You could also use something like:
proc is_open {chan} {expr {[catch {tell $chan}] == 0}}
If the close command did not return an error, then it was successful. The file channels command doesn't take an argument but just returns all the open channels so $channel in [file channels] would be a redundant test to ensure that you closed the channel. However, how about believing the non-error response of the close command?
I must correct myself - a bit of checking and it turns out the file channels command can take an optional pattern (a glob expression) of channels names to return. So the original example will work if the file is still open. You can test this in a tclsh interpreter using file channels std*. However, the close command will still return an error that can be caught if it fails to close the channel which would also allow you to potentially handle such errors (possibly retry later for some).
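Putting the two approaches together (is_open is the helper defined above):

```tcl
proc is_open {chan} {expr {[catch {tell $chan}] == 0}}

set fh [open some_test_file w]
puts [is_open $fh]                       ;# 1
puts [expr {$fh in [file channels $fh]}] ;# 1
close $fh
puts [is_open $fh]                       ;# 0: tell on a closed channel errors
puts [expr {$fh in [file channels $fh]}] ;# 0: pattern matches no open channel
```

Both tests agree, but the tell-based helper also works for channels whose names contain glob metacharacters, which would confuse the pattern form.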
Why not put the channel name into a variable, make the closing code unset that variable if [close] succeeded, and in the checking code just test that the variable does not exist (that is, is unset)?
Note that I'm following a more general practice found in systems programming: once you close an OS file handle, it becomes invalid and all accesses to it are invalid. So you use other means to signal that the handle is no longer associated with a file.