How to close a file opened via file readable in Tcl

When the input file is corrupt, my code holds the input file open and I am unable to delete it. To delete it, I have to close the command-line session that ran my code so the file is closed automatically; only then can I delete it.
But what is the command to close a file opened via file readable?

I'm not sure if this is the best way to do it, but I think you could use something like this:
proc close_all_files {} {
    foreach channel [file channels "file*"] {
        close $channel
    }
}
Then call close_all_files when you need to close files that have been previously opened.
Warning: this will close absolutely all files opened within the script. There are not many options if you don't know the file identifier that was created by file readable. One approach is to modify that proc so it records the identifier in a list accessible outside the proc, the easiest example being a global list:
proc file'readable name {
    global filesIds
    set rc [catch {open $name} fp]
    # remember anything that looks like a channel handle
    if {[string match "file*" $fp]} {
        lappend filesIds $fp
    }
    if {$rc==0} {close $fp}
    expr {$rc==0}
}
And then, if you know the order in which you called file readable, you can pick the handle you need to close.
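For example, a sketch of a cleanup helper (close_tracked_files is a made-up name; it assumes the filesIds global list populated above and Tcl 8.5+ for the in operator):
proc close_tracked_files {} {
    global filesIds
    if {![info exists filesIds]} {return}
    # close any remembered handle that is still an open channel
    foreach fp $filesIds {
        if {$fp in [file channels]} {
            close $fp
        }
    }
    set filesIds {}
}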

Related

Can we skip a specific command while Tcl is running source?

E.g. get_abc and get_xyz are custom commands that generate a collection, set_pqr is a custom command that receives a collection, and the following is the Tcl file:
#Tcl file start
get_abc
get_xyz
set_pqr -object [get_abc]
#Tcl file end
Now the requirement is that we need to skip the set_pqr command, and this Tcl file is big and read-only, so we can't change it.
We added handling in the set_pqr command callback to skip its processing, but the get_abc command on the same line still gets processed, even though its result is discarded and not needed once it reaches set_pqr. We also can't skip get_abc in the software because it is valid and can be used in other places.
Does Tcl provide a capability to skip the full set_pqr line?
The only way to make Tcl do what you want is to not use the standard source to load the file. Let's write our own version instead:
proc skippingSource {filename} {
    set f [open $filename]
    set code [read $f]
    close $f
    # Comment out the offending lines; this is a cheap hack BTW
    set code [regsub -all -line {^set_pqr } $code "#set_pqr "]
    # Override [info script] until this procedure returns
    info script $filename
    uplevel 1 $code
}
Now we just need to use skippingSource instead of source. We can of course call it directly, which is the easiest method, or we can substitute it in for source:
# Keep the original in case we want to put it back
rename source originalSource
rename skippingSource source
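If you later want the standard behaviour back, the renames can simply be reversed:
# put the standard source back by undoing the renames above
rename source skippingSource
rename originalSource source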
You can redefine set_pqr as a proc that does nothing:
# Back up original set_pqr
rename set_pqr set_pqr_original
# Make new proc that does nothing
proc set_pqr {args} {}
Now you can source the file and each set_pqr command will do nothing (and accepts an arbitrary number of arguments).
When you need the original command back again:
rename set_pqr_original set_pqr
This whole thing could be wrapped up in one proc too:
proc source_with_skip {filename {skip_commands {}}} {
    foreach command $skip_commands {
        # back up the original and install a do-nothing stub
        rename $command ${command}_original
        proc $command {args} {}
    }
    source $filename
    foreach command $skip_commands {
        # drop the stub and restore the original
        rename $command {}
        rename ${command}_original $command
    }
}
% source_with_skip my_file.tcl set_pqr
Note that the above may not be sufficient if the procs are in different namespaces than the current namespace.
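In that case you would need to rename the fully-qualified names; for example (a sketch, assuming a hypothetical ::mytool namespace):
# back up and stub a command that lives in another namespace
rename ::mytool::set_pqr ::mytool::set_pqr_original
proc ::mytool::set_pqr {args} {}
# ... source the file here ...
rename ::mytool::set_pqr {}
rename ::mytool::set_pqr_original ::mytool::set_pqr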

Writing multiple lines to a file in TCL

I'm looking to modify gpsfeed+ to add in a section which writes the NAV string out to a text file while the simulator is running. The tool is written in tcl and I'm at a loss as to what I need to do. What I have so far is:
if {$prefs(udp) & $::udpOn} {
    # opens file to write strings to
    set fp [open "input_NAV.txt" w+]
    # one sentence per udp packet
    foreach line [split $::out \n] {
        puts $fp $line
    }
    close $fp
}
Right now, if UDP broadcast is switched on, I want to take each NAV string broadcast over UDP and write it to a file. But the code above only writes one of the strings and then overwrites it. I've been trying to add in a \n, but I've not had any joy.
I was using the wrong mode for opening the file:
w+ Open the file for reading and writing. Truncate it if it exists. If it does not exist, create a new file.
I should have been using either of the following:
a Open the file for writing only. If the file does not exist, create a new empty file. Set the file pointer to the end of the file prior to each write.
a+ Open the file for reading and writing. If the file does not exist, create a new empty file. Set the initial access position to the end of the file.
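A minimal sketch of the corrected snippet, keeping the structure from the question and only changing the open mode (file name and variables as in the question):
if {$prefs(udp) & $::udpOn} {
    # append to the file instead of truncating it on every packet
    set fp [open "input_NAV.txt" a]
    foreach line [split $::out \n] {
        puts $fp $line
    }
    close $fp
}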
This would be a comment, but formatting.
This code:
foreach line [split $::out \n] {
    puts $fp $line
}
Is equivalent to:
puts $fp $::out

How to check if the value of file handle is not null in tcl

I have this snippet in my script:
puts "Enter Filename:"
set file_name [gets stdin]
set fh [open $file_name r]
#Read from the file ....
close $fh
Now, this snippet asks the user for a file name, which is then set as an input file and read from. But when a file with the name $file_name doesn't exist, it shows an error saying
illegal file character
How do I check if fh is not null (I don't think there is a concept of NULL in Tcl, it being an "everything is a string" language!), so that if an invalid file_name is given, I can print that the file doesn't exist?
Short answer:
try {
    open $file_name
} on ok f {
    # do stuff with the open channel using the handle $f
} on error {} {
    error {file doesn't exist!}
}
This solution attempts to open the file inside an exception handler. If open is successful, the handler runs the code you give it inside the on ok clause. If open failed, the handler deals with that error by raising a new error with the message you wanted (note that open might actually fail for other reasons as well).
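If you want to react specifically to a missing file and let other failures (such as permission problems) propagate, you can trap on the POSIX error code instead; a sketch, assuming the missing file is reported as ENOENT:
try {
    open $file_name
} trap {POSIX ENOENT} {} {
    error {file doesn't exist!}
} on ok f {
    # do stuff with the open channel using the handle $f
}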
The try command is Tcl 8.6+, for Tcl 8.5 or earlier see the long answer.
Long answer:
Opening a file can fail for several reasons, including missing files or insufficient privileges. Some languages let the file-opening function return a special value indicating failure. Others, including Tcl, signal failure by not letting open return at all but instead raising an exception. In the simplest case, this means that a script can be written without caring about this eventuality:
set f [open nosuchf.ile]
# do stuff with the open channel using the handle $f
# run other code
This script will simply terminate with an error message while executing the open command.
The script doesn't have to terminate because of this. The exception can be intercepted and the code using the file handle be made to execute only if the open command was successful:
if {![catch {open nosuchf.ile} f]} {
    # do stuff with the open channel using the handle $f
}
# run other code
(The catch command is a less sophisticated exception handler used in Tcl 8.5 and earlier.)
This script will not terminate prematurely even if open fails, but it will not attempt to use $f either in that case. The "other code" will be run no matter what.
If one wants the "other code" to be aware of whether the open operation failed or succeeded, this construct can be used:
if {![catch {open nosuchf.ile} f]} {
    # do stuff with the open channel using the handle $f
    # run other code in the knowledge that open succeeded
} else {
    # run other code in the knowledge that open failed
}
# run code that doesn't care whether open succeeded or failed
or the state of the variable f can be examined:
catch {open nosuchf.ile} f
if {$f in [file channels $f]} {
    # do stuff with the open channel using the handle $f
    # run other code in the knowledge that open succeeded
} else {
    # run other code in the knowledge that open failed
}
# run code that doesn't care whether open succeeded or failed
(The in operator is in Tcl 8.5+; if you have an earlier version you will need to write the test in another manner. You shouldn't be using earlier versions anyway, since they're not supported.)
This code checks if the value of f is one of the open channels that the interpreter knows about (if it isn't, the value is probably an error message). This is not an elegant solution.
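For completeness, on a pre-8.5 interpreter roughly the same (inelegant) test could be written with lsearch instead of the in operator; a sketch:
catch {open nosuchf.ile} f
if {[lsearch -exact [file channels] $f] >= 0} {
    # run other code in the knowledge that open succeeded
} else {
    # run other code in the knowledge that open failed
}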
Ensuring the channel is closed
This isn't really related to the question, but a good practice.
try {
    open nosuchf.ile
} on ok f {
    # do stuff with the open channel using the handle $f
    # run other code in the knowledge that open succeeded
} on error {} {
    # run other code in the knowledge that open failed
} finally {
    catch {chan close $f}
}
# run code that doesn't care whether open succeeded or failed
(The chan command was added in Tcl 8.5 to group several channel-related commands as subcommands. If you're using earlier versions of Tcl, you can just call close without the chan but you will have to roll your own replacement for try ... finally.)
The finally clause ensures that, whether or not the file was opened and whether or not any error occurred during the execution of the on ok or on error clauses, the channel is guaranteed to be non-existent (destroyed or never created) when we leave the try construct. The variable f will remain with an unusable value unless we unset it; since we don't know for sure whether it exists, we need to prevent the unset operation from raising errors by using catch {unset f} or unset -nocomplain f. I usually don't bother: if I use the name f again I just set it to a fresh value.
Documentation: catch, chan, close, error, in operator, file, if, open, set, try, unset
Old answer:
(This answer has its heart in the right place but I'm not satisfied with it these months later. Since it was accepted and even marked as useful by three people I am loath to delete it, but the answer above is IMHO better.)
If you attempt to open a non-existing file and assign the channel identifier to a variable, an error is raised and the contents of the variable are unchanged. If the variable didn't exist, it won't be created by the set command. So while there is no null value, you can either 1) set the variable to a value you know isn't a channel identifier before opening the file:
set fh {} ;# no channel identifier is the empty string
catch {set fh [open foo.bar]} ;# catch keeps the script alive if open fails
if {$fh eq {}} {
    puts "Nope, file wasn't opened."
}
or 2) unset the variable and test it for existence afterwards (use catch to handle the error that is raised if the variable didn't exist):
catch {unset fh}
catch {set fh [open foo.bar]} ;# again, catch the error if open fails
if {![info exists fh]} {
    puts "Nope, file wasn't opened."
}
If you want to test if a file exists, the easiest way is to use the file exists command:
file exists $file_name
if {![file exists $file_name]} {
    puts "No such file"
}
Documentation: catch, file, if, open, puts, set, unset

TCL - script to output a file which contains the sizes of all the files in the directory and subdirectories

Please help me with a script that outputs a file containing the names of the files in the subdirectories and their sizes in bytes; the argument to the program is the folder path. The output file should have the file name in the first column and its size in the second column.
Note: the folder contains subfolders, and inside the subfolders there are files.
I tried this way:
set fp [open files_memory.txt w]
set file_names [glob ../design_data/*/*]
foreach file $file_names {
    puts $fp "$file [lindex [exec du -sh $file] 0]"
}
close $fp
Result sample:
../design_data/def/ip2.def.gz 170M
../design_data/lef/tsmc13_10_5d.lef 7.1M
But I want only the file name to be printed, that is ip2.def.gz, tsmc13_10_5d.lef, etc. (not the entire path), and the file sizes should be aligned.
The fileutil package in Tcllib defines the command fileutil::find, which can recursively list the contents of a directory. You can then use foreach to iterate over the list and get the sizes of each of them with file size, before producing the output with puts, perhaps like this:
puts "$filename\t$size"
The $filename is the name of the file, and the $size is how large it is. You will have obtained these values earlier (i.e., in the line or two before!). The \t in the middle is turned into a TAB character. Replace with spaces or a comma or virtually anything else you like; your call.
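Putting those pieces together, a minimal sketch (it assumes the directory to scan is passed as the first command-line argument and writes files_memory.txt as in the question):
package require fileutil

set dir [lindex $argv 0]
set fp [open files_memory.txt w]
foreach filename [fileutil::find $dir] {
    # skip directories; report only plain files
    if {[file isfile $filename]} {
        puts $fp "$filename\t[file size $filename]"
    }
}
close $fp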
To get just the last part of the filename, I'd do:
puts $fp "[file tail $file] [file size $file]"
This writes the full information about the file size, not the abbreviated form, so if you really want 4k instead of 4096, keep using that (slow) incantation with exec du. (If the consumer is a program, or a programmer, writing the size out in full is probably better.)
In addition to Donal's suggestion, there are more tools for getting files recursively:
recursive_glob (from the Tclx package) and
for_recursive_glob (also from Tclx)
fileutil::findByPattern (from the fileutil package)
Here is an example of how to use for_recursive_glob:
package require Tclx
for_recursive_glob filename {../design_data} {*} {
    puts $filename
}
This suggestion, in combination with Donal's, should be enough for you to create a complete solution. Good luck.
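For example, combining for_recursive_glob with Donal's file tail and file size suggestion might look like this (a sketch, using the paths and output file name from the question):
package require Tclx

set fp [open files_memory.txt w]
for_recursive_glob filename {../design_data} {*} {
    if {[file isfile $filename]} {
        puts $fp "[file tail $filename] [file size $filename]"
    }
}
close $fp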
Discussion
The for_recursive_glob command takes 4 arguments:
The name of the variable representing the complete path name
A list of directories to search (e.g. {/dir1 /dir2 /dir3})
A list of patterns to search for (e.g. {*.txt *.c *.cpp})
Finally, the body of the for loop, where you want to do something with the filename.
Based on my experience, for_recursive_glob cannot handle directories that you don't have permission to read (at least on Mac, Linux, and BSD platforms; I don't know about Windows), in which case the script will crash unless you catch the exception.
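A sketch of guarding against that by wrapping the whole traversal in catch:
package require Tclx

if {[catch {
    for_recursive_glob filename {../design_data} {*} {
        puts $filename
    }
} err]} {
    puts stderr "traversal stopped: $err"
}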
The recursive_glob command is similar, but it returns a list of filenames instead of structuring in a for loop.

TCL: how to make a flow wait until the present flow completes?

I have a log which keeps on updating.
I am running a flow that generates a file. This flow runs in the background and
updates the log saying "[12:23:12:1] \m successfully completed (data_01)".
As soon as I see this comment, I use the file for the next flow.
I created a popup saying "wait till the log says successfully completed", to avoid
the script going to the next flow and getting aborted.
But the problem is that each and every time I need to check the log for that comment and
press OK in the popup.
Is there any way to capture the comment from the updating log?
I tried
set flag 0
while {$flag == 0} {
    set fp [open "|tail code.log" r]
    set data [read $fp]
    close $fp
    set data [split $data]
    if {[regexp {.*successfully completed.*} $data]} {
        set line $data
        set flag 1
    } else {
        continue
    }
}
I will pass this $line to the popup variable so that instead of saying "wait until
successfully completed" it will say "Successfully completed".
But this throws an error saying too many files are open, and it also doesn't wait.
There's a limit on the number of files that can be opened at once by a process, imposed by the OS. Usually, if you are getting close to that limit then you're doing something rather wrong!
So let's back up a little bit.
The simplest way to read a log file continuously is to open a pipe from the program tail with the -f option passed in, so it only reports things added to the file instead of reporting the end each time it is run. Like this:
set myPipeline [open "|tail -f code.log"]
You can then read from this pipeline and, as long as you don't close it, you will only ever read a line once. Exiting the Tcl process will close the pipe. You can either use a blocking gets to read each line, or a fileevent so that you get a callback when a line is available. This latter form is ideal for a GUI.
Blocking form
while {[gets $myPipeline line] >= 0} {
    if {[regexp {successfully completed \(([^()]+)\)} $line -> theFlowName]} {
        processFlow $theFlowName
    }
}
close $myPipeline
Callback form
Assuming that the pipeline is kept in blocking mode. Full non-blocking is a little more complex but follows a similar pattern.
fileevent $myPipeline readable [list GetOneLine $myPipeline]
proc GetOneLine {pipe} {
    if {[gets $pipe line] < 0} {
        # IMPORTANT! Close upon EOF to remove the callback!
        close $pipe
    } elseif {[regexp {successfully completed \(([^()]+)\)} $line -> theFlowName]} {
        processFlow $theFlowName
    }
}
Both of these forms call processFlow with the part of the line extracted from within the parentheses when it appears in the log. That's the part where it stops being generic Tcl...
It appears that what you want to do is monitor a file and wait, without hanging your UI, for a particular line to be added to it. To do this you cannot use asynchronous IO on the file, because in Tcl files are always readable. Instead you need to poll the file on a timer; in Tcl this means using the after command. So create a command that checks the time the file was last modified and, if it has changed since you last looked, opens the file and searches for your specific data. If the data is present, set some state variable to allow your program to continue to the next step. If not, schedule another call to your check function using after and a suitable interval.
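A sketch of that polling approach (the log file name, the one-second interval, and the flowDone flag are placeholders to adapt):
set ::flowDone 0
set ::lastMtime 0

proc checkLog {logfile} {
    # only re-read the log if it has changed since the last poll
    if {[file exists $logfile] && [file mtime $logfile] > $::lastMtime} {
        set ::lastMtime [file mtime $logfile]
        set f [open $logfile r]
        set data [read $f]
        close $f
        if {[regexp {successfully completed} $data]} {
            set ::flowDone 1 ;# signal the rest of the script to continue
            return
        }
    }
    after 1000 [list checkLog $logfile] ;# poll again in a second
}

after 0 [list checkLog code.log]
vwait ::flowDone ;# or rely on the Tk event loop if this is a GUI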
You could use a pipe as you have above, but you should use asynchronous IO to read data from the channel when it becomes available. That means using fileevent.