I was writing a Tcl program which looked something like this:
#!/usr/bin/tclsh
set fInp [open file1.txt r]
while {[gets $fInp line] >= 0} {
    statement 1
    statement 2
}
statement 3
statement 4
while {[gets $fInp line] >= 0} {
    statement 5
    statement 6
}
close $fInp
I was expecting this to work fine, but to my surprise, the second while loop was not executed at all.
I came to the conclusion that you cannot read a file twice in Tcl using the same file descriptor (or channel).
So I closed fInp and opened the file again as fInp2, and it worked!
What is the reason behind this behavior, and is there another way of doing it?
Thanks
This is normal behavior for reading from files in every programming language and OS I'm familiar with. Once you read to the end of the file in the first loop, there's nothing left to read. You can reset and adjust the internal offset into the file's contents using the seek command, though.
seek $fInp 0 start
after the first loop will reset it to the beginning of the file so you can read it again in the second loop.
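Putting it together, here is a sketch of the original program with the rewind added (comments stand in for the original statements):

set fInp [open file1.txt r]
while {[gets $fInp line] >= 0} {
    # first pass: statements 1 and 2
}
seek $fInp 0 start    ;# rewind the channel to the start of the file
while {[gets $fInp line] >= 0} {
    # second pass: statements 5 and 6
}
close $fInp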
I wrote two scripts to do something like this:
# script1, to dump info:
proc script1 {} {
    set file [open dump.tcl w]    ;# the dump file name is illustrative
    puts $file "set a 123"
    puts $file "set b 456"
    .....
    close $file
}
(The file I dump is 8 GB.)
# And script2 to source it and process the data:
set file_wrtie_out_by_script1 [open dump.tcl r]    ;# open the dump written by script1
while {[gets $file_wrtie_out_by_script1 line] != -1} {
    eval $line
}
close $file_wrtie_out_by_script1
# Do the job....
return
In this case, the script hangs at the return. How can I solve this? I've been stuck for 3+ days, thanks
Update:
Thanks to Colin, I now use source instead of eval, but even if I remove the "Do the job..." part and keep only the return, it still hangs.
The gets command will return the number of characters in the line that it just read from the file channel.
When all the lines of the file have been read, then gets will return -1.
Your problem is that you have a while loop that never ends. The loop can only terminate when gets returns -1, so make sure the condition compares against -1.
I agree with the comment from Colin that you should just use source instead of eval for each line. Using eval line-by-line will fail if you have a multi-line command (but that might not be the case in your example).
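For example, here is a minimal sketch of both approaches (the dump file name is hypothetical):

set f [open dump.tcl r]
while {[gets $f line] != -1} {
    eval $line    ;# evaluates one line at a time; breaks on multi-line commands
}
close $f

# Or, more robustly, let source evaluate the whole dump as one script:
source dump.tcl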
I'm not a Tcl programmer, but I need to modify a Tcl script that invokes an external command and tries to separate stdout and stderr. The following is a minimal example of how the script currently does this.
#!/usr/bin/tclsh8.4
set pipe [open "|cmd" r]
while {[gets $pipe line] >= 0} {puts $line}
catch {close $pipe} errorMsg
puts $errorMsg
Here, cmd is an external command, and for the sake of this example, I will replace it with the following shell script. (I'm working on a Linux machine, but you can modify this to write to stdout and stderr however is appropriate for your system.)
#!/bin/sh -f
echo "A" > /dev/stdout
echo "B" > /dev/stdout
echo "C" > /dev/stderr
echo "D" > /dev/stderr
When I execute cmd, I get the following four lines as expected:
% ./cmd
A
B
C
D
However, when I execute my Tcl script, I get:
% ./test.tcl
A
B
D
This is an example of a more general phenomenon, which is that catch seems to swallow all but the last line of stderr.
To me, the "obvious" way to approach this is to try to mimic what is happening with stdout, which obviously works and prints all lines of the output. However, the current implementation is based on getting a Tcl channel by using open "|cmd", which requires running an external command. I can't figure out how to create a channel without opening an external command, and even if I could figure that out, there are subsequent issues with this approach. (How do I get the output of close into the channel? And if I need to open a new channel to get the output of each channel I am closing, then wouldn't I need an infinite number of channels?)
If anyone has any idea why errorMsg drops the initial lines or another approach that does not suffer from this problem, please let me know.
I know that this will come up, so I will say in advance that switching to Tcl 8.5 is probably not an option for me in the short term, since I do not control the environment in which this script is run.
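One workaround that may be worth testing, sketched here rather than guaranteed to fit the original environment: the pipeline syntax used by open supports |&, which pipes both stdout and stderr of a command into the next stage, so routing both through cat delivers them on the single readable channel. The trade-off is that the two streams are merged and can no longer be told apart:

#!/usr/bin/tclsh8.4
# |& sends both stdout and stderr of ./cmd through cat, so all four
# lines arrive on $pipe in whatever order the OS delivers them.
set pipe [open "|./cmd |& cat" r]
while {[gets $pipe line] >= 0} {puts $line}
catch {close $pipe} errorMsg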
I have just started learning Tcl and I have a problem with reading a big file.
I have a data file which looks like the following:
420
360 360 360 3434.01913
P 6.9022 0.781399 -0.86106
C 4.36397 -0.627479 3.83363
P 6.90481 5.42772 3.08491
....
and ends like this:
P -7.21325 1.71285 -0.127325
C -4.14243 0.41123 4.67585
420
360 360 360 3210.69667
So C is the last line of one section and 420 is the start of the next section; every 420 lines make up a section of the whole file.
How can I read every section of this file and store it as, say, "frame1", and keep doing this until the end of the file (getting frame2, frame3, and so on)?
I have come up with a simple script just to read the whole file line by line, but I do not know how to do the rest. Thanks
The answer to your question "how to read every section of a file using tcl?" is quite simply "keep reading until you run out of lines".
The answer to the question "how to count sections and skip header lines" is something like this:
while { ...condition... } {
    if {[gets $fp line] < 0} break
    lassign $line name x y z
    if {$name eq "420"} {
        incr section_counter
    } elseif {$name in {P C}} {
        # do something with the data
    }
}
The condition for your while loop will be tested once for each line read. The first if command breaks out of the loop once the entire file has been read. You don't need to split the line you read unless you expect one of the lines to contain a string that isn't a proper list. Once you have assigned the fields of the line into the variables, you can look inside name to see what kind of line you got. The second if command says that if $name is the string "420", the section counter is increased. If, on the other hand, $name contains "P" or "C", you have a data line to process. If neither of these conditions are fulfilled, you have the line after a "420" line, which is simply ignored.
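If you also want to keep the frames around, a minimal sketch under the same assumptions (the file name is hypothetical) is to collect each section's data lines into an array keyed by the section counter, so that $frame(1), $frame(2), ... hold the lines of frame 1, frame 2, and so on:

set fp [open data.txt r]
set section_counter 0
while {[gets $fp line] >= 0} {
    lassign $line name x y z
    if {$name eq "420"} {
        incr section_counter              ;# a new frame starts here
    } elseif {$name in {P C}} {
        lappend frame($section_counter) [list $name $x $y $z]
    }
}
close $fp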
Documentation: break, gets, if, incr, lassign, while
I have a log which keeps on updating.
I am running a flow that generates a file. This flow runs in the background and updates the log, saying "[12:23:12:1] \m successfully completed (data_01)".
As soon as I see this message, I use the file for the next flow.
I created a popup saying "wait till the log says successfully completed", to keep the script from going on to the next flow and getting aborted.
But the problem is that each and every time, I need to check the log for that message and press OK in the popup.
Is there any way to capture the message from the updating log?
I tried
set flag 0
while {$flag == 0} {
    set fp [open "|tail code.log" r]
    set data [read $fp]
    close $fp
    set data [split $data]
    if {[regexp {.*successfully completed.*} $data]} {
        set line $data
        set flag 1
    } else {
        continue
    }
}
I will pass this $line to the popup variable, so that instead of saying "wait until successfully completed" it will say "Successfully completed".
But this throws an error saying too many files are open, and it also doesn't wait.
There's a limit on the number of files that can be opened at once by a process, imposed by the OS. Usually, if you are getting close to that limit then you're doing something rather wrong!
So let's back up a little bit.
The simplest way to read a log file continuously is to open a pipe from the program tail with the -f option passed in, so it only reports things added to the file instead of reporting the end each time it is run. Like this:
set myPipeline [open "|tail -f code.log"]
You can then read from this pipeline and, as long as you don't close it, you will only ever read a line once. Exiting the Tcl process will close the pipe. You can either use a blocking gets to read each line, or a fileevent so that you get a callback when a line is available. This latter form is ideal for a GUI.
Blocking form
while {[gets $myPipeline line] >= 0} {
    if {[regexp {successfully completed \(([^()]+)\)} $line -> theFlowName]} {
        processFlow $theFlowName
    }
}
close $myPipeline
Callback form
This assumes that the pipeline is kept in blocking mode; full non-blocking is a little more complex but follows a similar pattern.
fileevent $myPipeline readable [list GetOneLine $myPipeline]
proc GetOneLine {pipe} {
    if {[gets $pipe line] < 0} {
        # IMPORTANT! Close upon EOF to remove the callback!
        close $pipe
    } elseif {[regexp {successfully completed \(([^()]+)\)} $line -> theFlowName]} {
        processFlow $theFlowName
    }
}
Both of these forms call processFlow with the bit of the line extracted from within the parentheses when it appears in the log. That's the part where it becomes not generic Tcl any more…
It appears that what you want to do is monitor a file and wait, without hanging your UI, for a particular line to be added to it. You cannot use asynchronous I/O on the file for this, because in Tcl files are always readable. Instead, you need to poll the file on a timer, which in Tcl means using the after command. Create a command that checks the time the file was last modified; if it has changed since you last looked, open the file and search for your specific data. If the data is present, set some state variable to allow your program to continue to the next step. If not, schedule another call to your check function using after and a suitable interval.
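Here is a rough sketch of that polling approach (the log file name, the 500 ms interval, and the ::done flag are all illustrative):

set lastMtime 0
proc checkLog {} {
    global lastMtime
    if {[file mtime code.log] != $lastMtime} {
        set lastMtime [file mtime code.log]
        set fp [open code.log r]
        set data [read $fp]
        close $fp
        if {[regexp {successfully completed} $data]} {
            set ::done 1    ;# the state variable the rest of the program waits on
            return
        }
    }
    after 500 checkLog    ;# check again in 500 ms
}
checkLog
vwait ::done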
You could use a pipe as you have above, but then you should use asynchronous I/O to read data from the channel when it becomes available. That means using fileevent.
I need to store some logs in a file that can grow with every execution. A logical way would be to use the a+ option when opening, because w+ would truncate the file. However, with the a+ option (Tcl 8.4) I cannot write anywhere but the end of the file. seek works fine, and I can verify that the pointer was moved using tell, but the output always goes to the tail end of the file.
Is there any way to resolve this? That is, to be able to seek and write anywhere while still preserving the old file contents on open.
In Tcl 8.5, the behavior of Tcl on Unix was changed so that the O_APPEND flag is passed to the open() system call. This makes the OS always append the data to the file, and the behavior is inherited when the FD is passed to subprocesses; for logs, it is exactly the right thing. (In 8.4 and before, and in all versions on Windows, the behavior is simulated inside Tcl's file channel implementation, which internally seek()s to the end immediately before each write(); that is obviously subject to race conditions when multiple processes log to the same file, and it is definitely unsafe when the FD is passed to subprocesses.) You can manage truncation of the opened file with chan truncate (new in 8.5), which works just fine on a+-opened files.
If you do not want the seek-to-end behavior, you should not use a+ (or a). Try r+ or some combination of flags, like this:
set f [open $filename {RDWR CREAT}]
For comparison, the a+ option is now exactly the same as the flags RDWR CREAT APPEND, and not all combinations of longer flags can be described by short form flag specifiers. If you're not specifying APPEND, you'll need to do the seek $f 0 end yourself (and watch out for problems with multiple processes if you're appending to logs; that's when APPEND becomes required and exceptionally hard to correctly simulate any other way).
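Putting those pieces together, a small sketch (the filename variable is from the snippet above):

set f [open $filename {RDWR CREAT}]
seek $f 0 end    ;# no APPEND flag, so move to the end manually
puts $f "new log entry"
close $f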
Open with r+ - it opens the file for reading (thus not truncating it) but allows writing as well.
See the documentation of open for more info: http://www.tcl.tk/man/tcl8.5/TclCmd/open.htm
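For instance, a minimal sketch (the file name is hypothetical; note that r+ requires the file to already exist):

set f [open mylog.txt r+]
seek $f 0 end    ;# r+ starts at the beginning, so seek to the end to append
puts $f "another entry"
close $f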
I have verified that the a+ option allows me to read and write anywhere in the file. However, when writing in the middle (or at the beginning) of the file, I overwrite the data there rather than inserting. The following code illustrates that point:
#!/usr/bin/env tclsh
# Open the file, with truncation
set f [open foo w]
puts $f "one"
puts $f "two"
close $f
# Open again, with a+ ==> read/write/append
set f [open foo a+]
puts $f "three" ;# This goes to the end of the file
seek $f 4 ;# Seek to the beginning of the word "two"
puts $f "2.0" ;# Overwrite the word "two"
close $f
# Open and verify the contents
set f [open foo r]
puts [read $f]
close $f
Output:
one
2.0
three
If you are looking to insert in the middle of the file, you might want to look at the fileutil package, which contains the ::fileutil::insertIntoFile command.
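For instance, a hedged sketch (assuming the Tcllib fileutil package is installed; the offset and text are illustrative):

package require fileutil
# Insert text at byte offset 4 of foo, shifting the rest of the file
# along instead of overwriting it.
::fileutil::insertIntoFile foo 4 "2.0 "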