I want to monitor a log file that is continuously updated by another program. I want to read the log file every 10 minutes; how can I do that? And is it possible to read only the updated contents each time?
Assuming that the log file is only being appended to, you can simply save where you are before closing it and restore that location when you reopen it. Saving the location is done with chan tell and restoring it is done with chan seek.
proc processLogLine {line} {
    # Write this yourself...
    puts "got log line '$line'"
}

proc readLogTail {logfile position} {
    # Read the tail of the log file from $position, noting where we got to
    set f [open $logfile]
    chan seek $f $position
    set tail [read $f]
    set newPosition [chan tell $f]
    close $f
    # If we read anything at all, handle the log lines within it
    if {$newPosition > $position} {
        foreach line [split $tail "\n"] {
            processLogLine $line
        }
    }
    return $newPosition
}

proc readLogEvery10Minutes {logfile {position 0}} {
    set newPosition [readLogTail $logfile $position]
    set tenMinutesInMillis [expr {10 * 60 * 1000}]
    after $tenMinutesInMillis [list readLogEvery10Minutes $logfile $newPosition]
}
readLogEvery10Minutes /tmp/example.log
vwait forever
Note the vwait forever at the end; that runs the event loop so that timer callbacks scheduled with after can actually be run. If you've already got an event loop going elsewhere (e.g., because this is a Tk application) then you don't need the vwait forever.
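If the polling ever needs to be stopped, the token returned by after can be cancelled. A minimal sketch, assuming the last line of readLogEvery10Minutes is changed to save the token (the ::pollToken name is invented for illustration):

set ::pollToken [after $tenMinutesInMillis \
        [list readLogEvery10Minutes $logfile $newPosition]]
# ... later, from anywhere in the program:
after cancel $::pollToken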
This question is an extension of what is answered in the linked question. While trying to use it to print output with a delay, cat file_name on the shell doesn't display the content of the file during the delay introduced with after. Here's the code:
proc foo {times} {
    while {$times > 0} {
        puts "hello$times"
        incr times -1
        after 20000
        puts "hello world"
    }
}

proc reopenStdout {file} {
    close stdout
    open $file w ;# The standard channels are special
}

reopenStdout ./bar
foo 10
The data you're writing is being buffered in memory, and you're not writing enough of it to flush the internal buffer to disk. Add a flush stdout inside the loop in foo, or set up the newly opened channel to be line-buffered:
proc reopenStdout {file} {
    close stdout
    set ch [open $file w] ;# The standard channels are special
    chan configure $ch -buffering line
}
You can play with chan configure's -buffering and -buffersize options to get the behavior that works best for your needs if line buffering isn't enough.
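For instance, a minimal sketch of the first suggestion, with an explicit flush stdout inside the loop of foo from the question:

proc foo {times} {
    while {$times > 0} {
        puts "hello$times"
        flush stdout        ;# Push the buffered output to the file now
        incr times -1
        after 20000
        puts "hello world"
        flush stdout
    }
}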
Hi, I am using this piece of code for working with a pipe in Tcl. Can anybody please help me understand when the condition [gets $pipe line] >= 0 fails?
For example, it only fails when [gets $pipe line] returns a negative number.
In my case it never returns a negative number and the TestEngine hangs forever:
set pipeline [open "|Certify.exe filename" "r+"]
fileevent $pipeline readable [list handlePipeReadable $pipeline]
fconfigure $pipeline -blocking 0

proc handlePipeReadable {pipe} {
    if {[gets $pipe line] >= 0} {
        # Managed to actually read a line; stored in $line now
    } elseif {[eof $pipe]} {
        # Pipeline was closed; get exit code, etc.
        if {[catch {close $pipe} msg opt]} {
            set exitinfo [dict get $opt -errorcode]
        } else {
            # Successful termination
            set exitinfo ""
        }
        # Stop the waiting in [vwait], below
        set ::donepipe $pipe
    } else {
        puts ""
        # Partial read; things will be properly buffered up for now...
    }
}

vwait ::donepipe
The gets command (when given a variable to receive the line) returns a negative number when it is in a minor error condition. There are two such conditions:
When the channel has reached end-of-file. After the gets the eof command (applied to the channel) will report a true value in this case.
When the channel is blocked, i.e., when it has some bytes but not a complete line (Tcl has internal buffering to handle this; you can get the number of pending bytes with chan pending). You only see this when the channel is in non-blocking mode (because otherwise the gets will wait indefinitely). In this case, the fblocked command (applied to the channel) will return true.
Major error conditions (such as the channel being closed) result in Tcl errors.
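A minimal sketch of telling the two minor conditions apart after a failed gets, assuming $pipe is the non-blocking channel from the question:

if {[gets $pipe line] < 0} {
    if {[eof $pipe]} {
        # The channel reached end-of-file
    } elseif {[fblocked $pipe]} {
        # No complete line yet; [chan pending input $pipe] reports
        # how many bytes are sitting in Tcl's internal buffer
    }
}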
If the other command only produces partial output or does something weird with buffering, you can get an eternally blocked pipeline. It's more likely with a bidirectional pipe, such as you're using, as the Certify command is probably waiting for you to close the other end. Can you use it read-only? There are many complexities to interacting correctly with a process bidirectionally! (For example, you probably want to make the pipe's output buffering mode be unbuffered, fconfigure $pipeline -buffering none.)
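If read-only access turns out to be enough, the setup might look like this (a sketch; whether Certify.exe can run without its stdin connected is an assumption):

set pipeline [open "|Certify.exe filename" "r"]
fconfigure $pipeline -blocking 0
fileevent $pipeline readable [list handlePipeReadable $pipeline]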
Please find the way the certify process is being triggered from the command prompt; the print statements are there just for understanding. At the end, the process hangs and control is not transferred back to Tcl.
From the documentation for gets:
If varName is specified and an empty string is returned in varName because of end-of-file or because of insufficient data in nonblocking mode, then the return count is -1.
Your script is working completely fine. I checked with set pipeline [open "|du /usr" "r+"]
instead of your pipe and included puts "Line: $line" to check the result. So it's clear that there is some problem with the Certify command. Can you share your command, how you use it on the terminal, and how you used it with exec?
################### edited by Drektz
set pipeline [open "|du /usr" "r+"]
fileevent $pipeline readable [list handlePipeReadable $pipeline]
fconfigure $pipeline -blocking 0

proc handlePipeReadable {pipe} {
    if {[gets $pipe line] >= 0} {
        # Managed to actually read a line; stored in $line now
        ################### included by Drektz
        puts "Line: $line"
    } elseif {[eof $pipe]} {
        # Pipeline was closed; get exit code, etc.
        if {[catch {close $pipe} msg opt]} {
            set exitinfo [dict get $opt -errorcode]
        } else {
            # Successful termination
            set exitinfo ""
        }
        # Stop the waiting in [vwait], below
        set ::donepipe $pipe
    } else {
        puts ""
        # Partial read; things will be properly buffered up for now...
    }
}

vwait ::donepipe
You can see it in the CMDwith& screenshot mentioned. Is there any workaround I can use to overcome the issue?
Please see the issue when run with the help of exec executable args &.
EDIT: Original example and alternative solution framework modified for clarity.
Line buffering might behave differently than expected in Tcl 8.6. The following code blocks without producing any output, unless the "chan close" line is uncommented:
set data {one two four}
set stream [open |[list cat -n] r+]
chan configure $stream -buffering line
chan puts $stream "$data\n"
chan puts $stream "\n"
chan flush $stream
#chan close $stream write
set out [chan read $stream]
puts "output: $out"
chan close $stream
So this simplistic solution does not work for interactive I/O, and this might be related to synchronization problems at both ends of the pipe.
Using a channel event structure (e.g., based on http://www.beedub.com/book/2nd/event.doc.html) seems preferable:
proc chanReader { pipe } {
    global extState
    while 1 {
        set len [chan gets $pipe line]
        if { $len > 0 } {
            puts "<< $line."
            continue
        } else {
            if { [chan blocked $pipe] } {
                set extState 1
                return
            } elseif { [chan eof $pipe] } {
                set extState 2
                return
            }
        }
    }
}
set data {one two foure}
set timeout 5000

#set stream [open [list | cat -n] r+]
#set stream [open [list | ispell -a] r+]
set stream [open [list | tr a-z A-Z] r+]
#set stream [open [list | fmt -] r+]

chan configure $stream -blocking 0 -buffering line
set extState 0
chan event $stream readable [list chanReader $stream]

foreach word $data {
    puts "> $word\n"
    chan puts $stream "$word\n"
    chan flush $stream
    #chan close $stream write
    set aID [after $timeout {set extState 3}]
    vwait extState
    if { $extState == 1 } {
        # Got regular output.
        after cancel $aID
        puts "Cancel $aID."
        continue
    } elseif { $extState == 2 } {
        puts "External program closed."
        chan close $stream
        exit 2
    } elseif { $extState == 3 } {
        puts "Timeout."
        chan close $stream
        exit 3
    }
}
puts "End of task."
chan close $stream
exit 0
This code fragment works with the "cat -n" and "ispell -a" external programs (commented lines), but still fails with other external programs. For instance it does not work with the "tr a-z A-Z" and "fmt" examples above.
If the line "chan close $stream write" above is uncommented, we receive output from the external program, but this terminates the interaction with it. How to reliably connect (interactively) to these external programs?
I'm guessing that the core issue here is that there are two sources of buffering going on, and Tcl only has control over one of them. But both stem from the fact that virtually all output, when not going to an “interactive” destination (i.e., a terminal), is buffered. There's basically a call in the C standard library that determines this and enables the buffering feature, and Tcl follows that rule too (despite using its entirely independent I/O library). Doing this massively speeds up non-interactive pipeline processing, but means that if you're expecting to see every byte output exactly at the point when the program thinks it is writing it, you're going to be disappointed.
Of course, programs can switch this buffering off if they want. In Tcl, this is done with fconfigure $channel -buffering none (or line for line-oriented buffering). In cat, the -n option makes it do the equivalent (calling setvbuf() in C), and ispell is probably doing the same. But most programs don't. Some instead call fflush() from time to time; that works too, but is also a minority practice. So with a bidirectional pipeline such as you're using, you can easily force the side where you feed into it from Tcl to not buffer, but you can't usually get the other side to do the same.
There is a workaround: run the subprocess with Expect. That puts a fake terminal between Tcl and the subprocess (instead of a pipe) and tricks it into thinking it is talking direct to the user. But the consequence of this is that you have to substantially rewrite your Tcl program and you gain a dependency on a (very fine!) external package.
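A minimal sketch of the Expect approach, assuming the Expect package is installed and reusing the tr a-z A-Z example (the pattern and timeout handling are illustrative only):

package require Expect

# spawn runs the program behind a pseudo-terminal, so the C library
# sees an interactive destination and does not fully buffer its output
spawn tr a-z A-Z
send "one two four\r"
expect {
    -re {ONE TWO FOUR} {
        puts "got: $expect_out(0,string)"
    }
    timeout {
        puts "no response"
    }
}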
Currently I am running the following commands:
set pid [exec make &]
set term_id [wait $pid]
The first command executes the makefile inside Tcl and the second waits for the first command's makefile operation to complete. The first command displays all the logs of the makefile on stdout. Is it possible to store all the logs in a variable or a file when "&" is given as the last argument of exec, using redirection or any other method?
If "&" is not given then we can take the output using,
set log [exec make]
But if "&" is given then command will return process id,
set pid [exec make &]
So is it possible stop the stdout logs and put them in variable?
If you are using Tcl 8.6, you can capture the output using:
lassign [chan pipe] reader writer
set pid [exec make >@$writer &]
close $writer
Don't forget to read from the $reader or the subprocess will stall. Be aware that when used in this way, the output will be delivered fully-buffered, though this is more important when doing interactive work. If you want the output echoed to standard out as well, you will need to make your script do that. Here's a simple reader handler.
while {[gets $reader line] >= 0} {
    lappend log $line
    puts $line
}
close $reader
Before Tcl 8.6, your best bet would be to create a subprocess command pipeline:
set reader [open |make]
If you need the PID, this can become a bit more complicated:
set reader [open |[list /bin/sh -c {echo $$; exec make}]]
set pid [gets $reader]
Yes, that's pretty ugly…
[EDIT]: You're using Tk, in Tcl 8.5 (so you need the open |… pipeline form from above), and so you want to keep the event loop going. That's fine. That's exactly what fileevent is for, but you have to think asynchronously.
# This code assumes that you've opened the pipeline already
fileevent $reader readable [list ReadALine $reader]

proc ReadALine {channel} {
    if {[gets $channel line] >= 0} {
        HandleLine $line
    } else {
        # No line could be read; must be at the end
        close $channel
    }
}

proc HandleLine {line} {
    global log
    lappend log $line;    # Or insert it into the GUI or whatever
    puts $line
}
This example does not use non-blocking I/O. That might cause an issue, but probably won't. If it does cause a problem, use this:
fconfigure $reader -blocking 0
fileevent $reader readable [list ReadALine $reader]

proc ReadALine {channel} {
    if {[gets $channel line] >= 0} {
        HandleLine $line
    } elseif {[eof $channel]} {
        close $channel
    }
}

proc HandleLine {line} {
    global log
    lappend log $line
    puts $line
}
More complex and versatile versions are possible, but they're only really necessary once you're dealing with untrusted channels (e.g., public server sockets).
If you'd been using 8.6, you could have used coroutines to make this code look more similar to the straight-line code I used earlier, but they're a feature that is strictly 8.6 (and later, once we do later versions) only as they depend on the stack-free execution engine.
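For reference, a minimal sketch of that coroutine style in 8.6, assuming $reader is the pipeline channel from above (the pipeReader name is invented):

proc pipeReader {channel} {
    fconfigure $channel -blocking 0
    # Resume this coroutine whenever the channel becomes readable
    fileevent $channel readable [info coroutine]
    while 1 {
        yield
        if {[gets $channel line] >= 0} {
            HandleLine $line
        } elseif {[eof $channel]} {
            close $channel
            return
        }
    }
}
coroutine makeReader pipeReader $reader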
The AUT creates logs for a particular function run and appends them to a central file.
The line to search in this file is:
LatestTimeStamp>MyFunction SomeStep timeLapsed SOME_TIME_VALUE
Every time the AUT generates the log, multiple fresh entries of a similar pattern are produced as above, and these fresh entries need to be extracted.
The simple approach I am using is:
class structure
itcl::class clsLogs {
    variable _oldTimeStamp ""
    variable _logRec
    variable _runCtr 0

    method _extractInfoForRun {runType} {
        # Read the whole log
        set fp [open [file join [file normalize $::env(APPDATA)] Logs Action.log]]
        set log [read $fp]
        close $fp

        # Discard everything up to and including the old time stamp, keeping only fresh log
        set freshLog $log
        if {$_oldTimeStamp ne ""} {
            regsub [subst -nobackslashes -nocommands {.*$_oldTimeStamp[^\n]*\n}] $log "" freshLog
        }

        # Increment run counter for this run
        incr _runCtr

        # Get all fresh entry lines reporting timeLapsed for different steps of MyFunction in this run
        set freshEntries [regexp -inline -all [subst -nocommands -nobackslashes {[^\n]*MyFunction[^\n]*timeLapsed[^\n]*}] $freshLog]

        # Iterate and collect time-lapsed info for each step of MyFunction for this run
        foreach ent $freshEntries {
            regexp {(.*?)>.*>>MyFunction\s+(.*)\s+timeLapsed\s+(.*)$} $ent -> timeStamp runStep lapsedTime
            puts "************runType>$runType***********\n\t$ent\n\ttimeStamp->$timeStamp\nlapsedTime->$lapsedTime"
            set _logRec(MyFunction_Run-$_runCtr:$runStep,lapsedTime) $lapsedTime
        }

        # Remember the newest time stamp for the next run
        set _oldTimeStamp $timeStamp
    }
}
But this file could be huge, and reading the whole thing into one variable could use far too much memory:
set log [read [set fp [open [file join [file normalize $env(APPDATA)] Logs Action.log]]]]
Is it somehow possible to get the current position of the file pointer after each read, and on the next read seek to that saved offset and read only from there? What are the Tcl commands for this?
So this does it, with tell and seek:
seek [set fp [open $file]] $_fOffset
set txt [read $fp]
set _fOffset [tell $fp]
In context:
::itcl::class clsLogs {
    private {
        variable _fOffset 0
    }
    public {
        method _fFreshRead {file args} {
            set options(-resetOffSet) false
            array set options $args
            if {$options(-resetOffSet)} {
                set _fOffset 0
            }
            seek [set fp [open $file]] $_fOffset
            set txt [read $fp]
            set _fOffset [tell $fp]
            close $fp
            return $txt
        }
    }
}
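A short usage sketch of the class above (the object name and log path are illustrative):

clsLogs logs
set fresh [logs _fFreshRead /path/to/Action.log]
# Re-read from the beginning of the file:
set all [logs _fFreshRead /path/to/Action.log -resetOffSet true]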