To parse a log file I want to do something like this:
- tail the file
- after some time, write out the parsed data and do other things
Here is my (sample) script:
#!/usr/bin/tclsh
proc readfile {fd} {
    while {[gets $fd line] >= 0} {
        puts $line
    }
}
proc writefile {} {
    puts xxx
    flush stdout
}
if {$::argc > 0} {
    set fd [open [list | tail --follow=name --retry [lindex $argv 0] 2>@1]]
} else {
    set fd stdin
}
after 3000 writefile
fileevent $fd readable [list readfile $fd]
vwait done
close $fd
Tailing works fine, but the after script is never triggered.
Any idea what I'm doing wrong?
In the readfile proc you are using a while loop on a blocking channel. Once the buffered lines are consumed, gets blocks there waiting for the next line, so control never returns to the event loop, and that is why the after script is not triggered.
#!/usr/bin/tclsh
proc readfile {fd} {
    global done
    puts "READ FILE CALLED..."
    gets $fd line; # removed the 'while' loop here
    puts "->>>>$line<<<<<<<<<"
    ### Your condition to exit the event handler goes here, ###
    ### e.g. set done 1 -- setting 'done' makes the vwait   ###
    ### below return and the script exit cleanly.           ###
}
proc writefile {} {
    puts "WRITE FILE CALLED"
    puts xxx
    flush stdout
}
if {$::argc > 0} {
    set fd [open [list | tail --follow=name --retry [lindex $argv 0] 2>@1]]
} else {
    set fd stdin
}
after 3000 writefile
fileevent $fd readable [list readfile $fd]
vwait done
close $fd
Output:
dinesh#dinesh-VirtualBox:~/pgms/tcl$ ./ubi.tcl
WRITE FILE CALLED
xxx
ubi
READ FILE CALLED...
->>>>ubi<<<<<<<<<
cool
READ FILE CALLED...
->>>>cool<<<<<<<<<
working
READ FILE CALLED...
->>>>working <<<<<<<<<
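If you would rather keep draining whole batches of lines per event, an alternative sketch (untested, building on the question's script; the stdin fallback is omitted) is to make the channel non-blocking, so gets returns -1 once the buffered input is exhausted and control falls back to the event loop:

#!/usr/bin/tclsh
proc readfile {fd} {
    # On a non-blocking channel, gets returns -1 when no complete
    # line is available, so this loop drains what is buffered and
    # then returns to the event loop instead of blocking in gets.
    while {[gets $fd line] >= 0} {
        puts $line
    }
    if {[eof $fd]} {
        set ::done 1
    }
}
proc writefile {} {
    puts xxx
    flush stdout
}
set fd [open [list | tail --follow=name --retry [lindex $argv 0] 2>@1]]
fconfigure $fd -blocking 0
after 3000 writefile
fileevent $fd readable [list readfile $fd]
vwait done
close $fd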
I need to redirect the output of a proc to a file. The "redirect" command isn't working in the Tcl interpreter that my tool uses, so I'm trying "exec echo [proc_name]" instead, which was suggested in one of the threads on this site. But this doesn't work either: the file ${dump_dir}/signal_list.txt comes out empty.
proc dump_signals {{dump_dir "."}} {
    upvar build build
    puts "Dumping signals.."
    set my_file [open ${dump_dir}/signal_list.txt w]
    exec echo [get_signals] > $my_file
}
get_signals is a proc which calls another proc:
proc puts_name_and_value {name} {
    set value [value %h $name]
    puts "$name $value"
}
proc get_signals {} {
    # Get list of signals
    set signal_list {test.myreg test.myreg2}
    foreach signal $signal_list {
        puts_name_and_value $signal
    }
}
My workaround for now is the following: writing to the file in the bottom-level proc by upvar'ing the file variable. This works, but it isn't the cleanest way of doing it. Please let me know how to cleanly redirect the output of a proc to a file.
proc puts_name_and_value {name} {
    upvar my_file my_file
    set value [value %h $name]
    puts $my_file "$name $value"
    #puts "$name $value"
}
proc get_signals {} {
    upvar my_file my_file
    # Get list of signals
    set signal_list {test.myreg test.myreg2}
    foreach signal $signal_list {
        puts_name_and_value $signal
    }
}
proc dump_signals {{dump_dir "."}} {
    upvar build build
    puts "Dumping signals.."
    set my_file [open ${dump_dir}/signal_list.txt w]
    get_signals
}
Your get_signals proc writes to standard output and returns nothing, so [get_signals] is substituted as an empty string and exec echo has nothing to write. (On top of that, exec's > expects a file name, while $my_file holds an open Tcl channel; redirecting into an already-open channel is spelled >@ $my_file.) So of course the file ends up empty.
One solution is to use Tcl's transchan API with chan push and chan pop to temporarily override where stdout goes. Example:
#!/usr/bin/env tclsh
proc redirector {fd args} {
    # A minimal transchan handler: data written to the transformed
    # channel is forwarded to $fd. The write branch returns the empty
    # string, so nothing is passed through to the real stdout.
    switch -- [lindex $args 0] {
        initialize {
            # Sanity check
            if {[lindex $args 2] ne "write"} {
                error "Can only redirect an output channel"
            }
            return {initialize write finalize}
        }
        write {
            puts -nonewline $fd [lindex $args 2]
        }
        finalize {
            close $fd
        }
    }
}
proc writer_demo {} {
    puts "line one"
    puts "line two"
}
proc main {} {
    chan push stdout [list redirector [open output.txt w]]
    writer_demo
    chan pop stdout
}
main
Running this script will produce a file output.txt with the contents of writer_demo's puts calls instead of having them go to standard output like normal.
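Applied to the question's code, a sketch (untested, reusing the redirector proc above) could look like this:

proc dump_signals {{dump_dir "."}} {
    puts "Dumping signals.."
    # While the transform is pushed, everything get_signals
    # writes to stdout lands in signal_list.txt instead.
    chan push stdout [list redirector [open ${dump_dir}/signal_list.txt w]]
    get_signals
    chan pop stdout
}

The "Dumping signals.." message is printed before the push, so it still goes to the real stdout.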
You could also just pass the file handle to write to as an argument to your procs (instead of using upvar everywhere):
proc puts_name_and_value {out name} {
    set value [value %h $name]
    puts $out "$name $value"
}
proc get_signals {{out stdout}} {
    # Get list of signals
    set signal_list {test.myreg test.myreg2}
    foreach signal $signal_list {
        puts_name_and_value $out $signal
    }
}
proc dump_signals {{dump_dir "."}} {
    upvar build build
    puts "Dumping signals.."
    set my_file [open ${dump_dir}/signal_list.txt w]
    get_signals $my_file
    close $my_file   ;# flush the buffer and release the file
}
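A third option (a sketch, untested; it assumes the value command from your question is available) is to have get_signals build and return the text instead of printing it, so the caller decides where it goes:

proc get_signals {} {
    # Collect one line per signal and return the whole block.
    set lines {}
    foreach signal {test.myreg test.myreg2} {
        lappend lines "$signal [value %h $signal]"
    }
    return [join $lines \n]
}
proc dump_signals {{dump_dir "."}} {
    puts "Dumping signals.."
    set my_file [open ${dump_dir}/signal_list.txt w]
    puts $my_file [get_signals]
    close $my_file
}

This also makes exec echo [get_signals] behave as you originally expected, although plain puts is all you need once the proc returns its result.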
I am trying to write an expect script that reacts to input read from a pipe. Consider this example in the file "controller.sh":
#!/usr/bin/env expect
spawn bash --noprofile --norc
set timeout 3
set success 0
send "PS1='Prompt: '\r"
expect {
    "Prompt: " { set success 1 }
}
if { $success != 1 } { exit 1 }
proc do { cmd } {
    puts "Got command: $cmd"
    set success 0
    set timeout 3
    send "$cmd\r"
    expect {
        "Prompt: " { set success 1 }
    }
    if { $success != 1 } { puts "oops" }
}
set cpipe [open "$::env(CMDPIPE)" r]
fconfigure $cpipe -blocking 0
proc read_command {} {
    global cpipe
    if {[gets $cpipe cmd] < 0} {
        close $cpipe
        set cpipe [open "$::env(CMDPIPE)" r]
        fconfigure $cpipe -blocking 0
        fileevent $cpipe readable read_command
    } else {
        if { $cmd == "exit" } {
            exp_close
            exp_wait
            exit 0
        } elseif { $cmd == "ls" } {
            do ls
        } elseif { $cmd == "pwd" } {
            do pwd
        }
    }
}
fileevent $cpipe readable read_command
vwait forever
Suppose you do:
export CMDPIPE=~/.cmdpipe
mkfifo $CMDPIPE
./controller.sh
Now, from another terminal try:
export CMDPIPE=~/.cmdpipe
echo ls >> ${CMDPIPE}
echo pwd >> ${CMDPIPE}
In the first terminal the "Got command: ls/pwd" lines are printed immediately, as soon as you press Enter on each echo command, but there is no output from the spawned bash shell (no file listing, no current directory). Now, try it once more:
echo ls >> ${CMDPIPE}
Suddenly the output from the first two commands appears, but the third command (the second ls) is not visible. Keep going and you will notice a "lag" in the displayed output, which seems to be buffered and then dumped all at once later.
Why is this happening and how can I fix it?
According to fifo(7):
Normally, opening the FIFO blocks until the other end is opened also.
So, in the proc read_command, the script blocks on set cpipe [open "$::env(CMDPIPE)" r] when it reopens the pipe after EOF, and it does not get a chance to display the spawned process's output until you echo ... >> ${CMDPIPE} again.
To work around it, you can open the FIFO (named pipe) in non-blocking mode:
set cpipe [open "$::env(CMDPIPE)" {RDONLY NONBLOCK}]
This is also mentioned in fifo(7):
A process can open a FIFO in nonblocking mode. In this case, opening for read-only will succeed even if no one has opened on the write side yet ...
The following is a simplified version of your code, and it works fine for me (tested on Debian 9.6).
spawn bash --norc
set timeout -1
expect -re {bash-[.0-9]+[#$] $}
send "PS1='P''rompt: '\r"
# The 'P''rompt quoting trick keeps expect from matching the
# echoed command line itself instead of the real prompt.
expect "Prompt: "
proc do { cmd } {
    send "$cmd\r"
    if { $cmd == "exit" } {
        expect eof
        exit
    } else {
        expect "Prompt: "
    }
}
proc read_command {} {
    global cpipe
    if {[gets $cpipe cmd] < 0} {
        close $cpipe
        set cpipe [open cpipe {RDONLY NONBLOCK}]
        fileevent $cpipe readable read_command
    } else {
        do $cmd
    }
}
set cpipe [open cpipe {RDONLY NONBLOCK}]
fileevent $cpipe readable read_command
vwait forever
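With the non-blocking reopen in place, the test sequence from the question (here echo ls >> cpipe, echo pwd >> cpipe, since the simplified script reads a FIFO literally named cpipe) should show the spawned shell's output immediately, with no buffered lag.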
# Prints the string to a file
puts $chan "$timestamp - Running test: $test"
# Prints the string to the console
puts "$timestamp - Running test: $test"
Is there a way I can send the output of puts to the screen and to a log file at the same time? Currently I have both of the above lines, one after the other, in my script to achieve this.
Or is there any other solution in Tcl?
Use the following proc instead of puts:
proc multiputs {args} {
    if { [llength $args] == 0 } {
        error "Usage: multiputs ?channel ...? string"
    } elseif { [llength $args] == 1 } {
        set channels stdout
    } else {
        set channels [lrange $args 0 end-1]
    }
    set str [lindex $args end]
    foreach ch $channels {
        puts $ch $str
    }
}
Examples:
# print on stdout only
multiputs "1"
# print on stderr only
multiputs stderr "2"
set brieflog [open brief.log w]
set fulllog [open detailed.log w]
# print on stdout and in the log files
multiputs stdout $brieflog $fulllog "3"
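If you would rather not touch the existing puts calls at all, another common trick (a sketch, untested, Tcl 8.5+ for {*}; ::logchan is an assumed global holding your open log channel) is to rename the built-in puts and wrap it:

rename puts _puts
proc puts {args} {
    # Forward everything to the real puts.
    _puts {*}$args
    # Mirror plain one-argument calls (puts "string") to the log;
    # calls naming a channel or using -nonewline are left alone.
    if {[llength $args] == 1} {
        _puts $::logchan [lindex $args 0]
    }
}

Do the rename only once; a second rename would fail because _puts already exists.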
This isn't something I've used extensively, but it seems to work (Tcl 8.6+ only):
You need the tcl::transform::observe channel-transform package (part of tcllib):
package require tcl::transform::observe
Open a log file for writing and set buffering to none:
set f [open log.txt w]
chan configure $f -buffering none
Register stdout as a receiver:
set c [::tcl::transform::observe $f stdout {}]
Anything written to the channel $c will now go to both the log file and stdout.
puts $c foobar
Note that it would seem to make more sense to have the channel transformation on top of stdout, with the channel to the log file as receiver, but I haven't been able to make that work.
Documentation: chan, open, package, puts, set, tcl::transform::observe (package).
I have a question regarding named pipes in Tcl.
First I created the pipe with mkfifo:
mkfifo foo
Then I execute the following Tcl script:
set fifo [open "foo" r]
fconfigure $fifo -blocking 1
proc read_fifo {} {
    global fifo
    puts "calling read_fifo"
    gets $fifo x
    puts "x is $x"
}
puts "before file event"
fileevent $fifo readable read_fifo
puts "after file event"
When I run the Tcl script it just waits, without outputting anything.
Then, when I write to the fifo:
echo "hello" > foo
Now, the Tcl script prints:
before file event
after file event
Why is the 'read_fifo' callback not getting triggered here?
Could anyone help me understand this behaviour?
fileevent relies on the event loop, which you never enter.
fileevent just tells Tcl to call read_fifo when the channel becomes readable.
If you want blocking IO, then just call gets. This blocks until an entire line has been read.
set fifo [open "foo" r]
fconfigure $fifo -blocking 1
gets $fifo x
puts "x is $x"
If you want it event-driven, you need fileevent together with non-blocking IO, and you have to enter the event loop (e.g. with vwait forever).
set fifo [open "foo" r]
fconfigure $fifo -blocking 0
proc read_fifo {fifo} {
    puts "calling read_fifo"
    if {[gets $fifo x] < 0} {
        if {[eof $fifo]} {
            # Do some cleanup here.
            close $fifo
        }
        return   ;# no complete line yet (or eof): nothing to print
    }
    puts "x is $x"
}
fileevent $fifo readable [list read_fifo $fifo]
vwait forever; # enter the event loop
Don't mix event-driven style with blocking IO; that does not really work.
Note that in Tk you don't have to call vwait at all: the event loop is already running, and re-entering it with a nested vwait is considered bad practice.
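One more wrinkle: as fifo(7) (quoted earlier for the controller example) says, the plain open "foo" r itself blocks until some process opens the FIFO for writing, which is why your script printed nothing until the echo. A sketch of the non-blocking variant:

set fifo [open "foo" {RDONLY NONBLOCK}]
fconfigure $fifo -blocking 0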
I am calling a proc through fileevent. That proc returns a line of data; how do I receive this data?
I wrote the following code to receive data from a pipe whenever data is available. I don't want to block with a direct gets.
proc GetData1 { chan } {
    if {[gets $chan line] >= 0} {
        return $line
    }
}
proc ReadIO {chan {timeout 2000} } {
    set x 0
    after $timeout {set x 1}
    fconfigure $chan -blocking 0
    fileevent $chan readable [ list GetData1 $chan ]
    after cancel set x 3
    vwait x
    # Do something based on how the vwait finished...
    switch $x {
        1 { puts "Time out" }
        2 { puts "Got Data" }
        3 { puts "App Cancel" }
        default { puts "Time out2 x=$x" }
    }
    # How to return data from here which is returned from GetData1
}
ReadIO $io 5000
# ** How do I get the data returned from GetData1 here? **
There are probably as many ways of doing this as there are Tcl programmers. Essentially, you shouldn't use return to pass the data back from your fileevent handler: the handler isn't called in the usual way, so there is no caller for you to return the value to.
Here are a few possible approaches.
Disclaimer: none of these is tested, and I'm prone to typing mistakes, so treat them with a little care!
1) Get your fileevent handler to write to a global variable:
proc GetData1 {chan} {
    if {[gets $chan line] >= 0} {
        append ::globalLine $line \n
    }
}
.
.
.
ReadIO $io 5000
# ** The input line is in globalLine in the global namespace **
2) Pass the name of a global variable to your fileevent handler, and save the data there
proc GetData2 {chan varByName} {
    if {[gets $chan line] >= 0} {
        upvar #0 $varByName var
        append var $line \n
    }
}
fileevent $chan readable [list GetData2 $chan inputFromChan]
.
.
.
ReadIO $chan 5000
# ** The input line is in ::inputFromChan **
A good choice for the variable here might be an array element indexed by $chan, e.g. fileevent $chan readable [list GetData2 $chan input($chan)]; this works because upvar #0 accepts array elements.
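Spelled out (a sketch; the input array name is an assumption, giving each channel its own buffer):

fileevent $chan readable [list GetData2 $chan input($chan)]
ReadIO $chan 5000
# The accumulated input for this channel:
puts $::input($chan)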
3) Define some kind of class to look after your channels, stashing the data away internally and offering a method to return it (Tcl 8.6+ for oo::class):
oo::class create fileeventHandler {
    variable m_buffer m_chan
    constructor {chan} {
        set m_chan $chan
        fileevent $m_chan readable [list [self object] handle]
        set m_buffer {}
    }
    method data {} {
        return $m_buffer
    }
    method handle {} {
        if {[gets $m_chan line] >= 0} {
            append m_buffer $line \n
        }
    }
}
.
.
.
set handlers($chan) [fileeventHandler new $chan]; # save the object name for later use
ReadIO $chan 5000
# Access the input via [$handlers($chan) data]
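For completeness, here is one way to wire approach 1 into a fixed-up ReadIO (a sketch, untested; it uses fully qualified ::x and ::globalLine so that the timer, the handler, the vwait, and the final test all see the same variables, unlike the proc-local x in the question's code):

proc GetData1 {chan} {
    if {[gets $chan line] >= 0} {
        append ::globalLine $line \n
        set ::x 2   ;# data arrived: wake the vwait in ReadIO
    }
}
proc ReadIO {chan {timeout 2000}} {
    set ::x 0
    set ::globalLine {}
    set id [after $timeout {set ::x 1}]   ;# timeout path
    fconfigure $chan -blocking 0
    fileevent $chan readable [list GetData1 $chan]
    vwait ::x                        ;# wait for data or the timeout
    after cancel $id
    fileevent $chan readable {}      ;# detach the handler
    if {$::x == 1} { puts "Time out" } else { puts "Got Data" }
    return $::globalLine
}
set data [ReadIO $io 5000]
# $data now holds whatever GetData1 gathered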