I have a GUI built in Tcl/Tk that has a button which will tar up a directory.
The directory can be very large so I don't want to lock the GUI up while it waits.
To achieve this, I am using open for a command pipe and vwait, however the GUI still becomes unresponsive while the tar is running. This is my code:
set ::compress_result 0
set pipe [open "|$tar_executable -cf $folder_to_tar.tar $folder_to_tar" r+]
fileevent $pipe readable "set ::compress_result [gets $pipe line]"
vwait ::compress_result
set return_value $::compress_result
unset ::compress_result
close $pipe
Why does this still block the Tcl Event Loop and lock up the GUI?
The key problem you've got is this line:
fileevent $pipe readable "set ::compress_result [gets $pipe line]"
This reads a line from the pipe immediately, because the [gets …] is substituted at once inside the double-quoted string. Changing it to this:
fileevent $pipe readable {set ::compress_result [gets $pipe line]}
makes things work, since it postpones the reading from the pipe until the pipe actually becomes readable. However, it relies on the pipe variable being global. It's actually better to do this:
fileevent $pipe readable [list apply {pipe {
    global compress_result
    set compress_result [gets $pipe line]
}} $pipe]
That is pretty ugly and awkward to debug, so in practice we use a helper procedure instead:
proc pipedone {pipe} {
    global compress_result
    set compress_result [gets $pipe line]
}
fileevent $pipe readable [list pipedone $pipe]
The use of list here means "quote this as a runnable script for later", taking care of any unexpected trickiness in the variable's value. It knows how to quote things properly so you don't have to.
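To see why that matters, here's a small, hypothetical illustration (the channel value is made up): list quotes an argument containing spaces so the generated callback script stays one well-formed command.

```tcl
# Hypothetical value, just to show the quoting behaviour.
set chan {file42 with spaces}

# Plain string interpolation builds a script of four words:
puts "pipedone $chan"        ;# prints: pipedone file42 with spaces

# [list] builds a script whose single argument survives intact:
puts [list pipedone $chan]   ;# prints: pipedone {file42 with spaces}
```

When the first form is later evaluated as a callback, pipedone would be called with three arguments instead of one.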
In Tcl 8.6, you'd be better using a coroutine.
coroutine piperead apply {{tar folder} {
    # Open the pipe
    set pipe [open |[list $tar -cf $folder.tar $folder] "r"]
    # Wait until readable
    fileevent $pipe readable [info coroutine]
    yield
    # Read and close
    set return_result [gets $pipe line]
    close $pipe
    return $return_result
}} $tar_executable $folder_to_tar
Adding
fconfigure $pipe -blocking false
after opening the pipe will help.
How do I save the cursor position into a variable in Tcl?
The best I can come up with is:
set tty [open /dev/tty w]
puts $tty "\033\[6n"
close $tty
set position [read stdin]
puts $position
but I cannot capture the output.
I have also tried using ::term::ansi::send::qcp from Tcllib but get the same problem.
(FYI I tried to model the above TCL on a PHP answer from How to get cursor position with PHP-CLI?).
It's tricky to get this right, as we need to read a partial line and it's not yielded by the terminal implementation immediately. After experimenting a bit, the most reliable approach that I found was this:
proc readbits {{chan stdin}} {
    set s ""
    while {![eof $chan]} {
        set c [read $chan 1]
        append s $c
        if {$c eq "R"} break
    }
    return $s
}
proc getcur {} {
    puts -nonewline "\u001b\[6n"
    flush stdout
    scan [readbits] "\u001b\[%d;%dR"
}
# Must be in raw+noecho mode for this to work!
exec stty raw -echo
puts [getcur]
# Restore normal mode
exec stty -raw echo
In Tcl 8.7, you can use: fconfigure stdin -mode raw and fconfigure stdin -mode normal in place of those exec stty calls.
Hi, I am using this piece of code for working with a pipe in Tcl. Can anybody please help me understand when the condition [gets $pipe line] >= 0 fails?
For example, it fails only when [gets $pipe line] returns a negative number.
In my case it never returns a negative number and the TestEngine hangs forever.
set pipeline [open "|Certify.exe filename" "r+"]
fileevent $pipeline readable [list handlePipeReadable $pipeline]
fconfigure $pipeline -blocking 0
proc handlePipeReadable {pipe} {
    if {[gets $pipe line] >= 0} {
        # Managed to actually read a line; stored in $line now
    } elseif {[eof $pipe]} {
        # Pipeline was closed; get exit code, etc.
        if {[catch {close $pipe} msg opt]} {
            set exitinfo [dict get $opt -errorcode]
        } else {
            # Successful termination
            set exitinfo ""
        }
        # Stop the waiting in [vwait], below
        set ::donepipe $pipe
    } else {
        puts ""
        # Partial read; things will be properly buffered up for now...
    }
}
vwait ::donepipe
The gets command (when given a variable to receive the line) returns a negative number when it is in a minor error condition. There are two such conditions:
When the channel has reached end-of-file. After the gets the eof command (applied to the channel) will report a true value in this case.
When the channel is blocked, i.e., when it has some bytes but not a complete line (Tcl has internal buffering to handle this; you can get the number of pending bytes with chan pending). You only see this when the channel is in non-blocking mode (because otherwise the gets will wait indefinitely). In this case, the fblocked command (applied to the channel) will return true.
Major error conditions (such as the channel being closed) result in Tcl errors.
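Putting those cases together, a readable-handler on a non-blocking channel can tell the three situations apart like this (a sketch; the handler name is made up, and the final line assumes $chan is an already-opened non-blocking channel):

```tcl
proc onReadable {chan} {
    if {[gets $chan line] >= 0} {
        # Case 1: a complete line was read into $line.
        puts "line: $line"
    } elseif {[eof $chan]} {
        # Case 2: end-of-file; clean up the channel.
        close $chan
    } elseif {[fblocked $chan]} {
        # Case 3: only a partial line is buffered; wait for more data.
        puts "pending bytes: [chan pending input $chan]"
    }
}
fileevent $chan readable [list onReadable $chan]
```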
If the other command only produces partial output or does something weird with buffering, you can get an eternally blocked pipeline. It's more likely with a bidirectional pipe, such as you're using, as the Certify command is probably waiting for you to close the other end. Can you use it read-only? There are many complexities to interacting correctly with a process bidirectionally! (For example, you probably want to make the pipe's output buffering mode be unbuffered, fconfigure $pipeline -buffering none.)
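If the Certify process doesn't need anything written to its stdin, a read-only pipeline sidesteps the deadlock where the child sits waiting for input that never comes. A sketch, reusing the handlePipeReadable handler from the question:

```tcl
# Read-only pipe: our end is not connected to the child's stdin,
# so the child cannot block waiting for input from us.
set pipeline [open |[list Certify.exe filename] "r"]
fconfigure $pipeline -blocking 0
fileevent $pipeline readable [list handlePipeReadable $pipeline]
vwait ::donepipe
```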
Please find the way the Certify process is being triggered from the command prompt; the print statements are given just for understanding. At the end the process hangs and control is not transferred back to Tcl.
From the documentation for gets:
If varName is specified and an empty string is returned in varName because of end-of-file or because of insufficient data in nonblocking mode, then the return count is -1.
Your script is working completely fine. I checked with set pipeline [open "|du /usr" "r+"]
instead of your pipe and included puts "Line: $line" to check the result. So it's clear that there is some problem in the Certify command. Can you share your command, how you use it on the terminal, and how you used it with exec?
################### edited by Drektz
set pipeline [open "|du /usr" "r+"]
fileevent $pipeline readable [list handlePipeReadable $pipeline]
fconfigure $pipeline -blocking 0
proc handlePipeReadable {pipe} {
    if {[gets $pipe line] >= 0} {
        # Managed to actually read a line; stored in $line now
        ################### included by Drektz
        puts "Line: $line"
    } elseif {[eof $pipe]} {
        # Pipeline was closed; get exit code, etc.
        if {[catch {close $pipe} msg opt]} {
            set exitinfo [dict get $opt -errorcode]
        } else {
            # Successful termination
            set exitinfo ""
        }
        # Stop the waiting in [vwait], below
        set ::donepipe $pipe
    } else {
        puts ""
        # Partial read; things will be properly buffered up for now...
    }
}
vwait ::donepipe
vwait ::donepipe
You can see it in the CMDwith& screenshot mentioned. Is there any workaround I can use to overcome the issue?
Please see the issue when it is run with the help of exec executable args &.
I need LIST outside of the proc for further processing. But puts $LIST shows an error message: no such variable.
I have also tried upvar #0 LIST LIST instead of global, with the same result.
I suspect the troublemaker is calling the proc with "list ...". If I omit "list" when calling the proc, the global command does what it should, but of course the code as a whole isn't working properly anymore.
proc receiver {chan} {
    global LIST
    set data [gets $chan]
    set LIST [split $data ,]
}
puts $LIST
set chan [open com5 r]
fconfigure $chan -mode "9600,n,8,1" -blocking 1 -buffering none -translation binary
fileevent $chan readable [list receiver $chan]
How can I get access to LIST in the global space outside of proc?
The problem is partly that the variable hasn't been written at all yet when the puts command is called, and partly that you are not yet actually working asynchronously.
You need to wait for something to arrive before you can print the variable out. The vwait command is ideal for this (as it runs the Tcl event loop while waiting). We can tell it to wait for the (global) LIST variable to be written to: when it has been, we can safely read it.
proc receiver {chan} {
    global LIST
    set data [gets $chan]
    set LIST [split $data ","]
}
set chan [open com5 r]
fconfigure $chan -mode "9600,n,8,1" -blocking 1 -buffering none -translation binary
fileevent $chan readable [list receiver $chan]
vwait LIST
puts $LIST
I think you must declare LIST as global in the root namespace:
proc receiver {chan} {
    global LIST
    set data [gets $chan]
    set LIST [split $data ,]
}
global LIST
puts $LIST
set chan [open com5 r]
fconfigure $chan -mode "9600,n,8,1" -blocking 1 -buffering none -translation binary
fileevent $chan readable [list receiver $chan]
It's almost OK, but the variables incoming over the serial port are updated every second.
If that is the requirement, then:
(1) Have the value returned from the channel be printed in the callback proc: receiver
(2) Enter the event loop just once, w/o binding to a global variable List or the like.
proc receiver {chan} {
    set data [gets $chan]
    puts [split $data ","]; # (1) print the return value
}
set chan [open com5 r]
fconfigure $chan -mode "9600,n,8,1" -blocking 1 -buffering none -translation binary
fileevent $chan readable [list receiver $chan]
vwait forever
puts "Quitted from event loop ..."
This will enter an event loop bound to an undefined variable forever, which is never set from within your script. So it will not quit unless you stop the executable (e.g., tclsh), or unless you provide an explicit ending condition, e.g.:
proc receiver {chan} {
    global counter
    set data [gets $chan]
    puts [split $data ","]; # (1) print the return value
    if {[incr counter] == 5} {
        global forever
        set forever 1; # (2) Exit the event loop, after receiver has been called 5 times
    }
}
Currently I am firing following command
set pid [exec make &]
set term_id [wait pid]
The first command executes the makefile inside Tcl, and the second command waits for the first command's makefile operation to complete. The first command displays all the makefile's logs on stdout. Is it possible to store all the logs in a variable or a file when "&" is given as the last argument of exec, using redirection or some other method?
If "&" is not given, then we can capture the output using
set log [exec make]
But if "&" is given, then the command returns a process id:
set pid [exec make &]
So is it possible to stop the stdout logs and put them in a variable?
If you are using Tcl 8.6, you can capture the output using:
lassign [chan pipe] reader writer
set pid [exec make >@$writer &]
close $writer
Don't forget to read from the $reader or the subprocess will stall. Be aware that when used in this way, the output will be delivered fully-buffered, though this is more important when doing interactive work. If you want the output echoed to standard out as well, you will need to make your script do that. Here's a simple reader handler.
while {[gets $reader line] >= 0} {
    lappend log $line
    puts $line
}
close $reader
Before Tcl 8.6, your best bet would be to create a subprocess command pipeline:
set reader [open |make]
If you need the PID, this can become a bit more complicated:
set reader [open |[list /bin/sh -c {echo $$; exec make}]]
set pid [gets $reader]
Yes, that's pretty ugly…
[EDIT]: You're using Tk, in Tcl 8.5 (so you need the open |… pipeline form from above), and so you want to keep the event loop going. That's fine. That's exactly what fileevent is for, but you have to think asynchronously.
# This code assumes that you've opened the pipeline already
fileevent $reader readable [list ReadALine $reader]
proc ReadALine {channel} {
    if {[gets $channel line] >= 0} {
        HandleLine $line
    } else {
        # No line could be read; must be at the end
        close $channel
    }
}
proc HandleLine {line} {
    global log
    lappend log $line; # Or insert it into the GUI or whatever
    puts $line
}
This example does not use non-blocking I/O. That might cause an issue, but probably won't. If it does cause a problem, use this:
fconfigure $reader -blocking 0
fileevent $reader readable [list ReadALine $reader]
proc ReadALine {channel} {
    if {[gets $channel line] >= 0} {
        HandleLine $line
    } elseif {[eof $channel]} {
        close $channel
    }
}
proc HandleLine {line} {
    global log
    lappend log $line
    puts $line
}
More complex and versatile versions are possible, but they're only really necessary once you're dealing with untrusted channels (e.g., public server sockets).
If you'd been using 8.6, you could have used coroutines to make this code look more similar to the straight-line code I used earlier, but they're strictly an 8.6-and-later feature, as they depend on the stack-free execution engine.
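For comparison, here is a rough 8.6 coroutine sketch of the same reader (untested; the coroutine name is illustrative). The coroutine parks itself with yield whenever the pipe has nothing ready, so the event loop stays responsive:

```tcl
coroutine logreader apply {{} {
    global log
    set reader [open |make "r"]
    fconfigure $reader -blocking 0
    # The readable event resumes this coroutine.
    fileevent $reader readable [info coroutine]
    while 1 {
        yield
        # Drain all complete lines currently buffered.
        while {[gets $reader line] >= 0} {
            lappend log $line
            puts $line
        }
        if {[eof $reader]} break
    }
    close $reader
}}
```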
I have a question regarding named pipes in tcl.
First I created the pipe with mkfifo:
mkfifo foo
Then execute the following tcl script:
set fifo [open "foo" r]
fconfigure $fifo -blocking 1
proc read_fifo {} {
    global fifo
    puts "calling read_fifo"
    gets $fifo x
    puts "x is $x"
}
puts "before file event"
fileevent $fifo readable read_fifo
puts "after file event"
When I run the Tcl script, it waits for an event without outputting anything.
Then, when I write to the fifo:
echo "hello" > foo
Now the Tcl script prints out:
before file event
after file event
Why is the 'read_fifo' function not getting triggered here?
Could anyone help me understand this behaviour?
fileevent relies on the event loop, which you don't enter.
fileevent just tells Tcl to call read_fifo when the channel becomes readable.
If you want blocking IO, then just call gets. This blocks until an entire line has been read.
set fifo [open "foo" r]
fconfigure $fifo -blocking 1
gets $fifo x
puts "x is $x"
If you want to do it event-driven, you need fileevent and non-blocking I/O, and you have to enter the event loop (e.g. with vwait forever).
set fifo [open "foo" r]
fconfigure $fifo -blocking 0
proc read_fifo {fifo} {
    puts "calling read_fifo"
    if {[gets $fifo x] < 0} {
        if {[eof $fifo]} {
            # Do some cleanup here, then stop: the channel is gone.
            close $fifo
            return
        }
    }
    puts "x is $x"
}
fileevent $fifo readable [list read_fifo $fifo]
vwait forever; # enter the event loop
Don't mix event-driven with blocking IO. This does not really work.
Note that you don't have to call vwait in Tk, doing so would reenter the event-loop, which is considered bad practice.