Tcl / Expect script driven by named pipe blocks/buffers output unexpectedly

I am trying to write an expect script that reacts to input read from a pipe. Consider this example in the file "controller.sh":
#!/usr/bin/env expect
spawn bash --noprofile --norc
set timeout 3
set success 0
send "PS1='Prompt: '\r"
expect {
    "Prompt: " { set success 1 }
}
if { $success != 1 } { exit 1 }

proc do { cmd } {
    puts "Got command: $cmd"
    set success 0
    set timeout 3
    send "$cmd\r"
    expect {
        "Prompt: " { set success 1 }
    }
    if { $success != 1 } { puts "oops" }
}

set cpipe [open "$::env(CMDPIPE)" r]
fconfigure $cpipe -blocking 0

proc read_command {} {
    global cpipe
    if {[gets $cpipe cmd] < 0} {
        close $cpipe
        set cpipe [open "$::env(CMDPIPE)" r]
        fconfigure $cpipe -blocking 0
        fileevent $cpipe readable read_command
    } else {
        if { $cmd == "exit" } {
            exp_close
            exp_wait
            exit 0
        } elseif { $cmd == "ls" } {
            do ls
        } elseif { $cmd == "pwd" } {
            do pwd
        }
    }
}

fileevent $cpipe readable read_command
vwait forever
Suppose you do:
export CMDPIPE=~/.cmdpipe
mkfifo $CMDPIPE
./controller.sh
Now, from another terminal try:
export CMDPIPE=~/.cmdpipe
echo ls >> ${CMDPIPE}
echo pwd >> ${CMDPIPE}
In the first terminal, the "Got command: ls/pwd" lines are printed immediately, as soon as you press Enter on each echo command, but there is no output from the spawned bash shell (no file listing or current directory). Now try it once more:
echo ls >> ${CMDPIPE}
Suddenly the output from the first two commands appears, but the output of the third command (the second ls) is not visible. Keep going and you will notice that there is a "lag" in the displayed output, which seems to be buffered and then dumped all at once later.
Why is this happening and how can I fix it?

According to fifo(7):
Normally, opening the FIFO blocks until the other end is opened also.
So, in the proc read_command, the script blocks on set cpipe [open "$::env(CMDPIPE)" r] when it reopens the FIFO, and it does not get a chance to display the spawned process's output until you echo ... >> ${CMDPIPE} again.
To work around this, open the FIFO (named pipe) in non-blocking mode:
set cpipe [open "$::env(CMDPIPE)" {RDONLY NONBLOCK}]
This is also mentioned in fifo(7):
A process can open a FIFO in nonblocking mode. In this case, opening for read-only will succeed even if no one has opened on the write side yet ...
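To make the distinction concrete (my illustration, not from the original answer): fconfigure -blocking 0 can only act on a channel that already exists, while with a FIFO it is the open call itself that stalls, which is why the NONBLOCK flag has to go on the open:
# This blocks inside [open] until some writer opens the FIFO,
# so a following [fconfigure ... -blocking 0] runs too late to help:
set cpipe [open "$::env(CMDPIPE)" r]

# This returns immediately even if no writer has opened the FIFO yet:
set cpipe [open "$::env(CMDPIPE)" {RDONLY NONBLOCK}]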
The following is a simplified version of your code, and it works fine for me (tested on Debian 9.6).
spawn bash --norc
set timeout -1
expect -re {bash-[.0-9]+[#$] $}
send "PS1='P''rompt: '\r"
#         ^^^^ the string is split so that the echoed command line itself
#              cannot match the "Prompt: " pattern
expect "Prompt: "

proc do { cmd } {
    send "$cmd\r"
    if { $cmd == "exit" } {
        expect eof
        exit
    } else {
        expect "Prompt: "
    }
}

proc read_command {} {
    global cpipe
    if {[gets $cpipe cmd] < 0} {
        close $cpipe
        set cpipe [open cpipe {RDONLY NONBLOCK}]
        fileevent $cpipe readable read_command
    } else {
        do $cmd
    }
}

set cpipe [open cpipe {RDONLY NONBLOCK}]   ;# here the FIFO is a local file named "cpipe"
fileevent $cpipe readable read_command
vwait forever

Related

How to print newly updated lines in tcl

I need to print the lines which are newly added to a file.
My code looks as follows:
proc dd {} {
    global line_number
    set line_number 0
    set a [open "pkg.v" r]
    #global count
    while {[gets $a line]>=0} {
        incr line_number
        global count
        set count [.fr.lst2 size]
        puts "enter $count"
        if {[eof $a]} {
            #.fr.lst2 insert end "$line"
            # set count [.fr.lst2 size]
            close $a
        } elseif {$count > 0} {
            .fr.lst2 delete 0 end
            if {$count+1} {
                .fr.lst2 insert end "$line"
                puts "i am $count"
            }
        } else {
            .fr.lst2 insert end "$line"
            puts "i am not"
        }
    }
    puts "$count"
}
Assuming we're talking about lines written to the end of a log file on any Unix system (Linux, OSX, etc.), then it's trivially done with the help of tail:
# Make the pipeline to read from 'tail -f'; easy stuff!
set mypipe [open |[list tail -f $theLogfile]]
# Make the pipe be non-blocking; usually a good idea for anything advanced
fconfigure $mypipe -blocking 0
# Handle data being available by calling a procedure which will read it.
# The procedure takes two arguments, and we use [list] to build the callback
# script itself (Good Practice in Tcl coding)
fileevent $mypipe readable [list processLine $mypipe .fr.lst2]

proc processLine {pipeline widget} {
    if {[gets $pipeline line] >= 0} {
        # This is probably too simplistic for you; adapt as necessary
        $widget insert end $line
    } elseif {[eof $pipeline]} { # Check for EOF *after* [gets] fails!
        close $pipeline
    }
}
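For completeness, here is a minimal harness this could plug into, together with the processLine proc above; this is a sketch, where the widget path .fr.lst2 comes from the question and the file name mylog.txt is a placeholder:
package require Tk

frame .fr
listbox .fr.lst2 -width 80 -height 20
pack .fr -fill both -expand 1
pack .fr.lst2 -fill both -expand 1

set theLogfile mylog.txt   ;# placeholder: the log file being appended to
set mypipe [open |[list tail -f $theLogfile]]
fconfigure $mypipe -blocking 0
fileevent $mypipe readable [list processLine $mypipe .fr.lst2]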

fileevent and after in same event loop

To parse a log file I want to do something like this:
tail the file
after some time, write parsed data and do other things
Here is my (sample) script
#!/usr/bin/tclsh
proc readfile {fd} {
    while {[gets $fd line] >= 0} {
        puts $line
    }
}
proc writefile {} {
    puts xxx
    flush stdout
}
if {$::argc > 0} {
    set fd [open [list | tail --follow=name --retry [lindex $argv 0] 2>@1]]
} else {
    set fd stdin
}
after 3000 writefile
fileevent $fd readable [list readfile $fd]
vwait done
close $fd
Tailing works fine but the script for after isn't triggered.
Any idea what I'm doing wrong?
In the readfile proc, you are using a while loop over a blocking channel: once all buffered lines are consumed, the final gets call blocks waiting for more input, so control never returns to the event loop and the after script never gets triggered.
#!/usr/bin/tclsh
proc readfile {fd} {
    global done
    puts "READ FILE CALLED..."
    gets $fd line   ;# removed the 'while' loop here
    puts "->>>>$line<<<<<<<<<"
    ### Your condition to exit the event handler ####
    ### set done 1; #### set 'done' to 1 when that condition holds, ####
    ### so that [vwait done] returns and the script can exit ####
}
proc writefile {} {
    puts "WRITE FILE CALLED"
    puts xxx
    flush stdout
}
if {$::argc > 0} {
    set fd [open [list | tail --follow=name --retry [lindex $argv 0] 2>@1]]
} else {
    set fd stdin
}
after 3000 writefile
fileevent $fd readable [list readfile $fd]
vwait done
close $fd
Output:
dinesh@dinesh-VirtualBox:~/pgms/tcl$ ./ubi.tcl
WRITE FILE CALLED
xxx
ubi
READ FILE CALLED...
->>>>ubi<<<<<<<<<
cool
READ FILE CALLED...
->>>>cool<<<<<<<<<
working
READ FILE CALLED...
->>>>working <<<<<<<<<
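As a footnote (my addition, not part of the original answer): you can also keep the while loop if you first make the channel non-blocking; gets then returns -1 as soon as no complete line is buffered, and control falls back to the event loop instead of hanging:
fconfigure $fd -blocking 0
proc readfile {fd} {
    # Drain every complete line that is currently buffered
    while {[gets $fd line] >= 0} {
        puts "->>>>$line<<<<<<<<<"
    }
    if {[eof $fd]} {
        set ::done 1   ;# tail exited: let [vwait done] return
    }
}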

Cannot access variable within expect_background

I have this code that starts a process, expects some startup output, and then logs the rest to a file:
proc foo { } {
    set log_fp [open "somefile" a]
    exec cp $prog "$prog.elf"
    spawn someprog
    set someprog_spawn_id $spawn_id
    # do some things here that wait for output from someprog
    expect {
        -i $someprog_spawn_id
        -re "Some output indicating successful startup"
    }
    # send the process into the background
    expect_background {
        -i $someprog_spawn_id
        full_buffer { }
        eof {
            wait -i $someprog_spawn_id
            close $log_fp
        }
        -re {^.*\n} {
            puts $log_fp $expect_out(buffer)
        }
    }
}
Unfortunately, this errors with the message:
can't read "log_fp": no such variable
How can I access this variable within this scope?
The expect_background callback scripts are evaluated in the global scope (because the procedure may well have finished at the point when they fire) so you have to put the variable in that scope as well…
proc foo { } {
    global log_fp
    set log_fp [open "somefile" a]
    # ...
Alternatively, with Tcl 8.5 you can do some tricks with apply to make a binding:
expect_background "
    -i \$someprog_spawn_id
    full_buffer { }
    [list eof [list apply {log_fp {
        wait -i $someprog_spawn_id
        close $log_fp
    }} $log_fp]]
    [list -re {^.*\n} [list apply {log_fp {
        puts $log_fp $expect_out(buffer)
    }} $log_fp]]
"
Really ugly though. Using a global variable is a lot easier.
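If a bare global feels untidy, one middle ground (my sketch, not from the original answer) is a namespaced variable referenced by its fully qualified name; qualified names resolve identically from the global scope in which expect_background runs its actions:
namespace eval ::logger { variable fp }

proc foo { } {
    set ::logger::fp [open "somefile" a]
    spawn someprog
    expect_background {
        eof {
            close $::logger::fp
        }
        -re {^.*\n} {
            puts $::logger::fp $expect_out(buffer)
        }
    }
}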

How to get the complete output from expect when the internal buffer size of expect_out(buffer) is exceeded?

I don't know what's happening, but I am not getting the complete output from the remote command being executed, possibly because expect's internal buffer is being exceeded.
proc SendCommands { Commands } {
    global prompt log errlog
    foreach element [split $Commands ";"] {
        expect {
            -re $prompt {send -- "$element\r"}
        }
        set outcome "$expect_out(buffer)"
        foreach line [split $outcome \n] {
            foreach word [regexp -all -inline {\S+} $line] {
                if {( [string index [string trimleft $line " "] 0 ] == "%")} {
                    puts "$outcome"
                    exit 1
                }
            }
        }
        puts "$outcome"
    }
}
set timeout 30
foreach host [ split $hosts "\;" ] {
    spawn ssh -o "StrictHostKeyChecking no" "$username@$host"
    match_max 10000000
    expect {
        timeout { send_user "\nFailed to get password prompt\n"; exit 1 }
        eof { send_user "\nSSH failure for $host\n"; exit 1 }
        "*?assword:*" {
            send -- "$password\r"
        }
    }
    expect {
        timeout { send_user "\nLogin incorrect\n"; exit 1 }
        eof { send_user "\nSSH failure for $host\n"; exit 1 }
        -re "$prompt" { send -- "\r" }
    }
    set timeout 300
    SendCommands "$Commands"
}
This is how I am executing it:
./sendcommand aehj SWEs-elmPCI-B-01.tellus comnet1 "terminal length 0;show int description" "(.*#)$"
I am getting the complete output only when I remove log_user 0, but when I use the puts command in the function SendCommands above, I get about 90 percent of it, with the trailing 10 percent not shown.
One way I am thinking of is to use a negated regex in expect, but it doesn't seem to work:
expect {
    -re ! $prompt
    {puts $expect_outcome(buffer)}
}
EDIT: I do get all the output, but only about once in every 5 or 7 executions.
After a little searching I came up with this, and it seems to work, but let me know of any exceptions or better answers.
I set match_max to 1000, then:
expect {
    -re $prompt {send -- "$element\r"}
    full_buffer {
        append outcome $expect_out(buffer)
        exp_continue
    }
}
append outcome $expect_out(buffer)
puts $outcome
But still, when I set match_max to 10000 or 100, it fails again.
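The accumulation pattern in that snippet can be wrapped into a helper. Here is a sketch (the proc name run_and_collect is mine) that collects everything printed between sending a command and seeing the prompt, however small match_max is:
proc run_and_collect { cmd prompt } {
    set outcome ""
    send -- "$cmd\r"
    expect {
        full_buffer {
            # Buffer filled before the prompt appeared: stash it and keep reading
            append outcome $expect_out(buffer)
            exp_continue
        }
        -re $prompt {
            # Prompt seen: grab whatever was buffered along with it
            append outcome $expect_out(buffer)
        }
        timeout { error "timed out waiting for prompt" }
    }
    return $outcome
}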

TCL gets command with kind of -nohang option?

Here is some code which implements an interactive Tcl session with the command prompt MyShell >.
puts -nonewline stdout "MyShell > "
flush stdout
catch { eval [gets stdin] } got
if { $got ne "" } {
    puts stderr $got
}
This code prints the MyShell > prompt at the terminal and waits for the Enter key to be hit; until it is hit, the code does nothing. That is what the gets command does.
What I need is some alternative to the gets command, say coolget. The coolget command should not wait for the Enter key, but register some handler to be called when it is hit, and just continue the execution. The desired code should look like this:
proc evaluate { string } \
{
    catch { eval $string } got
    if { $got ne "" } {
        puts stderr $got
    }
}
puts -nonewline stdout "MyShell > "
flush stdout
coolget stdin evaluate; # this command should not wait for the Enter key
# here goes some code which is to be executed before the Enter key is hit
Here is what I needed:
proc prompt { } \
{
    puts -nonewline stdout "MyShell > "
    flush stdout
}
proc process { } \
{
    catch { uplevel #0 [gets stdin] } got
    if { $got ne "" } {
        puts stderr $got
        flush stderr
    }
    prompt
}
fileevent stdin readable process
prompt
while { true } { update; after 100 }
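One small refinement to that (my suggestion): the final polling loop can be replaced by vwait, which sleeps inside the event loop instead of waking up every 100 ms:
fileevent stdin readable process
prompt
vwait forever   ;# services file events without polling; returns only if 'forever' is ever set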
I think you need to look at the fileevent, fconfigure and vwait commands. Using these you can do something like the following:
proc GetData {chan} {
    if {[gets $chan line] >= 0} {
        puts -nonewline "Read data: "
        puts $line
    }
}
fconfigure stdin -blocking 0 -buffering line -translation crlf
fileevent stdin readable [list GetData stdin]
vwait x
This code registers GetData as the readable file event handler for stdin, so whenever there is data available to be read it gets called.
Tcl applies “nohang”-like functionality to the whole channel, and it's done by configuring the channel to be non-blocking. After that, any read will return only the data that is there, gets will only return complete lines that are available without waiting, and puts (on a writable channel) will arrange for its output to be sent to the OS asynchronously. This depends on the event loop being operational.
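One subtlety worth spelling out (my illustration): on a non-blocking channel, gets returns -1 both when a full line has not arrived yet and at end of file, so check eof (or fblocked) to tell the cases apart:
if {[gets $chan line] < 0} {
    if {[eof $chan]} {
        close $chan      ;# genuine end of file
    } elseif {[fblocked $chan]} {
        # no complete line available yet: wait for the next readable event
    }
}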
The recommended approach is to use a non-blocking channel together with a registered file event handler; combining the two implements your coolget idea:
proc coolget {channel callback} {
    fileevent $channel readable [list apply {{ch cb} {
        if {[gets $ch line] >= 0} {
            uplevel [lappend cb $line]
        } elseif {[eof $ch]} {
            # Remove handler at EOF: important!
            fileevent $ch readable {}
        }
    }} $channel $callback]
}
That will then work just fine, except that you've got to call either vwait or update to process events (unless you've got Tk in use too; Tk is special) as Tcl won't process things magically in the background; magical background processing causes more trouble than it's worth…
If you're getting deeply tangled in asynchronous event handling, consider using Tcl 8.6's coroutines to restructure the code. In particular, code like Coronet can help a lot. However, that is very strongly dependent on Tcl 8.6, as earlier Tcl implementations can't support coroutines at all; the low-level implementation had to be rewritten from simple C calls to continuations to enable those features, and that's not backport-able with reasonable effort.
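For a flavour of what that looks like, here is a minimal coroutine-based line reader; this is my sketch, requires Tcl 8.6, and is not taken from Coronet:
proc readline {chan} {
    # Resume this coroutine whenever the channel becomes readable
    fileevent $chan readable [list [info coroutine]]
    while {[gets $chan line] < 0} {
        if {[eof $chan]} {
            fileevent $chan readable {}
            return -code error "end of file"
        }
        yield   ;# hand control back to the event loop until more data arrives
    }
    fileevent $chan readable {}
    return $line
}

coroutine main apply {{} {
    fconfigure stdin -blocking 0
    while 1 {
        if {[catch {readline stdin} line]} break   ;# stop at EOF
        puts "got: $line"
    }
}}
vwait forever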