I have this code that starts a process, expects some startup output, and then logs the rest to a file:
proc foo { } {
    set log_fp [open "somefile" a]
    exec cp $prog "$prog.elf"
    spawn someprog
    set someprog_spawn_id $spawn_id
    # do some things here that wait for output from someprog
    expect {
        -i $someprog_spawn_id
        -re "Some output indicating successful startup"
    }
    # send the process into the background
    expect_background {
        -i $someprog_spawn_id
        full_buffer { }
        eof {
            wait -i $someprog_spawn_id
            close $log_fp
        }
        -re {^.*\n} {
            puts $log_fp $expect_out(buffer)
        }
    }
}
Unfortunately, this errors with the message:
can't read "log_fp": no such variable
How can I access this variable within this scope?
The expect_background callback scripts are evaluated in the global scope (because the procedure may well have finished by the time they fire), so you have to put the variable in that scope as well…
proc foo { } {
    global log_fp
    set log_fp [open "somefile" a]
    # ...
Alternatively, with Tcl 8.5 you can do some tricks with apply to make a binding:
expect_background "
-i \$someprog_spawn_id
full_buffer { }
[list eof [list apply {log_fp {
wait -i $someprog_spawn_id
close $log_fp
}} $log_fp]]
[list -re {^.*\n} [list apply {log_fp {
puts $log_fp $expect_out(buffer)
}} $log_fp]]
"
Really ugly though. Using a global variable is a lot easier.
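If you'd rather skip the global declaration, fully-qualified names work too. Note that the spawn id needs the same treatment, since the eof action also runs at global scope when it fires (a sketch based on the question's code, untested):
proc foo { } {
    # Fully-qualified names resolve to globals from any scope
    set ::log_fp [open "somefile" a]
    spawn someprog
    # The eof action below also runs at global scope, so the spawn id
    # must be reachable from there as well
    set ::someprog_spawn_id $spawn_id
    expect {
        -i $::someprog_spawn_id
        -re "Some output indicating successful startup"
    }
    expect_background {
        -i $::someprog_spawn_id
        full_buffer { }
        eof {
            wait -i $::someprog_spawn_id
            close $::log_fp
        }
        -re {^.*\n} {
            puts $::log_fp $expect_out(buffer)
        }
    }
}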
I have the following expect script.
This is test.exp
#!/usr/bin/expect

# exp_internal 1
# log_file -noappend ~/expect.log
# Use `send_log` to print to log file

set timeout 30

set bold [exec tput bold]
set red [exec tput setaf 1]
set green [exec tput setaf 2]
set normal [exec tput sgr0]

proc test_label {value} {
    upvar bold bold
    upvar normal normal
    puts "Running ${bold}${value}${normal}…"
}

proc test_send {value} {
    sleep 0.1
    send "$value"
}

proc test_failed {} {
    upvar bold bold
    upvar red red
    upvar normal normal
    sleep 0.1
    puts "${bold}${red}Failed${normal}"
    exit 1
}

proc test_ok {{force_close false}} {
    upvar bold bold
    upvar green green
    upvar normal normal
    sleep 0.1
    puts "${bold}${green}OK${normal}"
    if {$force_close} {
        close
    }
}

expect_before {
    default {
        test_failed
    }
}
This is electrum.exp
#!/usr/bin/expect

source ./test.exp

test_label "Should create Electrum mnemonic"

spawn qr-backup.sh --create-electrum-mnemonic

expect {
    -re {Format USB flash drive \(y or n\)\?} {
        test_send "n\r"
    }
}
expect {
    -re {\[sudo\] password for pi:} {
        test_send "$env(password)\r"
    }
}
expect {
    -re {Creating Electrum mnemonic…}
}
expect {
    -re {([a-z]+ ?){24}} {
        test_ok true
    }
}
Why doesn’t the script fail when the last line of output from spawn qr-backup.sh --create-electrum-mnemonic is electrum: error: unrecognized arguments: --nbits 264?
Figured it out! Solved by adding an eof pattern:
expect {
    -re {([a-z]+ ?){24}} {
        test_ok true
    }
    eof {
        test_failed
    }
}
Note this from the expect man page:
expect_before [expect_args]
Unless overridden by a -i flag, expect_before patterns match against the spawn id defined at the time that the expect_before command was executed (not when its pattern is matched).
(emphasis mine)
No spawn id was active when the expect_before command was executed, so its default pattern was never attached to the process spawned later.
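Given that, a minimal fix (a sketch, untested) is to issue expect_before again in electrum.exp after the spawn, once there is a spawn id for it to bind to:
spawn qr-backup.sh --create-electrum-mnemonic

# Re-arm the default (timeout/eof) handler now that a spawn id exists
expect_before {
    default {
        test_failed
    }
}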
I am trying to write an expect script that reacts to input read from a pipe. Consider this example in the file "controller.sh":
#!/usr/bin/env expect

spawn bash --noprofile --norc

set timeout 3
set success 0

send "PS1='Prompt: '\r"
expect {
    "Prompt: " { set success 1 }
}
if { $success != 1 } { exit 1 }

proc do { cmd } {
    puts "Got command: $cmd"
    set success 0
    set timeout 3
    send "$cmd\r"
    expect {
        "Prompt: " { set success 1 }
    }
    if { $success != 1 } { puts "oops" }
}

set cpipe [open "$::env(CMDPIPE)" r]
fconfigure $cpipe -blocking 0

proc read_command {} {
    global cpipe
    if {[gets $cpipe cmd] < 0} {
        close $cpipe
        set cpipe [open "$::env(CMDPIPE)" r]
        fconfigure $cpipe -blocking 0
        fileevent $cpipe readable read_command
    } else {
        if { $cmd == "exit" } {
            exp_close
            exp_wait
            exit 0
        } elseif { $cmd == "ls" } {
            do ls
        } elseif { $cmd == "pwd" } {
            do pwd
        }
    }
}

fileevent $cpipe readable read_command
vwait forever
Suppose you do:
export CMDPIPE=~/.cmdpipe
mkfifo $CMDPIPE
./controller.sh
Now, from another terminal try:
export CMDPIPE=~/.cmdpipe
echo ls >> ${CMDPIPE}
echo pwd >> ${CMDPIPE}
In the first terminal the "Got command: ls/pwd" lines are printed immediately, as soon as you press Enter on each echo command, but there is no output from the spawned bash shell (no file listing and no current directory). Now, try it once more:
echo ls >> ${CMDPIPE}
Suddenly the output from the first two commands appears, but the third command (the second ls) is not visible. Keep going and you will notice a "lag" in the displayed output, which seems to be buffered and then dumped all at once later.
Why is this happening and how can I fix it?
According to fifo(7):
Normally, opening the FIFO blocks until the other end is opened also.
So, in the proc read_command, it blocks on set cpipe [open "$::env(CMDPIPE)" r] and does not get the chance to display the spawned process's output until you echo ... >> ${CMDPIPE} again.
To work around this, you can open the FIFO (named pipe) in non-blocking mode:
set cpipe [open "$::env(CMDPIPE)" {RDONLY NONBLOCK}]
This is also mentioned in fifo(7):
A process can open a FIFO in nonblocking mode. In this case, opening for read-only will succeed even if no one has opened on the write side yet ...
The following is a simplified version of your code; it works fine for me (tested on Debian 9.6).
spawn bash --norc
set timeout -1
expect -re {bash-[.0-9]+[#$] $}

send "PS1='P''rompt: '\r"
#          ^^^^ split so the echoed command itself cannot match "Prompt: "
expect "Prompt: "

proc do { cmd } {
    send "$cmd\r"
    if { $cmd == "exit" } {
        expect eof
        exit
    } else {
        expect "Prompt: "
    }
}

proc read_command {} {
    global cpipe
    if {[gets $cpipe cmd] < 0} {
        close $cpipe
        set cpipe [open cpipe {RDONLY NONBLOCK}]
        fileevent $cpipe readable read_command
    } else {
        do $cmd
    }
}

set cpipe [open cpipe {RDONLY NONBLOCK}]
fileevent $cpipe readable read_command
vwait forever
I am trying to execute a program which takes some options and a txt file as input. So I tried this:
set myExecutable [file join $::env(path_to_the_program) bin executable_name]

if { ![file exists $myExecutable] } {
    puts "error"
}
if { ![file executable $myExecutable] } {
    puts "error"
}

set arguments [list -option1 -option2]
set status [catch { exec $myExecutable $arguments $txtFileName } output]
if { $status != 0 } {
    puts "output = $output"
}
So it prints:
output = Usage: executable_name -option1 -option2 <txt_file_name>
child process exited abnormally
You didn't actually pass the options to your executable as separate words: $arguments goes to exec as a single argument, so the program sees "-option1 -option2" as one string (hence the usage message). Try:
set status [catch {exec $myExecutable -option1 -option2 $txtFileName} output]
or if you prefer to keep the arguments in a list:
set status [catch {exec $myExecutable {*}$arguments} output]
where the {*} syntax causes the list to be expanded in place. In Tcl versions before 8.5, when this syntax was added, you would use:
set status [catch {eval exec [list $myExecutable] $arguments} output]
where the eval command unwraps the lists so that exec sees a single flat set of arguments. The extra [list] around $myExecutable protects its contents from being treated as a list during that round of interpretation.
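Putting it together with the question's variables, and keeping the file name as a separate word, the 8.5-style call would look like this:
set arguments [list -option1 -option2]
# {*} splats the list, so exec sees -option1 and -option2 as two
# separate words instead of one word containing a space
set status [catch { exec $myExecutable {*}$arguments $txtFileName } output]
if { $status != 0 } {
    puts "output = $output"
}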
I'm automating some work with expect, and have something like the following:
# procedure to set background and after patterns
proc foo {} {
    expect_after {
        -re $regex1 {puts "Matched regex1"; send $command_to_run; exp_continue}
        timeout {exp_continue}
    }
    expect_background {
        -re $regex2 {do other things; exp_continue}
        -re $regex3 {and more different things; exp_continue}
        timeout {exp_continue}
    }
}

spawn $thing
foo

expect_user {
    -ex "stahp" {exit}
}
This hangs indefinitely after the expect_after pattern is matched (and the associated body is run). However, if I move the expect_after and expect_background patterns out of the procedure, it runs as I, well, expected.
Why does it behave differently when put in a procedure?
Thanks to glenn jackman for the idea! It seems that when called from inside a procedure, expect_after, expect_background, and probably expect_before don't just pick up the spawn_id from the global scope; they need it specified explicitly.
This works:
proc foo {} {
    namespace eval global {
        expect_after {
            -i $spawn_id -re $regex1 {do things}
        }
        expect_background {
            -i $spawn_id -re $regex2 {do more different things}
            -i $spawn_id ...
        }
    }
}
If anyone can explain why it needs -i $spawn_id that would be great, but here's a fix for anyone running into the same problem. Adding a global spawn_id should also work, but I ended up using this as I have about 5-6 variables, half of which I modify in foo.
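For reference, the global spawn_id variant mentioned above would look something like this (a sketch using the question's placeholder patterns, untested):
proc foo {} {
    # Pull the spawn_id that [spawn] set at global level into this
    # proc's scope so the expect_* commands can find it implicitly
    global spawn_id
    expect_after {
        -re $regex1 {puts "Matched regex1"; send $command_to_run; exp_continue}
        timeout {exp_continue}
    }
    expect_background {
        -re $regex2 {do other things; exp_continue}
        -re $regex3 {and more different things; exp_continue}
    }
}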
Here is some code which implements an interactive Tcl session with the command prompt MyShell >.
puts -nonewline stdout "MyShell > "
flush stdout
catch { eval [gets stdin] } got
if { $got ne "" } {
    puts stderr $got
}
This code prints the MyShell > prompt at the terminal and waits for the Enter key to be hit; until it is hit, the code does nothing. This is what the gets command does.
What I need is some alternative to the gets command, say coolget. The coolget command should not wait for the Enter key; instead it should register some callback to be invoked when a line is available, and just continue execution. The desired code would look like this:
proc evaluate { string } {
    catch { eval $string } got
    if { $got ne "" } {
        puts stderr $got
    }
}

puts -nonewline stdout "MyShell > "
flush stdout
coolget stdin evaluate ;# this command should not wait for the Enter key
# here goes some code which is to be executed before Enter is hit
Here is what I needed:
proc prompt { } {
    puts -nonewline stdout "MyShell > "
    flush stdout
}

proc process { } {
    catch { uplevel #0 [gets stdin] } got
    if { $got ne "" } {
        puts stderr $got
        flush stderr
    }
    prompt
}

fileevent stdin readable process
prompt
while { true } { update; after 100 }
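Incidentally, the polling loop on the last line can be replaced with a plain event loop; vwait (covered in the next answer) blocks inside the event loop without waking up every 100 ms:
fileevent stdin readable process
prompt
vwait forever ;# sleeps in the event loop; process fires on input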
I think you need to look at the fileevent, fconfigure and vwait commands. Using these you can do something like the following:
proc GetData {chan} {
    if {[gets $chan line] >= 0} {
        puts -nonewline "Read data: "
        puts $line
    }
}

fconfigure stdin -blocking 0 -buffering line -translation crlf
fileevent stdin readable [list GetData stdin]
vwait x
This code registers GetData as the readable file event handler for stdin, so whenever there is data available to be read it gets called.
Tcl applies “nohang”-like functionality to the whole channel, and it's done by configuring the channel to be non-blocking. After that, any read will return only the data that is there, gets will only return complete lines that are available without waiting, and puts (on a writable channel) will arrange for its output to be sent to the OS asynchronously. This depends on the event loop being operational.
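For example (a sketch; $chan stands for any readable channel):
# Put the channel into non-blocking mode
fconfigure $chan -blocking 0
# gets now returns -1 instead of waiting when no complete line is
# buffered; eof and fblocked distinguish the two reasons
if {[gets $chan line] >= 0} {
    puts "got a whole line: $line"
} elseif {[eof $chan]} {
    close $chan
} elseif {[fblocked $chan]} {
    # no complete line yet; wait for the next readable event
}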
The recommended approach is a non-blocking channel with a registered file event handler; you can combine the two to implement your coolget idea:
proc coolget {channel callback} {
    fileevent $channel readable [list apply {{ch cb} {
        if {[gets $ch line] >= 0} {
            uplevel [lappend cb $line]
        } elseif {[eof $ch]} {
            # Remove handler at EOF: important!
            fileevent $ch readable {}
        }
    }} $channel $callback]
}
That will then work just fine, except that you've got to call either vwait or update to process events (unless you've got Tk in use too; Tk is special) as Tcl won't process things magically in the background; magical background processing causes more trouble than it's worth…
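For instance, wiring coolget up to the evaluate proc from the question (a sketch):
# Non-blocking stdin, as recommended above
fconfigure stdin -blocking 0 -buffering line
puts -nonewline stdout "MyShell > "
flush stdout
coolget stdin evaluate
vwait forever ;# the callback fires from inside the event loop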
If you're getting deeply tangled in asynchronous event handling, consider using Tcl 8.6's coroutines to restructure the code. In particular, code like Coronet can help a lot. However, that is very strongly dependent on Tcl 8.6, as earlier Tcl implementations can't support coroutines at all; the low-level implementation had to be rewritten from simple C calls to continuations to enable those features, and that's not backport-able with reasonable effort.
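As a taste of the coroutine style (a minimal sketch for Tcl 8.6+, without Coronet; the reader/main names are mine):
proc reader {chan} {
    fconfigure $chan -blocking 0 -buffering line
    # Resume this coroutine whenever the channel becomes readable
    fileevent $chan readable [list [info coroutine]]
    while {1} {
        yield
        # Drain all complete lines that are currently buffered
        while {[gets $chan line] >= 0} {
            puts "Read: $line"
        }
        if {[eof $chan]} break
    }
    fileevent $chan readable {}
}
coroutine main reader stdin
vwait forever
The body reads top to bottom even though it is driven entirely by the event loop, which is exactly the restructuring that coroutines buy you.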