I'm trying to telnet to multiple servers with spawn, and I want to log the output of each in a separate file.
If I use spawn with log_file, everything is logged into the same file, but I want each session logged to a different file. How can I do this?
Expect's logging support (i.e., what the log_file command controls) doesn't let you set different logging destinations for different spawn IDs. This means that the simplest mechanism for doing what you want is to run each of the expect sessions in a separate process, which shouldn't be too hard provided you don't use the interact command. (The idea of needing to interact with multiple remote sessions at once is a bit strange! By the time you've made it sensible by grafting in something like the screen program, you might as well be using separate expect scripts anyway.)
In the simplest case, your outer script can be just:
foreach host {foo.example.com bar.example.com grill.example.com} {
    exec expect myExpectScript.tcl $host >@stdout 2>@stderr &
}
(The >@stdout 2>@stderr & means “run in the background with stdout and stderr connected to the usual overall destinations”.)
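The myExpectScript.tcl referenced above is not shown; as a minimal sketch (the script name and the per-host log file name are assumptions), each process can simply call log_file on its own file, which sidesteps the shared-destination problem entirely:
#!/usr/bin/expect
# myExpectScript.tcl -- one telnet session, one log file per host (sketch)
set host [lindex $argv 0]
log_file "$host.log"   ;# this logging destination is private to this process
spawn telnet $host
expect "login:"
# ... authenticate and do the real work here ...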
Things get quite a bit more complicated if you want to automatically hand information back and forth between the expect sessions. I hope that simple is good enough…
I found something useful at this link:
http://www.highwind.se/?p=116
LogScript.tcl
#!/usr/bin/tclsh8.5
package require Expect
proc log_by_trace {array element op} {
    uplevel {
        global logfile
        set file $logfile($expect_out(spawn_id))
        puts -nonewline $file $expect_out(buffer)
    }
}
array set spawns {}
array set logfile {}
# Spawn 1
spawn ./p1.sh
set spawns(one) $spawn_id
set logfile($spawn_id) [open "./log1" w]
# Spawn 2
spawn ./p2.sh
set spawns(two) $spawn_id
set logfile($spawn_id) [open "./log2" w]
trace add variable expect_out(buffer) write log_by_trace
proc flush_logs {} {
    global expect_out
    global spawns
    set timeout 1
    foreach {alias spawn_id} [array get spawns] {
        expect {
            -re ".+" {exp_continue -continue_timer}
            default { }
        }
    }
}
exit -onexit flush_logs
set timeout 5
expect {
    -i $spawns(one) "P1:2" {puts "Spawn1 got 2"; exp_continue}
    -i $spawns(two) "P2:2" {puts "spawn2 got 2"; exp_continue}
}
p1.sh
#!/bin/bash
i=0
while sleep 1; do
    echo P1:$i
    let i++
done
p2.sh
#!/bin/bash
i=0
while sleep 1; do
    echo P2:$i
    let i++
done
It is working perfectly :)
I have a Raspberry Pi image running via a qemu emulator, which I interact with via expect.
I'm trying to capture the output from a particular command within the emulator, and save it to a file on the host.
Being a beginner with Tcl, I read through the manual and had a go at this. The "test.out" file is created but contains only a newline, while "Hello world!" appears on the console.
spawn qemu-system-arm --serial mon:stdio ...
expect {
    "login:" { send "pi\r" }
}
expect {
    "Password:" { send "raspberry\r" }
}
expect "pi#raspberrypi"
set ftty [exp_open -leaveopen]
set fsignature [open "test.out" w]
send "echo 'Hello world!'\r"
puts $fsignature [gets $ftty]
expect "pi#raspberrypi"
send "sudo shutdown now\r"
wait
I'm not familiar with exp_open. I would normally recommend something like this to capture command output:
set prompt {pi#raspberrypi}
set cmd {echo 'hello world'}
send "$cmd\r"
expect -re "$cmd\r\n(.*)\r\n$prompt"
puts $fsignature $expect_out(1,string)
Extracting command output can be tricky, because the sent command is (typically) displayed and is included in the expect output. This assumes that your specified prompt appears first in its line.
This answer was very useful in finding a solution.
However, for long outputs you need to account for the buffer filling up.
set fd [open "test.out" w]
send "cat large_output\r"
expect {
    -re {cat large_output[\r\n]+} { log_user 0; exp_continue }
    -ex "\n" { puts -nonewline $fd $expect_out(buffer); exp_continue }
    -re $prompt { log_user 1; close $fd }
}
If the line length can exceed the buffer size then something more complicated is needed.
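One simple mitigation (just a sketch) is to enlarge Expect's match buffer with match_max before sending the command, since the default is only a couple of kilobytes:
# Allow matches of up to ~100 kB; pick a size larger than the longest
# line the command can produce
match_max 100000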
For some reason, the line endings come out as \r\r\n, but that can be fixed with sed.
sed -i 's/\r//g' test.out
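If you would rather avoid the separate sed pass, a variation of the loop above (an untested sketch) strips the carriage returns in Tcl as each chunk is written:
set fd [open "test.out" w]
send "cat large_output\r"
expect {
    -re {cat large_output[\r\n]+} { log_user 0; exp_continue }
    -ex "\n" {
        # Drop \r characters so the log ends up with plain \n line endings
        puts -nonewline $fd [string map [list \r ""] $expect_out(buffer)]
        exp_continue
    }
    -re $prompt { log_user 1; close $fd }
}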
I have a case where a Tcl script runs a process which does fork(), leaves the forked process running, and then the main process exits. You can try it out simply by running any program that forks into the background, for example gvim, provided that it is configured to run in the background after execution: set res [exec gvim].
In theory the main process exits immediately and the child process keeps running in the background, but somehow the main process doesn't go away and lingers as a zombie (reported as <defunct> in ps output).
In my case the process I'm starting prints something; I want that output, and I want to recognize that the process has exited so I can consider it done. The problem is that if I spawn the process using open "|gvim" r, I cannot recognize the moment when the process has finished. The channel returned by [open] never reports [eof], even when the program has turned into a zombie, and when I try to [read], just to collect everything the process might print, it hangs completely.
What is more interesting, occasionally both the main process and the forked process print something, and when I try to read it using [gets], I get both. If I close the descriptor too early, [close] throws an exception due to the broken pipe. That is probably also why [read] never ends.
I need some method of recognizing the moment when the main process exits, even though it may have spawned a child process; that child may be completely detached, and I'm not interested in what it does. I want whatever the main process prints before exiting, and the script should continue its work while the background process keeps running, whatever happens to it.
I have control over the sources of the process I'm starting. And yes, I did call signal(SIGCLD, SIG_IGN) before fork() - it didn't help.
Tcl clears up zombies from background process calls the next time it calls exec. Since a zombie really doesn't use much resources (just an entry in the process table; there's nothing else there really) there isn't a particular hurry to clean them up.
The problem you were having with the pipeline was that you'd not put it in non-blocking mode. To detect exit of a pipeline, you're best off using a fileevent which will fire when either there's a byte (or more) to read from the pipe or when the other end of the pipe is closed. To distinguish these cases, you have to actually try to read, and that can block if you over-read and you're not in non-blocking mode. However, Tcl makes working with non-blocking mode easy.
set pipeline [open |… "r"]
fileevent $pipeline readable [list handlePipeReadable $pipeline]
fconfigure $pipeline -blocking false

proc handlePipeReadable {pipe} {
    if {[gets $pipe line] >= 0} {
        # Managed to actually read a line; stored in $line now
    } elseif {[eof $pipe]} {
        # Pipeline was closed; get exit code, etc.
        if {[catch {close $pipe} msg opt]} {
            set exitinfo [dict get $opt -errorcode]
        } else {
            # Successful termination
            set exitinfo ""
        }
        # Stop the waiting in [vwait], below
        set ::donepipe $pipeline
    } else {
        # Partial read; things will be properly buffered up for now...
    }
}

vwait ::donepipe
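If you need the numeric exit code, for a normal nonzero exit the -errorcode retrieved above is (as I understand it) a three-element list; a sketch of decoding it:
# For a nonzero exit, $exitinfo looks like: CHILDSTATUS <pid> <exitCode>
lassign $exitinfo tag pid exitCode
if {$tag eq "CHILDSTATUS"} {
    puts "process $pid exited with code $exitCode"
}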
Be aware that using gvim in a pipeline is… rather more complex than usual, as it is an application that users interact with.
You might find it easier to run a simple exec in a separate thread, provided your version of Tcl is thread-enabled and the Thread package is installed. (That ought to be the case if you're using 8.6, but I don't know if that's true.)
package require Thread

set runner [thread::create {
    proc run {caller targetVariable args} {
        set res [catch {
            exec {*}$args
        } msg opt]
        set callback [list set $targetVariable [list $res $msg $opt]]
        thread::send -async $caller $callback
    }
    thread::wait
}]

proc runInBackground {completionVariable args} {
    global runner
    thread::send -async $runner [list run [thread::id] $completionVariable {*}$args]
}
runInBackground resultsVar gvim …
# You can do other things at this point
# Wait until the variable is set (by callback); alternatively, use a variable trace
vwait resultsVar
# Process the results to extract the sense
lassign $resultsVar res msg opt
puts "code: $res"
puts "output: $msg"
puts "status dictionary: $opt"
For all that, for an editor like gvim I'd actually expect it to be run in the foreground (which doesn't require anything like as much complexity) since only one of them can really interact with a particular terminal at once.
Your daemon can also call setsid() and setpgrp() to start a new session and to detach from the process group. But these don't help with your problem either.
You will have to do some process management:
#!/usr/bin/tclsh

proc waitpid {pid} {
    # "kill -0" sends no signal; it just fails once the PID no longer exists
    set rc [catch {exec -- kill -0 $pid}]
    while { $rc == 0 } {
        set ::waitflag 0
        after 100 [list set ::waitflag 1]
        vwait ::waitflag
        set rc [catch {exec -- kill -0 $pid}]
    }
}

set pid [exec ./t1 &]
waitpid $pid
puts "exit tcl"
exit
Edit: Another unreasonable answer
If the forked child process closes the open channels, Tcl will not wait on it.
Test program:
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <signal.h>
int
main (int argc, char *argv [])
{
    int pid;
    FILE *o;

    signal (SIGCHLD, SIG_IGN);
    pid = fork ();
    if (pid == 0) {
        /* should also call setsid() and setpgrp() to daemonize */
        printf ("child\n");
        fclose (stdout);
        fclose (stderr);
        sleep (10);
        o = fopen ("/dev/tty", "w");
        fprintf (o, "child exit\n");
        fclose (o);
    } else {
        printf ("parent\n");
        sleep (2);
    }
    printf ("t1 exit %d\n", pid);
    return 0;
}
Test Tcl program:
#!/usr/bin/tclsh
puts [exec ./t1]
puts "exit tcl"
At first you say:
I need some method to recognize the moment when the main process exits, while this process could have spawned another child process, but this child process may be completely detached and I'm not interested what it does.
later on you say:
If the forked child process closes the open channels, Tcl will not wait on it.
These are two contradictory statements: on the one hand you are only interested in the parent process, and on the other in whether or not the child has finished, even though you also state you aren't interested in child processes that have detached. Last I heard, forking and closing the child's copies of the parent's stdin, stdout and stderr is detaching (i.e., daemonizing the child process). I wrote this quick program to run the simple C program included above, and as expected Tcl knows nothing of the child process. I called the compiled version of the program /tmp/compile/chuck. I did not have gvim, so I used emacs; but since emacs does not generate text, I wrap the exec in its own Tcl script and exec that. In both cases, the parent process is waited for and eof is detected. When the parent exits, Runner::getData runs and the cleanup is evaluated.
#!/bin/sh
exec /opt/usr8.6.3/bin/tclsh8.6 "$0" ${1+"$@"}
namespace eval Runner {
    variable close
    variable watch
    variable lastpid ""
    array set close {}
    array set watch {}

    proc run { program { message "" } } {
        variable watch
        variable close
        variable lastpid
        if { $message ne "" } {
            set fname "/tmp/[lindex $program 0]-[pid].tcl"
            set out [open $fname "w"]
            puts $out "#![info nameofexecutable]"
            puts $out "catch { exec $program } err"
            puts $out "puts \"\$err\n$message\""
            close $out
            file attributes $fname -permissions 00777
            set fd [open "|$fname" "r"]
            set close([pid $fd]) "file delete -force $fname"
        } else {
            set fd [open "|$program" "r"]
            set close([pid $fd]) "puts \"cleanup\""
        }
        fconfigure $fd -blocking 0 -buffering none
        fileevent $fd readable [list Runner::getData [pid $fd] $fd]
    }

    proc getData { pid chan } {
        variable watch
        variable close
        variable lastpid
        set data [read $chan]
        append watch($pid) "$data"
        if {[eof $chan]} {
            catch { close $chan }
            eval $close($pid) ; # cleanup
            set lastpid $pid
        }
    }
}
Runner::run /tmp/compile/chuck ""
Runner::run emacs " Emacs complete"
while { 1 } {
    vwait Runner::lastpid
    set p $Runner::lastpid
    catch { exec ps -ef | grep chuck } output
    puts "program with pid $p just ended"
    puts "$Runner::watch($p)"
    puts "processes that match chuck"
    puts "$output"
}
Output:
Note: I exited emacs after the child reported that it was exiting.
[user1#linuxrocks workspace]$ ./test.tcl
cleanup
program with pid 27667 just ended
child
parent
t1 exit 27670
processes that match chuck
avahi      936      1  0  2016 ?     00:04:35 avahi-daemon: running [linuxrocks.local]
admin    27992      1  0 19:37 pts/0 00:00:00 /tmp/compile/chuck
admin    28006  27988  0 19:37 pts/0 00:00:00 grep chuck
child exit
program with pid 27669 just ended
Emacs complete
Ok, I found the solution after a long discussion here:
https://groups.google.com/forum/#!topic/comp.lang.tcl/rtaTOC95NJ0
The below script demonstrates how this problem can be solved:
#!/usr/bin/tclsh
lassign [chan pipe] input output
chan configure $input -blocking no -buffering line ;# just for a case :)
puts "Running $argv..."
set ret [exec {*}$argv 2>@stderr >@$output]
puts "Waiting for finished process..."
set line [gets $input]
puts "FIRST LINE: $line"
puts "DONE. PROCESSES:"
puts [exec ps -ef | grep [lindex $argv 0]]
puts "EXITING."
The only problem that remains is that there's still no way to know when the process has exited; it just happens that the next [exec] (in this particular case probably the [exec ps ...] command) cleans up the zombie. There is no universal method for that; the best you can do on POSIX systems is [exec /bin/true]. In my case it was enough to get the one line that the parent process had to print, after which I can simply let it go.
Still, it would be nice if [exec] could somehow return the PID of the first process, and if there were a standard [wait] command that could block until the process exits or check its running state (such a command is currently available in TclX).
Note that [chan pipe] is available only in Tcl 8.6, you can use [pipe] from TclX alternatively.
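For pre-8.6 Tcl, a sketch of the TclX variant (assuming the TclX package is installed; called without arguments, its pipe command returns the read and write channels as a pair):
package require Tclx
# TclX's [pipe] hands back the read end and the write end of a new pipe
lassign [pipe] input output
chan configure $input -blocking no -buffering line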
Currently I am firing the following commands:
set pid [exec make &]
set term_id [wait pid]
The first command executes the makefile inside Tcl, and the second command waits for the first command's make run to complete. The first command displays all the logs of the makefile on stdout. Is it possible to store all the logs in a variable or a file when "&" is given as the last argument of exec, using redirection or some other method?
If "&" is not given, then we can capture the output using:
set log [exec make]
But if "&" is given then command will return process id,
set pid [exec make &]
So is it possible to suppress the stdout logs and put them in a variable?
If you are using Tcl 8.6, you can capture the output using:
lassign [chan pipe] reader writer
set pid [exec make >#$writer &]
close $writer
Don't forget to read from the $reader or the subprocess will stall. Be aware that when used in this way, the output will be delivered fully-buffered, though this is more important when doing interactive work. If you want the output echoed to standard out as well, you will need to make your script do that. Here's a simple reader handler.
while {[gets $reader line] >= 0} {
    lappend log $line
    puts $line
}
close $reader
Before Tcl 8.6, your best bet would be to create a subprocess command pipeline:
set reader [open |make]
If you need the PID, this can become a bit more complicated:
set reader [open |[list /bin/sh -c {echo $$; exec make}]]
set pid [gets $reader]
Yes, that's pretty ugly…
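A side benefit of the pipeline form is that close reports the subprocess's fate. As a sketch, collecting the output and the exit status together (close raises an error on a nonzero exit, with the details in the -errorcode of the options dictionary):
while {[gets $reader line] >= 0} {
    lappend log $line
    puts $line
}
if {[catch {close $reader} msg opt]} {
    # -errorcode holds e.g. CHILDSTATUS <pid> <exitCode>
    puts "make failed: [dict get $opt -errorcode]"
}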
[EDIT]: You're using Tk, in Tcl 8.5 (so you need the open |… pipeline form from above), and so you want to keep the event loop going. That's fine. That's exactly what fileevent is for, but you have to think asynchronously.
# This code assumes that you've opened the pipeline already
fileevent $reader readable [list ReadALine $reader]

proc ReadALine {channel} {
    if {[gets $channel line] >= 0} {
        HandleLine $line
    } else {
        # No line could be read; must be at the end
        close $channel
    }
}

proc HandleLine {line} {
    global log
    lappend log $line; # Or insert it into the GUI or whatever
    puts $line
}
This example does not use non-blocking I/O. That might cause an issue, but probably won't. If it does cause a problem, use this:
fconfigure $reader -blocking 0
fileevent $reader readable [list ReadALine $reader]

proc ReadALine {channel} {
    if {[gets $channel line] >= 0} {
        HandleLine $line
    } elseif {[eof $channel]} {
        close $channel
    }
}

proc HandleLine {line} {
    global log
    lappend log $line
    puts $line
}
More complex and versatile versions are possible, but they're only really necessary once you're dealing with untrusted channels (e.g., public server sockets).
If you'd been using 8.6, you could have used coroutines to make this code look more similar to the straight-line code I used earlier, but they're a feature that is strictly 8.6 (and later, once we do later versions) only as they depend on the stack-free execution engine.
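For the curious, a minimal sketch of that coroutine style (assuming the same $reader channel, configured non-blocking, and the HandleLine procedure from above):
coroutine lineReader apply {{chan} {
    # Resume this coroutine whenever the channel becomes readable
    fileevent $chan readable [info coroutine]
    while {true} {
        yield
        if {[gets $chan line] >= 0} {
            HandleLine $line
        } elseif {[eof $chan]} {
            close $chan
            return
        }
    }
}} $reader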
I am trying to deal with two processes, which have to run simultaneously in expect. However, I keep getting the message that one of those processes does not exist.
Here is a minimal (not) working example (I am not really working with ftp, but that's something that will run for other people):
#!/usr/bin/expect
set spawn_id_bash [spawn /bin/bash]
set spawn_id_ftp [spawn ftp ftp.ccc.de]
send "anonymous\n"
expect {
    "*Password*" {
        puts "\nftp works"
    }
    default {
        puts "\nftp defaulted"
    }
}
set spawn_id $spawn_id_bash
send "uname\n"
expect {
    "*Linux*" {
        puts "\nbash works"
    }
    default {
        puts "\nbash defaulted"
    }
}
Unfortunately, the output is:
[martin#martin linuxhome]$ /tmp/blub.tcl
spawn /bin/bash
spawn ftp ftp.ccc.de
anonymous
Trying 212.201.68.160...
Connected to ftp.ccc.de (212.201.68.160).
220-+-+-+-+-+-+-+-+-+
220-|o|b|s|o|l|e|t|e|
220-+-+-+-+-+-+-+-+-+
220-
220-
220-Please use HTTP instead:
220-
220-* http://cdn.media.ccc.de
220
Name (ftp.ccc.de:martin): 331 Please specify the password.
Password:ftp works
can not find channel named "4648"
while executing
"send "uname\n""
(file "/tmp/blub.tcl" line 19)
I have followed the book "Exploring Expect" while writing this example and I do not see what I do differently.
I also tried using send -i and expect -i without any luck (the error message is gone, but otherwise -i seems to be ignored).
spawn returns the unix process id (PID, an integer), not the spawn_id (a string). For example:
# cat foo.exp
send_user "[spawn -noecho sleep 1] $spawn_id\n"
expect eof
# expect foo.exp
20039 exp6
#
You should write like this:
spawn /bin/bash
set spawn_id_bash $spawn_id
spawn ftp ftp.ccc.de
set spawn_id_ftp $spawn_id
Then you can use expect -i and send -i.
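Applied to your example, a sketch of the corrected script (same hosts and patterns as yours):
#!/usr/bin/expect
spawn /bin/bash
set spawn_id_bash $spawn_id
spawn ftp ftp.ccc.de
set spawn_id_ftp $spawn_id

send -i $spawn_id_ftp "anonymous\n"
expect {
    -i $spawn_id_ftp "*Password*" { puts "\nftp works" }
    default { puts "\nftp defaulted" }
}

send -i $spawn_id_bash "uname\n"
expect {
    -i $spawn_id_bash "*Linux*" { puts "\nbash works" }
    default { puts "\nbash defaulted" }
}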
My TCL/Expect script:
foreach data $test url $urls {
    send "test app website 1\r"
    expect "*#"
    send "commit \r"
    expect "*#"
    after 2000
    exec cmd.exe /c start iexplore.exe $url
    after 3000
    exec "C:/WINDOWS/System32/taskkill.exe" /IM IEXPLORE.EXE /F
    send "show application stats $data\r"
    expect "*#"
    expect "?" {
        puts [open urllist.txt w] $expect_out(buffer)
    }
}
The above script is working fine except for storing the output of $expect_out(buffer) into a file.
I want to write the $expect_out(buffer) output to the "urllist.txt" file continuously.
Please suggest a way to achieve this.
Thanks in advance.
The following trace procedure writes the value of expect_out(buffer) to a file specific to the spawn id.
proc log_by_tracing {array element op} {
    uplevel {
        global logfile
        set file $logfile($expect_out(spawn_id))
        puts -nonewline $file $expect_out(buffer)
    }
}
The association between the spawn id and each log file is made in the array logfile which contains a pointer to the log file based on the spawn id. Such an association could be made with the following code when each process is spawned.
spawn <some_app_name_here>
set logfile($spawn_id) [open exp_buffer.log w]
The trace has to be added in the code as
trace variable expect_out(buffer) w log_by_tracing
Internally, the expect command saves the spawn_id element of expect_out after the X,string elements but before the buffer element of the expect_out array. For this reason, the trace must be triggered by the buffer element rather than the spawn_id element.
Note: if you don't need to deal with multiple spawned processes (say you are using only one spawned process, or none at all), there is a simpler way of doing the same thing.
Consider the following example.
proc log_by_tracing {array element op} {
    uplevel {
        puts -nonewline $file $expect_out(buffer)
    }
}

set file [open myfile.log w]
trace variable expect_out(buffer) w log_by_tracing

set timeout 60
expect {
    quit { exit 1 }
    timeout { exp_continue }
}
If you run the code, the program will keep running until you type 'quit' in the console, and everything you type will eventually be recorded in the file named 'myfile.log'.
You can simply add the proc log_by_tracing and the trace statement to your code. Remember that with this simpler approach, only one logging destination can be used for expect_out(buffer).
References: trace, uplevel & Exploring Expect