Handling multiple processes in Tcl/Expect

I am trying to deal with two processes which have to run simultaneously in Expect. However, I keep getting the message that one of those processes does not exist.
Here is a minimal (not) working example (I am not really working with ftp, but that's something that will run for other people):
#!/usr/bin/expect
set spawn_id_bash [spawn /bin/bash]
set spawn_id_ftp [spawn ftp ftp.ccc.de]
send "anonymous\n"
expect {
    "*Password*" {
        puts "\nftp works"
    }
    default {
        puts "\nftp defaulted"
    }
}
set spawn_id $spawn_id_bash
send "uname\n"
expect {
    "*Linux*" {
        puts "\nbash works"
    }
    default {
        puts "\nbash defaulted"
    }
}
Unfortunately, the output is:
[martin#martin linuxhome]$ /tmp/blub.tcl
spawn /bin/bash
spawn ftp ftp.ccc.de
anonymous
Trying 212.201.68.160...
Connected to ftp.ccc.de (212.201.68.160).
220-+-+-+-+-+-+-+-+-+
220-|o|b|s|o|l|e|t|e|
220-+-+-+-+-+-+-+-+-+
220-
220-
220-Please use HTTP instead:
220-
220-* http://cdn.media.ccc.de
220
Name (ftp.ccc.de:martin): 331 Please specify the password.
Password:ftp works
can not find channel named "4648"
while executing
"send "uname\n""
(file "/tmp/blub.tcl" line 19)
I followed the book "Exploring Expect" while writing this example and I do not see what I am doing differently.
I also tried using send -i and expect -i without any luck (the error message is gone, but otherwise -i seems to be ignored).

spawn returns the unix process id (PID, an integer), not the spawn_id (a string). For example:
# cat foo.exp
send_user "[spawn -noecho sleep 1] $spawn_id\n"
expect eof
# expect foo.exp
20039 exp6
#
You should write it like this:
spawn /bin/bash
set spawn_id_bash $spawn_id
spawn ftp ftp.ccc.de
set spawn_id_ftp $spawn_id
Then you can use expect -i and send -i.
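For example, a minimal sketch of how the question's script could be rewritten with the saved IDs (same host and patterns as above; not tested here):
#!/usr/bin/expect
spawn /bin/bash
set spawn_id_bash $spawn_id

spawn ftp ftp.ccc.de
set spawn_id_ftp $spawn_id

# talk to the ftp session explicitly
send -i $spawn_id_ftp "anonymous\n"
expect -i $spawn_id_ftp "*Password*" { puts "\nftp works" }

# talk to the bash session explicitly
send -i $spawn_id_bash "uname\n"
expect -i $spawn_id_bash "*Linux*" { puts "\nbash works" }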

Related

How to copy from spawned expect process into file?

I have a Raspberry Pi image running via a qemu emulator, which I interact with via expect.
I'm trying to capture the output from a particular command within the emulator, and save it to a file on the host.
Being a beginner with Tcl, I read through the manual and had a go at this. The "test.out" file is created but contains only a newline, while "Hello world!" appears on the console.
spawn qemu-system-arm --serial mon:stdio ...
expect {
    "login:" { send "pi\r" }
}
expect {
    "Password:" { send "raspberry\r" }
}
expect "pi@raspberrypi"
set ftty [exp_open -leaveopen]
set fsignature [open "test.out" w]
send "echo 'Hello world!'\r"
puts $fsignature [gets $ftty]
expect "pi#raspberrypi"
send "sudo shutdown now\r"
wait
I'm not familiar with exp_open. I would normally recommend something like this to capture command output:
set prompt {pi@raspberrypi}
set cmd {echo 'hello world'}
send "$cmd\r"
expect -re "$cmd\r\n(.*)\r\n$prompt"
puts $fsignature $expect_out(1,string)
Extracting command output can be tricky, because the sent command is (typically) displayed and is included in the expect output. This assumes that your specified prompt appears first in its line.
This answer was very useful in finding a solution.
However, for long outputs you need to account for the buffer filling up.
set fd [open "test.out" w]
send "cat large_output\r"
expect {
    -re {cat large_output[\r\n]+} { log_user 0; exp_continue }
    -ex "\n" { puts -nonewline $fd $expect_out(buffer); exp_continue }
    -re $prompt { log_user 1; close $fd }
}
If the line length can exceed the buffer size, then something more complicated is needed (one simple option is sketched after the sed command below).
For some reason the line endings come out as \r\r\n, but that can be fixed with sed.
sed -i 's/\r//g' test.out
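As an aside, if a single output line can be longer than expect's default match buffer, one simple mitigation (a sketch, independent of the capture loop above) is to enlarge the buffer first:
# raise the pattern match buffer size for the current spawned process
# (the default is only a couple of thousand bytes)
match_max 100000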

How to send more than 100 cmd lines

I have an expect (Tcl) script for an automated task that works properly - configuring network devices via telnet/ssh. In most cases there are 1, 2 or 3 command lines to execute, BUT now I have more than 100 command lines to send via expect. How can I achieve this in a smart, clean scripting way :)
I could join all 100+ command lines into one variable "commandAll" with "\n" and "send" them one after another, but I think that's pretty ugly :) Is there a way to keep them readable, in the code or in an external file, without stacking them together?
#!/usr/bin/expect -f
set timeout 20
set ip_address "[lrange $argv 0 0]"
set hostname "[lrange $argv 1 1]"
set is_ok ""
# Commands
set command1 "configure snmp info 1"
set command2 "configure ntp info 2"
set command3 "configure cdp info 3"
#... more than 100 different commands like this!
#... more than 100 different commands like this!
#... more than 100 different commands like this!
spawn telnet $ip_address
# login & Password & Get enable prompt
#-- snipped --#
# Commands execution
# command1
expect "$enableprompt" { send "$command1\r# endCmd1\r" ; set is_ok "command1" }
if {$is_ok != "command1"} {
send_user "\n### 9 Exit before executing command1\n" ; exit
}
# command2
expect "#endCmd1" { send "$command2\r# endCmd2\r" ; set is_ok "command2" }
if {$is_ok != "command2"} {
send_user "\n### 9 Exit before executing command2\n" ; exit
}
# command3
expect "#endCmd2" { send "$command3\r\r\r# endCmd3\r" ; set is_ok "command3" }
if {$is_ok != "command3"} {
send_user "\n### 9 Exit before executing command3\n" ; exit
}
P.S. I'm using this approach for checking whether a given command line executed successfully, but I'm not certain it's the perfect way :D
Don't use numbered variables; use a list:
set commands {
"configure snmp info 1"
"configure ntp info 2"
"configure cdp info 3"
...
}
If the commands are already in a file, you can read them into a list:
set fh [open commands.file]
set commands [split [read $fh] \n]
close $fh
Then, iterate over them:
expect $prompt
set n 0
foreach cmd $commands {
    send "$cmd\r"
    expect {
        "some error string" {
            send_user "command failed: ($n) $cmd"
            exit 1
        }
        timeout {
            send_user "command timed out: ($n) $cmd"
            exit 1
        }
        $prompt
    }
    incr n
}
While yes, you can send long sequences of commands that way, it's usually a bad idea as it makes the overall script very brittle; if anything unexpected happens, the script just keeps blindly pushing the rest of the commands through anyway. Instead, it is better to have a sequence of sends interspersed with expects to check that what you've sent has been accepted. The only real case for sending a very long string over is when you're creating a function or file on the other side that will act as a subprogram you call; in that case, there's no really meaningful place to stop and check for a prompt halfway. But that's the exception.
Note that you can expect two things at once; that's often very helpful as it lets you check for errors directly. I mention this because it is a technique often neglected, yet it allows you to make your script far more robust.
...
send "please run step 41\r"
expect {
    -re {error: (.*)} {
        puts stderr "a problem happened: $expect_out(1,string)"
        exit 1
    }
    "# " {
        # Got the prompt; continue with the next step below
    }
}
send "please run step 42\n"
...

Problems looping to prompt for another password

I need some help with an EXPECT script please....
I'm trying to automate a login, prior to accessing a load of hosts, and cater for when a user enters a password incorrectly. I am getting the username and password first, and then validating this against a particular host. If the password is invalid, I want to loop round and ask for the username and password again.
I am trying this :-
(preceding few irrelevant lines omitted)
while {1} {
    send_user "login as:- "
    expect -re "(.*)\n"
    send_user "\n"
    set user $expect_out(1,string)
    stty -echo
    send_user "password: "
    expect -re "(.*)\n"
    set password $expect_out(1,string)
    stty echo
    set host "some-box.here.there.co.uk"
    set hostname "some-box"
    set host_unknown 0
    spawn ssh $user@$host
    while {1} {
        expect {
            "Password:" {send $password\n
                break}
            "(yes/no)?" {send "yes\n"}
            "Name or service not known" {set host_unknown 1
                break}
        }
    }
    if {$host_unknown < 1} {
        expect {
            "$hostname#" {send "exit\r"
                break
            }
            "Password:" {send \003
                expect eof
                close $spawn_id
                puts "Invalid Username or Password - try again..."
            }
        }
    } elseif {$host_unknown > 0} {
        exit 0
    }
}
puts "dropped out of loop"
And now I can go off and do lots of stuff to lots of boxes .....
This works fine when I enter a valid username or password, and my script goes off and does all the other stuff I want, but when I enter an invalid password I get this :-
Fred@Aserver:~$ ./Ex_Test.sh ALL
login as:- MyID
password: spawn ssh MyID@some-box.here.there.co.uk
Password:
Password:
Invalid Username or Password - try again...
login as:- cannot find channel named "exp6"
while executing "expect -re "(.*)\n""
invoked from within "if {[lindex $argv 1] != ""} {
puts "Too many arguments"
puts "Usage is:- Ex_Test.sh host|ALL"
} elseif {[lindex $argv 0] != ""} {
while {1} {
..."
(file "./Ex_Test.sh" line 3)
It's the line 'can not find channel named "exp6"' which is really bugging me.
What am I doing wrong? I am reading Exploring Expect (Don Libes) but getting nowhere....
Whenever expect waits for some pattern, it saves the spawn_id it was reading from into expect_out(spawn_id).
As per your code, expect's spawn_id is generated when it encounters
expect -re "(.*)\n"
When the user types something and presses the Enter key, expect saves its spawn_id. If you run expect with debugging enabled (e.g. expect -d), you will see something like the following in the debug output:
expect: does "" (spawn_id exp0) match regular expression "(.*)\n"?
Let's say the user typed 'Simon'; then the debug output will be:
expect: does "Simon\n" (spawn_id exp0) match regular expression "(.*)\n"? Gate "*\n"? gate=yes re=yes
expect: set expect_out(0,string) "Simon\n"
expect: set expect_out(1,string) "Simon"
expect: set expect_out(spawn_id) "exp0"
expect: set expect_out(buffer) "Simon\n"
As you can see, expect_out(spawn_id) holds the spawn_id from which expect was reading. In this case, the term exp0 refers to standard input.
If the spawn command is used, then, as you know, the Tcl variable spawn_id holds a reference to the process handle, which is known as the spawn handle. We can play around with spawn_id by explicitly setting the process handle and saving it for future reference. This is the good part.
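For example, a minimal sketch of saving the handle and switching back to it later (the host name is hypothetical):
spawn ssh user@somehost
set ssh_id $spawn_id          ;# save the spawn handle for future reference

# ... spawn_id may be changed here by other spawns or assignments ...

set spawn_id $ssh_id          ;# restore the saved handle
send "uname\r"                ;# now talks to the ssh process again
expect "Linux"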
As per your code, you are closing the ssh connection when the wrong password is given, with the following code:
close $spawn_id
You are taking advantage of spawn_id to do this, but what you are missing is setting expect's process handle back to its original reference handle, i.e.
while {1} {
    ### Initial state: nothing present in the spawn_id variable ######
    expect "something here";    #### Now exp0 will be created
    ### some code here ####
    ## Spawning a process now ###
    spawn ssh xyz;              ## At this moment, spawn_id is updated
    ### doing some operations ###
    ### closing ssh on some condition ###
    close $spawn_id
    ## The loop is about to end and spawn_id still holds the reference to the ssh process.
    ## If anything is present in it, expect will assume that is the current process,
    ## so it will try to expect from that process.
}
When the loop executes for the 2nd time, expect will try to read from the spawn_id handle, which is nothing but the ssh process that has already been closed, which is why you are getting the error
can not find channel named "exp6"
Note that the "exp6" is nothing but the spawn handle for the ssh process.
Update:
If some process handle is available in spawn_id, then expect will always read from that process only.
Perhaps you can try something like the following to avoid these.
# Some reference variable
set expect_init_spawn_id 0
while {1} {
    if { $expect_init_spawn_id != 0 } {
        # When the loop enters from the 2nd iteration onwards,
        # spawn_id is explicitly set back to the initial 'exp0' handle
        set spawn_id $expect_init_spawn_id
    }
    expect -re "(.*)\n"
    # Saving the initial spawn id of the expect process
    # (it will have the value 'exp0')
    set expect_init_spawn_id $expect_out(spawn_id)
    spawn ssh xyz
    ## Manipulations here
    # closing ssh now
    close $spawn_id
}
This is my opinion and it may not be the efficient approach. You can also think of your own logic to handle these problems.
You simply need to store the $spawn_id as a temp variable before a nested expect command, then set the $spawn_id to the temp variable after a nested expect command.
Also, get rid of the while {1} loops. They are not needed because expect behaves like a loop provided you use exp_continue whenever you don't wish to exit. You don't need expect eof nor do you need close $spawn_id. I don't use them in the following example:
#!/usr/bin/expect
set domain [lindex $argv 0];
set timeout 300
spawn ./certbot-add.sh $domain
expect {
    "*replace the certificate*" {
        send "2\r"
        exp_continue
    }
    "*_acme-challenge*" {
        # write the captured challenge text to a file and close the channel
        set out [open output.txt w]
        puts $out $expect_out(buffer)
        close $out
        # remember the certbot session before spawning the nested process
        set tmp_spawn_id $spawn_id
        spawn ./acme-add.sh $domain
        expect "$ "
        # switch back to the certbot session
        set spawn_id $tmp_spawn_id
        send "\r"
        exp_continue
    }
    "*certificate expires on*" {
        puts "Certificate Added!"
    }
}

Spawn multiple telnet with tcl and log the output separately

I'm trying to telnet to multiple servers with spawn, and I want to log the output of each in a separate file.
If I use spawn with 'log_file' then everything is logged into the same file. But I want it in different files. How can I do this?
Expect's logging support (i.e., what the log_file command controls) doesn't let you set different logging destinations for different spawn IDs. This means that the simplest mechanism for doing what you want is to run each of the expect sessions in a separate process, which shouldn't be too hard provided you don't use the interact command. (The idea of needing to interact with multiple remote sessions at once is a bit strange! By the time you've made it sensible by grafting in something like the screen program, you might as well be using separate expect scripts anyway.)
In the simplest case, your outer script can be just:
foreach host {foo.example.com bar.example.com grill.example.com} {
exec expect myExpectScript.tcl $host >#stdout 2>#stderr &
}
(The >#stdout 2>#stderr & means "run in the background with stdout and stderr connected to the usual overall destinations".)
Things get quite a bit more complicated if you want to automatically hand information back and forth between the expect sessions. I hope that simple is good enough…
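For completeness, a minimal sketch of what such a myExpectScript.tcl might look like; the prompts, credentials and per-host log file name are assumptions, not part of the original answer:
#!/usr/bin/expect
# myExpectScript.tcl - drive one host and log it to its own file
set host [lindex $argv 0]
log_file "$host.log"            ;# this session's output goes to its own file
spawn telnet $host
expect "login:"    { send "myuser\r" }
expect "Password:" { send "mypassword\r" }
expect "$ "
send "exit\r"
expect eof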
I have found something from the link
http://www.highwind.se/?p=116
LogScript.tcl
#!/usr/bin/tclsh8.5
package require Expect
proc log_by_trace {array element op} {
    uplevel {
        global logfile
        set file $logfile($expect_out(spawn_id))
        puts -nonewline $file $expect_out(buffer)
    }
}
array set spawns {}
array set logfile {}
# Spawn 1
spawn ./p1.sh
set spawns(one) $spawn_id
set logfile($spawn_id) [open "./log1" w]
# Spawn 2
spawn ./p2.sh
set spawns(two) $spawn_id
set logfile($spawn_id) [open "./log2" w]
trace add variable expect_out(buffer) write log_by_trace
proc flush_logs {} {
    global expect_out
    global spawns
    set timeout 1
    foreach {alias spawn_id} [array get spawns] {
        expect {
            -re ".+" {exp_continue -continue_timer}
            default { }
        }
    }
}
exit -onexit flush_logs
set timeout 5
expect {
    -i $spawns(one) "P1:2" {puts "Spawn1 got 2"; exp_continue}
    -i $spawns(two) "P2:2" {puts "spawn2 got 2"; exp_continue}
}
p1.sh
#!/bin/bash
i=0
while sleep 1; do
    echo P1:$i
    let i++
done
p2.sh
#!/bin/bash
i=0
while sleep 1; do
    echo P2:$i
    let i++
done
It is working perfectly :)

How do I check for spawn_id that's alive? (TCL)

I spawn a telnet process to a host. I send a command, expect something
in return. This goes on for a while. But somewhere in between this
interaction, the connection to the host is lost mysteriously and my
script dies while trying to "send" something to the spawned (now dead)
telnet process.
I'd like to write a procedure that takes the spawn id and the command
to be sent as arguments. I'd like to check if the spawn id exists
(i.e., the connection between the program and the host exists) before I
"send" the command. Otherwise, I'd like to exit.
Something like this:
proc Send {cmd sid} {
    if { $sid is not dead yet } { ;## don't know how to do this part
        send -i $sid "$cmd\r"
    } else {
        puts "channel id: $sid does not exist anymore. Exiting"
        exit
    }
}
Rather than checking if the spawned process is still alive, you could catch the error that send raises when sending to a dead process:
proc Send {cmd sid} {
    if {[catch {send -i $sid "$cmd\r"} err]} {
        puts "error sending to $sid: $err"
        exit
    }
}
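A usage sketch for the proc above (the host, command, and prompt are hypothetical; login handling is omitted):
spawn telnet somehost
# ... login omitted ...
Send "show version\r" $spawn_id   ;# exits the script if the channel is already gone
expect "#"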
I ran into this problem before and used the Mac/Linux ps command to do that:
if {[catch {exec ps $pid} std_out] == 0} {
    puts "Alive"
} else {
    puts "It's dead, Jim"
}
If you are using Windows, I heard that the tlist.exe command does something similar, but I don't have a Windows machine to test it out.
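For the ps check above, $pid would typically come from Expect itself; a sketch, assuming $sid holds the spawn id you want to test:
set pid [exp_pid -i $sid]          ;# process id behind the spawn id
if {[catch {exec ps -p $pid}]} {
    puts "It's dead, Jim"
} else {
    puts "Alive"
}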