How to copy from spawned expect process into file? - tcl

I have a Raspberry Pi image running via a qemu emulator, which I interact with via expect.
I'm trying to capture the output from a particular command within the emulator, and save it to a file on the host.
Being a beginner with Tcl, I read through the manual and had a go at this. The "test.out" file is created but contains only a newline, while "Hello world!" appears on the console.
spawn qemu-system-arm --serial mon:stdio ...
expect {
"login:" { send "pi\r" }
}
expect {
"Password:" { send "raspberry\r" }
}
expect "pi#raspberrypi"
set ftty [exp_open -leaveopen]
set fsignature [open "test.out" w]
send "echo 'Hello world!'\r"
puts $fsignature [gets $ftty]
expect "pi#raspberrypi"
send "sudo shutdown now\r"
wait

I'm not familiar with exp_open. I would normally recommend something like this to capture command output:
set prompt {pi@raspberrypi}
set cmd {echo 'hello world'}
send "$cmd\r"
expect -re "$cmd\r\n(.*)\r\n$prompt"
puts $fsignature $expect_out(1,string)
Extracting command output can be tricky, because the sent command is (typically) echoed back and therefore included in the expect output. This also assumes that your prompt appears at the start of a line.

This answer was very useful in finding a solution.
However, for long outputs you need to account for the buffer filling up.
set fd [open "test.out" w]
send "cat large_output\r"
expect {
-re {cat large_output[\r\n]+} { log_user 0; exp_continue }
-ex "\n" { puts -nonewline $fd $expect_out(buffer); exp_continue }
-re $prompt { log_user 1; close $fd }
}
If the line length can exceed the buffer size then something more complicated is needed.
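One mitigation (a sketch, assuming the default match buffer of roughly 2000 bytes is what overflows) is to enlarge Expect's match buffer for the spawned process before sending the command:
match_max 100000   ;# enlarge the match buffer for the current spawned process (default is about 2000 bytes)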
For some reason, the line endings are \r\r\n, but that can be fixed with sed.
sed -i 's/\r//g' test.out
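Alternatively, the carriage returns can be stripped inside the script before the buffer is written, for example by rewriting the "\n" clause above with string map (a sketch):
-ex "\n" { puts -nonewline $fd [string map [list "\r" ""] $expect_out(buffer)]; exp_continue }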

Related

need help in eliminating race condition in my code

My code runs forever without coming out of the loop.
I am calling the expect script from a shell script, and that part works fine;
the problem is that the script never comes out of the timeout {} loop.
Can someone help me in this regard?
spawn ssh ${USER}@${MACHINE}
set timeout 10
expect "Password: "
send -s "${PASS}\r"
expect $prompt
send "cmd\r"
expect $prompt
send "cmd1\r"
expect $prompt
send "cmd2\r"
expect $prompt
send "cmd3\r"
expect $prompt
send "cmdn\r"
#cmdn --> is about running script which takes around 4 hours
expect {
timeout { puts "Running ....." #<--- script is nout coming out of loop its running infinitely
exp_continue }
eof {puts "EOF occured"; exit 1}
"\$.*>" { puts "Finished.." ; exit 0}
}
The problem is that your real pattern, "\$.*>", is being treated as a glob pattern (expect's default), not as a regular expression. You need to pass the -re flag for that pattern to be matched as an RE, like this (I've used more lines rather than ; characters as I think it is clearer that way, but YMMV there):
expect {
timeout {
puts "Running ....."
exp_continue
}
eof {
puts "EOF occured"
exit 1
}
-re {\$.*>} {
puts "Finished.."
exit 0
}
}
It's also a really good idea to put regular expressions in {braces} if you can, so backslash sequences (and other Tcl metacharacters) inside don't get substituted. You don't have to… but 99.99% of all cases are better that way.
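For example, with the prompt pattern above, the two quoting styles hand different strings to the RE engine:
expect -re {\$.*>}   ;# braces: expect receives \$.*> and matches a literal dollar sign
expect -re "\$.*>"   ;# quotes: Tcl substitutes \$ to a bare $, which the RE engine treats as an anchor, so the prompt never matches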

handling multiple processes in tcl/expect

I am trying to deal with two processes, which have to run simultaneously in expect. However, I keep getting the message that one of those processes does not exist.
Here is a minimal (not) working example (I am not really working with ftp, but that's something that will run for other people):
#!/usr/bin/expect
set spawn_id_bash [spawn /bin/bash]
set spawn_id_ftp [spawn ftp ftp.ccc.de]
send "anonymous\n"
expect {
"*Password*" {
puts "\nftp works"
}
default {
puts "\nftp defaulted"
}
}
set spawn_id $spawn_id_bash
send "uname\n"
expect {
"*Linux*" {
puts "\nbash works"
}
default {
puts "\nbash defaulted"
}
}
Unfortunately, the output is:
[martin#martin linuxhome]$ /tmp/blub.tcl
spawn /bin/bash
spawn ftp ftp.ccc.de
anonymous
Trying 212.201.68.160...
Connected to ftp.ccc.de (212.201.68.160).
220-+-+-+-+-+-+-+-+-+
220-|o|b|s|o|l|e|t|e|
220-+-+-+-+-+-+-+-+-+
220-
220-
220-Please use HTTP instead:
220-
220-* http://cdn.media.ccc.de
220
Name (ftp.ccc.de:martin): 331 Please specify the password.
Password:ftp works
can not find channel named "4648"
while executing
"send "uname\n""
(file "/tmp/blub.tcl" line 19)
I have followed the book "Exploring Expect" while writing this example and I do not see what I am doing differently.
I also tried using send -i and expect -i without any luck (the error message is gone, but otherwise -i seems to be ignored).
spawn returns the unix process id (PID, an integer), not the spawn_id (a string). For example:
# cat foo.exp
send_user "[spawn -noecho sleep 1] $spawn_id\n"
expect eof
# expect foo.exp
20039 exp6
#
You should write it like this:
spawn /bin/bash
set spawn_id_bash $spawn_id
spawn ftp ftp.ccc.de
set spawn_id_ftp $spawn_id
Then you can use expect -i and send -i.
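For example, with the ids saved as above, the question's two sessions can then be driven explicitly (a sketch reusing the question's commands):
send -i $spawn_id_ftp "anonymous\r"
expect -i $spawn_id_ftp "Password:" { puts "\nftp works" }
send -i $spawn_id_bash "uname\r"
expect -i $spawn_id_bash "Linux" { puts "\nbash works" }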

TCL_Expect:: How to append/save "expect_out(buffer)" into a file continuously

My TCL/Expect script:
foreach data $test url $urls {
send "test app website 1\r"
expect "*#"
send "commit \r"
expect "*#"
after 2000
exec cmd.exe /c start iexplore.exe $url
after 3000
exec "C:/WINDOWS/System32/taskkill.exe" /IM IEXPLORE.EXE /F
send "show application stats $data\r"
expect "*#"
expect "?" {
puts [open urllist.txt w] $expect_out(buffer)
}
}
The above script is working fine except for storing the output of $expect_out(buffer) into a file.
I want to write the $expect_out(buffer) output to the "urllist.txt" file continuously.
Please suggest a way to achieve this.
Thanks in advance.
The following trace procedure writes the value of expect_out(buffer) to a file specific to the spawn id.
proc log_by_tracing {array element op} {
uplevel {
global logfile
set file $logfile($expect_out(spawn_id))
puts -nonewline $file $expect_out(buffer)
}
}
The association between the spawn id and each log file is made in the array logfile, which maps each spawn id to its log file channel. Such an association could be made with the following code when each process is spawned:
spawn <some_app_name_here>
set logfile($spawn_id) [open exp_buffer.log w]
The trace has to be added to the code as:
trace variable expect_out(buffer) w log_by_tracing
Internally, the expect command saves the spawn_id element of expect_out after the expect_out(X,string) elements but before the buffer element of the expect_out array. For this reason, the trace must be triggered by the buffer element rather than the spawn_id element.
Note: if you do not care about multiple spawned processes, or are using only one spawned process (or none at all), there is a simpler way of doing the same thing.
Consider the following example.
proc log_by_tracing {array element op} {
uplevel {
puts -nonewline $file $expect_out(buffer)
}
}
set file [ open myfile.log w ]
trace variable expect_out(buffer) w log_by_tracing
set timeout 60
expect {
quit { exit 1 }
timeout { exp_continue }
}
If you run this code, everything you type in the console until you type 'quit' will be recorded in the file named 'myfile.log'.
You can simply add the proc log_by_tracing and the trace statement to your code. Remember that with this simpler approach there is only a single log destination for expect_out(buffer).
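For instance, trimmed to the relevant commands from the question, that could look something like this sketch (the file is opened once before the loop, and the trace writes every expect match to it):
set file [open urllist.txt w]
trace variable expect_out(buffer) w log_by_tracing
foreach data $test url $urls {
    send "show application stats $data\r"
    expect "*#"
}
close $file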
Reference: trace, uplevel & Exploring Expect

Problems looping to prompt for another password

I need some help with an EXPECT script please....
I'm trying to automate a login, prior to accessing a load of hosts, and cater for when a user enters a password incorrectly. I am getting the username and password first, and then validating this against a particular host. If the password is invalid, I want to loop round and ask for the username and password again.
I am trying this :-
(preceding few irrelevant lines omitted)
while {1} {
send_user "login as:- "
expect -re "(.*)\n"
send_user "\n"
set user $expect_out(1,string)
stty -echo
send_user "password: "
expect -re "(.*)\n"
set password $expect_out(1,string)
stty echo
set host "some-box.here.there.co.uk"
set hostname "some-box"
set host_unknown 0
spawn ssh $user@$host
while {1} {
expect {
"Password:" {send $password\n
break}
"(yes/no)?" {send "yes\n"}
"Name or service not known" {set host_unknown 1
break}
}
}
if {$host_unknown < 1} {
expect {
"$hostname#" {send "exit\r"
break
}
"Password:" {send \003
expect eof
close $spawn_id
puts "Invalid Username or Password - try again..."
}
}
} elseif {$host_unknown > 0} {
exit 0}
}
puts "dropped out of loop"
And now I can go off and do lots of stuff to lots of boxes .....
This works fine when I enter a valid username or password, and my script goes off and does all the other stuff I want, but when I enter an invalid password I get this :-
Fred@Aserver:~$ ./Ex_Test.sh ALL
login as:- MyID
password: spawn ssh MyID@some-box.here.there.co.uk
Password:
Password:
Invalid Username or Password - try again...
login as:- cannot find channel named "exp6"
while executing "expect -re "(.*)\n""
invoked from within "if {[lindex $argv 1] != ""} {
puts "Too many arguments"
puts "Usage is:- Ex_Test.sh host|ALL"
} elseif {[lindex $argv 0] != ""} {
while {1} {
..."
(file "./Ex_Test.sh" line 3)
It's the line "cannot find channel named "exp6"" that is really bugging me.
What am I doing wrong? I am reading Exploring Expect (Don Libes) but getting nowhere....
Whenever expect waits for some pattern, it saves the spawn id it was reading from into expect_out(spawn_id).
As per your code, expect's spawn_id is generated when it encounters
expect -re "(.*)\n"
When the user types something and presses the Enter key, expect saves its spawn_id. If you run expect with debugging enabled, you might see the following in the debugging output:
expect does "" (spawn_id exp0) match regular expression "(.*)\n"
Let's say the user typed 'Simon'; then the debugging output will be:
expect: does "Simon\n" (spawn_id exp0) match regular expression "(.*)\n"? Gate "*\n"? gate=yes re=yes
expect: set expect_out(0,string) "Simon\n"
expect: set expect_out(1,string) "Simon"
expect: set expect_out(spawn_id) "exp0"
expect: set expect_out(buffer) "Simon\n"
As you can see, expect_out(spawn_id) holds the spawn id from which expect read its values. In this case, the handle exp0 points to standard input.
If the spawn command is used then, as you know, the Tcl variable spawn_id holds a reference to the process handle, which is known as the spawn handle. We can manipulate spawn_id by explicitly setting the process handle and saving it for future reference. This is the good part.
In your code, you close the ssh connection when a wrong password is given, with the following:
close $spawn_id
You are taking advantage of spawn_id to do this, but what you are missing is setting expect's process handle back to its original reference handle, i.e.
while {1} {
###Initial state. Nothing present in spawn_id variable ######
expect "something here"; #### Now exp0 will be created
###some code here ####
##Spawning a process now###
spawn ssh xyz ##At this moment, spawn_id updated
###doing some operations###
###closing ssh with some conditions###
close $spawn_id
##Loop is about to end and spawn_id still holds the reference to the ssh process
###If anything is present in it, expect will assume it is the current process
###so it will try to expect from that process
}
When the loop executes for the second time, expect will try to read from the spawn_id handle, which still refers to the now-closed ssh process, and that is why you are getting the error
can not find channel named "exp6"
Note that "exp6" is simply the spawn handle for the ssh process.
Update:
If some process handle is present in spawn_id, then expect will always read from that process only.
Perhaps you can try something like the following to avoid these problems.
#Some reference variable
set expect_init_spawn_id 0
while {1} {
if { $expect_init_spawn_id != 0 } {
#when the loop enters from 2nd iteration,
#spawn_id is explicitly set to initial 'exp0' handle
set spawn_id $expect_init_spawn_id
}
expect -re "(.*)\n"
#Saving the init spawn id of expect process
#And it will have the value as 'exp0'
set expect_init_spawn_id $expect_out(spawn_id)
spawn ssh xyz
##Manipulations here
#closing ssh now
close $spawn_id
}
This is my opinion and it may not be the most efficient approach. You can also come up with your own logic to handle these problems.
You simply need to store $spawn_id in a temporary variable before a nested expect command, then set $spawn_id back from that temporary variable after the nested expect command.
Also, get rid of the while {1} loops. They are not needed because expect behaves like a loop provided you use exp_continue whenever you don't wish to exit. You don't need expect eof nor do you need close $spawn_id. I don't use them in the following example:
#!/usr/bin/expect
set domain [lindex $argv 0];
set timeout 300
spawn ./certbot-add.sh $domain
expect {
"*replace the certificate*" {
send "2\r";
exp_continue;
}
"*_acme-challenge*" {
puts [open output.txt w] $expect_out(buffer)
spawn ./acme-add.sh $domain
set tmp_spawn_id $spawn_id
expect {
"$ "
}
set spawn_id $tmp_spawn_id
send "\r";
exp_continue;
}
"*certificate expires on*" {
puts "Certificate Added!"
}
}

Spawn multiple telnet with tcl and log the output separately

I'm trying to telnet to multiple servers with spawn and I want to log the output of each in a separate file.
If I use spawn with log_file then everything is logged into the same file, but I want each session logged to a different file. How can I do this?
Expect's logging support (i.e., what the log_file command controls) doesn't let you set different logging destinations for different spawn IDs. This means that the simplest mechanism for doing what you want is to run each of the expect sessions in a separate process, which shouldn't be too hard provided you don't use the interact command. (The idea of needing to interact with multiple remote sessions at once is a bit strange! By the time you've made it sensible by grafting in something like the screen program, you might as well be using separate expect scripts anyway.)
In the simplest case, your outer script can be just:
foreach host {foo.example.com bar.example.com grill.example.com} {
exec expect myExpectScript.tcl $host >#stdout 2>#stderr &
}
(The >#stdout 2>#stderr & does “run in the background with stdout and stderr connected to the usual overall destinations”.)
Things get quite a bit more complicated if you want to automatically hand information back and forth between the expect sessions. I hope that simple is good enough…
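For reference, a minimal sketch of what myExpectScript.tcl could look like; the login and prompt patterns, the placeholder credentials and the per-host log file name are assumptions, not part of the original answer:
#!/usr/bin/expect
set host [lindex $argv 0]
log_file $host.log                          ;# everything this session prints also goes into its own log file
spawn telnet $host
expect "login:"    { send "myuser\r" }      ;# placeholder credentials
expect "Password:" { send "mypassword\r" }
expect "$ "
send "exit\r"
expect eof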
I have found something from the link
http://www.highwind.se/?p=116
LogScript.tcl
#!/usr/bin/tclsh8.5
package require Expect
proc log_by_trace {array element op} {
uplevel {
global logfile
set file $logfile($expect_out(spawn_id))
puts -nonewline $file $expect_out(buffer)
}
}
array set spawns {}
array set logfile {}
# Spawn 1
spawn ./p1.sh
set spawns(one) $spawn_id
set logfile($spawn_id) [open "./log1" w]
# Spawn 2
spawn ./p2.sh
set spawns(two) $spawn_id
set logfile($spawn_id) [open "./log2" w]
trace add variable expect_out(buffer) write log_by_trace
proc flush_logs {} {
global expect_out
global spawns
set timeout 1
foreach {alias spawn_id} [array get spawns] {
expect {
-re ".+" {exp_continue -continue_timer}
default { }
}
}
}
exit -onexit flush_logs
set timeout 5
expect {
-i $spawns(one) "P1:2" {puts "Spawn1 got 2"; exp_continue}
-i $spawns(two) "P2:2" {puts "spawn2 got 2"; exp_continue}
}
p1.sh
#!/bin/bash
i=0
while sleep 1; do
echo P1:$i
let i++
done
p2.sh
#!/bin/bash
i=0
while sleep 1; do
echo P2:$i
let i++
done
It is working perfectly :)