Why does this for loop not break out following timeout? - tcl

If the sent ssh command times out, I need it to move on to the next address in the list.
It gets to where I send the password ("Stuff"), and I need it to break out of that if it
doesn't get in. It just hangs. Why?
foreach address $perAddress {
    set timeout 10
    send "ssh $address user someone\r"
    expect "word:"
    send "Stuff\r"
    expect {
        "prompt*#" {continue}
        timeout {break}
    }
    set perLine [split $fileData "\n"]
    set timeout 600
    foreach line $perLine {
        send "$line\r"
        expect "# "
    }
    send "exit\r"
    expect "> "
}

The expect command swallows break and continue conditions (as it thinks of itself internally as a loop). This means that you'd need to do:
set timedOut 0
expect {
    "prompt*#" {
        # Do nothing here
    }
    timeout {
        set timedOut 1
    }
}
if {$timedOut} break
However, it is probably easier to just refactor that code so that the whole interaction with a particular address is in a procedure, and then use return:
proc talkToHost {address} {
    global fileData
    set timeout 10
    send "ssh $address user someone\r"
    expect "word:"
    send "Stuff\r"
    expect {
        "prompt*#" {
            # Got in; fall through to the commands below
        }
        timeout {return}
    }
    set perLine [split $fileData "\n"]
    set timeout 600
    foreach line $perLine {
        send "$line\r"
        expect "# "
    }
    send "exit\r"
    expect "> "
}
foreach address $perAddress {
    talkToHost $address
}
I find it much easier to then focus on making things work correctly for one host independently of making them work across a whole load of them. (For example, you don't clean up the connection before going on to the next host when there's a timeout; this leaks a virtual terminal until the overall script exits.)
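For instance, the timeout branch in talkToHost could tidy up first. This is only a sketch; it assumes, as the script above does, that ssh is being typed into an already-spawned local shell whose prompt ends in "> ":

```tcl
expect {
    "prompt*#" {}
    timeout {
        # The ssh attempt is stuck at the password prompt: interrupt it
        # so the spawned local shell comes back to its prompt, instead of
        # leaking the half-open connection until the whole script exits.
        send -- "\003"
        expect "> "
        return
    }
}
```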

Need help in eliminating a race condition in my code

My code is running infinitely without coming out of the loop.
I am calling the expect script from a shell script; that part is working fine.
The problem is that the script is not coming out of the timeout {} loop.
Can someone help me in this regard?
spawn ssh ${USER}@${MACHINE}
set timeout 10
expect "Password: "
send -s "${PASS}\r"
expect $prompt
send "cmd\r"
expect $prompt
send "cmd1\r"
expect $prompt
send "cmd2\r"
expect $prompt
send "cmd3\r"
expect $prompt
send "cmdn\r"
# cmdn --> runs a script which takes around 4 hours
expect {
    timeout {
        puts "Running ....."   ;# <--- script is not coming out of this loop; it runs infinitely
        exp_continue
    }
    eof {puts "EOF occured"; exit 1}
    "\$.*>" { puts "Finished.." ; exit 0}
}
The problem is that your real pattern, "\$.*>", is being matched literally and not as a regular expression. You need to pass the -re flag for that pattern to be matched as a RE, like this (I've used more lines than ; chars as I think it is clearer that way, but YMMV there):
expect {
    timeout {
        puts "Running ....."
        exp_continue
    }
    eof {
        puts "EOF occured"
        exit 1
    }
    -re {\$.*>} {
        puts "Finished.."
        exit 0
    }
}
It's also a really good idea to put regular expressions in {braces} if you can, so backslash sequences (and other Tcl metacharacters) inside don't get substituted. You don't have to… but 99.99% of all cases are better that way.
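The difference is easy to see with a plain regexp call (a small illustration, not from the original question): in double quotes, Tcl rewrites \$ to $ before the RE engine ever sees it, turning the intended literal dollar sign into an end-of-input anchor.

```tcl
set line {$ prompt>}
# Braced: the RE engine sees \$ and matches a literal dollar sign
puts [regexp {\$.*>} $line]     ;# prints 1
# Quoted: Tcl substitutes \$ -> $, so the RE engine sees the anchor pattern $.*>
puts [regexp "\$.*>" $line]     ;# prints 0
```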

How to send more than 100 cmd lines

I have an expect (Tcl) script for an automated task that works properly: configuring network devices via telnet/ssh. In most cases there are 1, 2 or 3 command lines to execute, BUT now I have more than 100 command lines to send via expect. How can I achieve this in a smart, clean scripting way? :)
I could join all 100+ command lines into a variable "commandAll" with "\n" and "send" them one after another, but I think that's pretty ugly :) Is there a way, without stacking them together, to keep them readable in the code or in an external file?
#!/usr/bin/expect -f
set timeout 20
set ip_address "[lrange $argv 0 0]"
set hostname "[lrange $argv 1 1]"
set is_ok ""
# Commands
set command1 "configure snmp info 1"
set command2 "configure ntp info 2"
set command3 "configure cdp info 3"
#... more then 100 dif commands like this !
#... more then 100 dif commands like this !
#... more then 100 dif commands like this !
spawn telnet $ip_address
# login & Password & Get enable prompt
#-- snnipped--#
# Commands execution
# command1
expect "$enableprompt" { send "$command1\r# endCmd1\r" ; set is_ok "command1" }
if {$is_ok != "command1"} {
    send_user "\n### 9 Exit before executing command1\n" ; exit
}
# command2
expect "#endCmd1" { send "$command2\r# endCmd2\r" ; set is_ok "command2" }
if {$is_ok != "command2"} {
    send_user "\n### 9 Exit before executing command2\n" ; exit
}
# command3
expect "#endCmd2" { send "$command3\r\r\r# endCmd3\r" ; set is_ok "command3" }
if {$is_ok != "command3"} {
    send_user "\n### 9 Exit before executing command3\n" ; exit
}
P.S. I'm using this approach to check whether a given command line executed successfully, but I'm not certain it's the perfect way :D
Don't use numbered variables; use a list:
set commands {
    "configure snmp info 1"
    "configure ntp info 2"
    "configure cdp info 3"
    ...
}
If the commands are already in a file, you can read them into a list:
set fh [open commands.file]
set commands [split [read $fh] \n]
close $fh
Then, iterate over them:
expect $prompt
set n 0
foreach cmd $commands {
    send "$cmd\r"
    expect {
        "some error string" {
            send_user "command failed: ($n) $cmd"
            exit 1
        }
        timeout {
            send_user "command timed out: ($n) $cmd"
            exit 1
        }
        $prompt
    }
    incr n
}
While yes, you can send long sequences of commands that way, it's usually a bad idea as it makes the overall script very brittle; if anything unexpected happens, the script just keeps blindly forcing the rest of the commands over. Instead, it is better to have a sequence of sends interspersed with expects to check that what you've sent has been accepted. The only real case for sending a very long string over is when you're creating a function or file on the other side that will act as a subprogram that you call; in that case, there's no really meaningful place to stop and check for a prompt half way. But that's the exception.
Note that you can expect two things at once; that's often very helpful as it lets you check for errors directly. I mention this because it is a technique often neglected, yet it allows you to make your script far more robust.
...
send "please run step 41\r"
expect {
    -re {error: (.*)} {
        puts stderr "a problem happened: $expect_out(1,string)"
        exit 1
    }
    "# " {
        # Got the prompt; continue with the next step below
    }
}
send "please run step 42\r"
...

"invalid command name" in expect script

I have the following code for listening at a serial port:
set timeout -1
log_user 0
set port [lindex $argv 0]
spawn /usr/bin/cu -l $port

proc receive { str } {
    set timeout 5
    expect
    {
        timeout { send_user "\nDone\n"; }
    }
    set timeout -1
}

expect {
    "XXXXXX\r" { receive $expect_out(0,string); exp_continue; }
}
Why does this give an
invalid command name "
error after the 5 second timeout elapses in the procedure? Are the nested expects OK?
The problem is this:
expect
{
    timeout { send_user "\nDone\n"; }
}
Newlines matter in Tcl scripts! When you use expect on its own, it just waits for the timeout (and processes any background expecting you've set up; none in this case). The next line, with what you're waiting for, is interpreted as a command all of its own with a very strange name (including newlines, spaces, etc.) which is not at all what you want.
What you actually want to do is this:
expect {
    timeout { send_user "\nDone\n"; }
}
By putting the brace on the same line as the expect, you'll get the behaviour that you (presumably) anticipate.

Spawn multiple telnet with tcl and log the output separately

I'm trying to telnet to multiple servers with spawn, and I want to log the output of each in a separate file.
If I use spawn with log_file then everything is logged into the same file, but I want each session in a different file. How do I do this?
Expect's logging support (i.e., what the log_file command controls) doesn't let you set different logging destinations for different spawn IDs. This means that the simplest mechanism for doing what you want is to run each of the expect sessions in a separate process, which shouldn't be too hard provided you don't use the interact command. (The idea of needing to interact with multiple remote sessions at once is a bit strange! By the time you've made it sensible by grafting in something like the screen program, you might as well be using separate expect scripts anyway.)
In the simplest case, your outer script can be just:
foreach host {foo.example.com bar.example.com grill.example.com} {
    exec expect myExpectScript.tcl $host >@stdout 2>@stderr &
}
(The >@stdout 2>@stderr & part means “run in the background with stdout and stderr connected to the usual overall destinations”.)
Things get quite a bit more complicated if you want to automatically hand information back and forth between the expect sessions. I hope that simple is good enough…
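A minimal sketch of that more complicated case (the session commands here are entirely hypothetical; it assumes two live sessions whose spawn ids have been saved in $a and $b) watches one session and relays what it finds into the other:

```tcl
# Hypothetical relay between two sessions already started with spawn;
# $a and $b hold their spawn ids.
set timeout 30
expect {
    -i $a -re {token: (\S+)} {
        # Session A announced a token; hand it over to session B
        send -i $b "use-token $expect_out(1,string)\r"
        exp_continue
    }
    -i $b "done" {
        # Session B has consumed the handed-over information
    }
}
```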
I found something at this link:
http://www.highwind.se/?p=116
LogScript.tcl
#!/usr/bin/tclsh8.5
package require Expect
proc log_by_trace {array element op} {
    uplevel {
        global logfile
        set file $logfile($expect_out(spawn_id))
        puts -nonewline $file $expect_out(buffer)
    }
}
array set spawns {}
array set logfile {}
# Spawn 1
spawn ./p1.sh
set spawns(one) $spawn_id
set logfile($spawn_id) [open "./log1" w]
# Spawn 2
spawn ./p2.sh
set spawns(two) $spawn_id
set logfile($spawn_id) [open "./log2" w]
trace add variable expect_out(buffer) write log_by_trace
proc flush_logs {} {
    global expect_out
    global spawns
    set timeout 1
    foreach {alias spawn_id} [array get spawns] {
        expect {
            -re ".+" {exp_continue -continue_timer}
            default { }
        }
    }
}
exit -onexit flush_logs
set timeout 5
expect {
    -i $spawns(one) "P1:2" {puts "Spawn1 got 2"; exp_continue}
    -i $spawns(two) "P2:2" {puts "spawn2 got 2"; exp_continue}
}
p1.sh
#!/bin/bash
i=0
while sleep 1; do
    echo P1:$i
    let i++
done
p2.sh
#!/bin/bash
i=0
while sleep 1; do
    echo P2:$i
    let i++
done
It is working perfectly :)

How to search for multiple patterns stored in a list until all items are found or a set amount of time has passed

I'm making a simple expect script that will monitor the output of tcpdump for a list of multicast addresses. I want to know if packets are received or not from each multicast address in the list before expect times out.
I have a working solution, but it is inefficient and I believe I'm not utilizing the full power of expect and tcl. Anyway here is my current script:
set multicast_list {225.0.0.1 225.0.0.2 225.0.0.3}
send "tcpdump -i ixp1\r"
# If tcpdump does not start, unzip it and run it again
expect {
    "tcpdump: listening on ixp1" {}
    "sh: tcpdump: not found" {
        send "gunzip /usr/sbin/tcpdump.gz\r"
        expect "# "
        send "tcpdump -i ixp1\r"
        exp_continue
    }
}
# Set timeout to the number of seconds expect will check for ip addresses
set timeout 30
set found [list]
set not_found [list]
foreach ip $multicast_list {
    expect {
        "> $ip" { lappend found "$ip" }
        timeout { lappend not_found "$ip" }
    }
}
set timeout 5
# Send ^c to stop tcpdump
send -- "\003"
expect "# "
So as you can see the script will look for each ip address one at a time and if the ip is seen it will add it to the list of found addresses. If expect times out it will add the address to the not_found list and search for the next address.
Now back to my question: Is there a way in which I can monitor tcpdump for all IP addresses simultaneously over a given amount of time? If an address is found, I want to add it to the list of found addresses and ideally stop expecting it (this may not be possible, I'm not sure). The key is I need the script to monitor for all IPs in the list in parallel. I can't hard-code each address because they will be different each time, and the number of addresses I'm looking for will also vary. I could really use some help from an expect guru lol.
Thank You!
That's an interesting problem. The easiest way is probably to do runtime generation of the core of the expect script. Fortunately, Tcl's very good at that sort of thing. (Note: I'm assuming that IP addresses are all IPv4 addresses and consist of just numbers and periods; if it was a general string being inserted, I'd have to be a little more careful.)
set timeout 30
set found [list]
set not_found [list]
# Generate the timeout clause as a normal literal
set expbody {
    timeout {
        set not_found [array names waiting]
        unset waiting
    }
}
foreach ip $multicast_list {
    set waiting($ip) "dummy"
    # Generate the per-ip clause as a multi-line string; beware a few backslashes
    append expbody "\"> $ip\" {
        lappend found $ip
        unset waiting($ip)
        if {\[array size waiting\]} exp_continue
    }\n"
}
# Feed into expect; it's none-the-wiser that it was runtime-generated
expect $expbody
set timeout 5
# Send ^c to stop tcpdump
send -- "\003"
expect "# "
You might want to puts $expbody the first few times, just so you can be sure that it is doing the right thing.
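For example, with a two-address list the generated body comes out looking roughly like this (a reconstruction; the exact whitespace differs):

```tcl
    timeout {
        set not_found [array names waiting]
        unset waiting
    }
"> 225.0.0.1" {
    lappend found 225.0.0.1
    unset waiting(225.0.0.1)
    if {[array size waiting]} exp_continue
}
"> 225.0.0.2" {
    lappend found 225.0.0.2
    unset waiting(225.0.0.2)
    if {[array size waiting]} exp_continue
}
```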
Here is my finished script. It uses the same code from Donal's solution, but I added a few checks to fix some issues that weren't accounted for.
set multicast_list {225.0.0.1 225.0.0.2 225.0.0.3}
set tcpdump_timeout 10
spawn /bin/bash
expect "] "
# Create the runtime-generated expbody to use later
# Generate the timeout clause as a normal literal
set expbody {
    timeout {
        set not_found [array names waiting]
        unset waiting
    }
}
foreach ip $multicast_list {
    set waiting($ip) "dummy"
    # Generate the per-ip clause as a multi-line string; beware a few backslashes
    append expbody "\"> $ip\" {
        set currentTime \[clock seconds\]
        if { \$currentTime < \$endTime } {
            if { \[info exists waiting($ip)\] } {
                lappend found $ip
                unset waiting($ip)
            }
            if {\[array size waiting\]} exp_continue
        }
    }\n"
}
# Set expect timeout and create empty lists for tcpdump results
set timeout $tcpdump_timeout
set found [list]
set not_found [list]
# Start tcpdump
send "tcpdump -i ixp1\r"
expect "tcpdump: listening on ixp1"
# Get the time to stop tcpdump
set endTime [expr {[clock seconds] + $tcpdump_timeout}]
# Feed expbody into expect; it's none-the-wiser that it was runtime-generated
expect $expbody
# Only recompute if the timeout clause didn't already handle it;
# otherwise we'd clobber not_found and the unset would error
if {[info exists waiting]} {
    set not_found [array names waiting]
    unset waiting
}
# Send ^c to stop tcpdump
send -- "\003"
expect "# "