How to send more than 100 cmd lines - tcl

I have an expect (Tcl) script for an automated task that works properly - configuring network devices via telnet/ssh. In most cases there are 1, 2 or 3 command lines to execute, but now I have more than 100 command lines to send via expect. How can I achieve this in a smart, clean scripting way? :)
I could join all 100+ command lines into one variable "commandAll" with "\n" and "send" them one after another, but I think that's pretty ugly :) Is there a way to keep them readable in the code, or in an external file, without stacking them all together?
#!/usr/bin/expect -f
set timeout 20
set ip_address "[lrange $argv 0 0]"
set hostname "[lrange $argv 1 1]"
set is_ok ""
# Commands
set command1 "configure snmp info 1"
set command2 "configure ntp info 2"
set command3 "configure cdp info 3"
#... more than 100 different commands like this!
spawn telnet $ip_address
# login & Password & Get enable prompt
#-- snipped --#
# Commands execution
# command1
expect "$enableprompt" { send "$command1\r# endCmd1\r" ; set is_ok "command1" }
if {$is_ok != "command1"} {
send_user "\n### 9 Exit before executing command1\n" ; exit
}
# command2
expect "#endCmd1" { send "$command2\r# endCmd2\r" ; set is_ok "command2" }
if {$is_ok != "command2"} {
send_user "\n### 9 Exit before executing command2\n" ; exit
}
# command3
expect "#endCmd2" { send "$command3\r\r\r# endCmd3\r" ; set is_ok "command3" }
if {$is_ok != "command3"} {
send_user "\n### 9 Exit before executing command3\n" ; exit
}
P.S. I'm using this approach for checking whether a given command line executed successfully, but I'm not certain it's the perfect way :D

Don't use numbered variables; use a list:
set commands {
    "configure snmp info 1"
    "configure ntp info 2"
    "configure cdp info 3"
    ...
}
If the commands are already in a file, you can read them into a list:
set fh [open commands.file]
set commands [split [read $fh] \n]
close $fh
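If the file might also contain blank lines or comments, you can filter them while reading. A small sketch (treating # as the comment leader is an assumption):
set fh [open commands.file]
set commands {}
foreach line [split [read $fh] \n] {
    set line [string trim $line]
    # skip empty lines and (assumed) #-comment lines
    if {$line ne "" && ![string match "#*" $line]} {
        lappend commands $line
    }
}
close $fh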
Then, iterate over them:
expect $prompt
set n 0
foreach cmd $commands {
    send "$cmd\r"
    expect {
        "some error string" {
            send_user "command failed: ($n) $cmd"
            exit 1
        }
        timeout {
            send_user "command timed out: ($n) $cmd"
            exit 1
        }
        $prompt
    }
    incr n
}

While yes, you can send long sequences of commands that way, it's usually a bad idea: it makes the overall script very brittle, because if anything unexpected happens the script just keeps blindly pushing the rest of the commands across. Instead, it is better to have a sequence of sends interspersed with expects, checking that what you've sent has been accepted. The only real case for sending a very long string over is when you're creating a function or file on the other side that will act as a subprogram you call later; in that case there's no meaningful place to stop and check for a prompt half way through. But that's the exception.
Note that you can expect two things at once; that's often very helpful, as it lets you check for errors directly. I mention this because it is a technique that is often neglected, yet it can make your script far more robust.
...
send "please run step 41\r"
expect {
-re {error: (.*)} {
puts stderr "a problem happened: $expect_out(1,string)"
exit 1
}
"# " {
# Got the prompt; continue with the next step below
}
}
send "please run step 42\n"
...

Related

need help in eliminating race condition in my code

My code is running infinitely without coming out of the loop.
I am calling the expect script from a shell script, and that part works fine;
the problem is that the script never comes out of the timeout {} loop.
Can someone help me in this regard?
spawn ssh ${USER}@${MACHINE}
set timeout 10
expect "Password: "
send -s "${PASS}\r"
expect $prompt
send "cmd\r"
expect $prompt
send "cmd1\r"
expect $prompt
send "cmd2\r"
expect $prompt
send "cmd3\r"
expect $prompt
send "cmdn\r"
#cmdn --> is about running script which takes around 4 hours
expect {
    timeout {
        puts "Running ....."
        # <--- script is not coming out of this loop; it runs infinitely
        exp_continue
    }
    eof {
        puts "EOF occurred"
        exit 1
    }
    "\$.*>" {
        puts "Finished.."
        exit 0
    }
}
The problem is that your real pattern, "\$.*>", is being matched as a simple glob pattern (effectively literally) and not as a regular expression. You need to pass the -re flag for that pattern to be matched as an RE, like this (I've used more lines than ; characters as I think it is clearer that way, but YMMV there):
expect {
    timeout {
        puts "Running ....."
        exp_continue
    }
    eof {
        puts "EOF occurred"
        exit 1
    }
    -re {\$.*>} {
        puts "Finished.."
        exit 0
    }
}
It's also a really good idea to put regular expressions in {braces} if you can, so backslash sequences (and other Tcl metacharacters) inside don't get substituted. You don't have to… but 99.99% of all cases are better that way.
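For a quick illustration of the difference, compare what pattern actually reaches the RE engine in each case:
# With double quotes, the backslash is consumed by substitution first:
expect -re "\$.*>"   ;# the RE engine sees: $.*>   (the $ acts as an anchor)
# With braces, the backslash survives, so the RE matches a literal dollar sign:
expect -re {\$.*>}   ;# the RE engine sees: \$.*>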

Tcl [exec] process leaves zombie if the process forks and exits

I have a case where a Tcl script runs a process which does fork(), leaves the forked process running, and then exits. You can try it out simply by running any program that forks to the background, for example gvim, provided it is configured to run in the background after execution: set res [exec gvim].
The main process should theoretically exit immediately, with the child process running in the background, but somehow the main process hangs around, doesn't exit, and stays in zombie state (reported as <defunct> in ps output).
In my case the process I'm starting prints something; I want that output, and once the process exits I want to consider it done. The problem is that if I spawn the process using open "|gvim" r, then I also cannot recognize the moment when the process has finished. The fd returned by [open] never reports [eof], even when the program turns into a zombie. When I try to [read], just to collect everything the process might print, it hangs completely.
What is more interesting is that occasionally both the main process and the forked process print something, and when I try to read it using [gets], I get both. If I close the descriptor too early, then [close] throws an exception due to a broken pipe. Probably that's why [read] never ends.
I need some method to recognize the moment when the main process exits. This process may have spawned a child, but the child may be completely detached and I'm not interested in what it does. I want whatever the main process prints before exiting, and the script should then continue its work while the background process keeps running; I'm not interested in what happens to it.
I have control over the sources of the process I'm starting. Yes, I did signal(SIGCLD, SIG_IGN) before fork() - it didn't help.
Tcl cleans up zombies from background process calls the next time it calls exec. Since a zombie really doesn't use much in the way of resources (just an entry in the process table; there's nothing else there really), there isn't any particular hurry to clean them up.
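If you do want to force the cleanup at a specific moment, any cheap [exec] will do; a no-op such as /bin/true is the usual choice on POSIX systems:
catch {exec /bin/true}   ;# any [exec] reaps pending zombies as a side effect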
The problem you were having with the pipeline was that you'd not put it in non-blocking mode. To detect exit of a pipeline, you're best off using a fileevent which will fire when either there's a byte (or more) to read from the pipe or when the other end of the pipe is closed. To distinguish these cases, you have to actually try to read, and that can block if you over-read and you're not in non-blocking mode. However, Tcl makes working with non-blocking mode easy.
set pipeline [open |… "r"]
fileevent $pipeline readable [list handlePipeReadable $pipeline]
fconfigure $pipeline -blocking false

proc handlePipeReadable {pipe} {
    if {[gets $pipe line] >= 0} {
        # Managed to actually read a line; stored in $line now
    } elseif {[eof $pipe]} {
        # Pipeline was closed; get exit code, etc.
        if {[catch {close $pipe} msg opt]} {
            set exitinfo [dict get $opt -errorcode]
        } else {
            # Successful termination
            set exitinfo ""
        }
        # Stop the waiting in [vwait], below
        set ::donepipe $pipeline
    } else {
        # Partial read; things will be properly buffered up for now...
    }
}

vwait ::donepipe
Be aware that using gvim in a pipeline is… rather more complex than usual, as it is an application that users interact with.
You might find it easier to run a simple exec in a separate thread, provided your version of Tcl is thread-enabled and the Thread package is installed. (That ought to be the case if you're using 8.6, but I don't know if that's true.)
package require Thread

set runner [thread::create {
    proc run {caller targetVariable args} {
        set res [catch {
            exec {*}$args
        } msg opt]
        set callback [list set $targetVariable [list $res $msg $opt]]
        thread::send -async $caller $callback
    }
    thread::wait
}]

proc runInBackground {completionVariable args} {
    global runner
    thread::send -async $runner [list run [thread::id] $completionVariable {*}$args]
}

runInBackground resultsVar gvim …
# You can do other things at this point

# Wait until the variable is set (by the callback); alternatively, use a variable trace
vwait resultsVar

# Process the results to extract the sense
lassign $resultsVar res msg opt
puts "code: $res"
puts "output: $msg"
puts "status dictionary: $opt"
For all that, for an editor like gvim I'd actually expect it to be run in the foreground (which doesn't require anything like as much complexity) since only one of them can really interact with a particular terminal at once.
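In case it helps, the foreground case is just a plain [exec] wired to the current terminal. A minimal sketch (the file name is a placeholder):
# Run the editor in the foreground, attached to this terminal;
# -f keeps gvim from forking into the background, so [exec] blocks
# until the user quits the editor.
exec gvim -f somefile.txt <@stdin >@stdout 2>@stderr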
Your daemon can also call setsid() and setpgrp() to start a new session and to detach from the process group. But these don't help with your problem either.
You will have to do some process management:
#!/usr/bin/tclsh

proc waitpid {pid} {
    # "kill -0" sends no signal; it merely tests whether the process still exists
    set rc [catch {exec -- kill -0 $pid}]
    while {$rc == 0} {
        # Poll every 100 ms via the event loop rather than blocking
        set ::waitflag 0
        after 100 [list set ::waitflag 1]
        vwait ::waitflag
        set rc [catch {exec -- kill -0 $pid}]
    }
}

set pid [exec ./t1 &]
waitpid $pid
puts "exit tcl"
exit
Edit: Another unreasonable answer
If the forked child process closes the open channels, Tcl will not wait on it.
Test program:
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <signal.h>

int
main (int argc, char *argv [])
{
    int pid;
    FILE *o;

    signal (SIGCHLD, SIG_IGN);
    pid = fork ();
    if (pid == 0) {
        /* should also call setsid() and setpgrp() to daemonize */
        printf ("child\n");
        fclose (stdout);
        fclose (stderr);
        sleep (10);
        o = fopen ("/dev/tty", "w");
        fprintf (o, "child exit\n");
        fclose (o);
    } else {
        printf ("parent\n");
        sleep (2);
    }
    printf ("t1 exit %d\n", pid);
    return 0;
}
Test Tcl program:
#!/usr/bin/tclsh
puts [exec ./t1]
puts "exit tcl"
At first you say:
I need some method to recognize the moment when the main process exits, while this process could have spawned another child process, but this child process may be completely detached and I'm not interested what it does.
later on you say:
If the forked child process closes the open channels, Tcl will not wait on it.
These are two contradictory statements: on the one hand you are only interested in the parent process, and on the other you care whether or not the child has finished, even though you also state you aren't interested in child processes that have detached. Last I heard, forking and closing the child's copies of the parent's stdin, stdout and stderr is detaching (i.e. daemonizing the child process). I wrote this quick program to run the simple C program included above, and as expected Tcl knows nothing of the child process. I called the compiled version of the program /tmp/compile/chuck. I did not have gvim, so I used emacs; but as emacs does not generate text, I wrap the exec in its own Tcl script and exec that. In both cases the parent process is waited for and eof is detected. When the parent exits, Runner::getData runs and the cleanup is evaluated.
#!/bin/sh
# The next line restarts using tclsh \
exec /opt/usr8.6.3/bin/tclsh8.6 "$0" ${1+"$@"}

namespace eval Runner {
    variable close
    variable watch
    variable lastpid ""
    array set close {}
    array set watch {}

    proc run {program {message ""}} {
        variable watch
        variable close
        variable lastpid
        if {$message ne ""} {
            set fname "/tmp/[lindex $program 0]-[pid].tcl"
            set out [open $fname "w"]
            puts $out "#![info nameofexecutable]"
            puts $out "catch { exec $program } err"
            puts $out "puts \"\$err\n$message\""
            close $out
            file attributes $fname -permissions 00777
            set fd [open "|$fname" "r"]
            set close([pid $fd]) "file delete -force $fname"
        } else {
            set fd [open "|$program" "r"]
            set close([pid $fd]) "puts \"cleanup\""
        }
        fconfigure $fd -blocking 0 -buffering none
        fileevent $fd readable [list Runner::getData [pid $fd] $fd]
    }

    proc getData {pid chan} {
        variable watch
        variable close
        variable lastpid
        set data [read $chan]
        append watch($pid) "$data"
        if {[eof $chan]} {
            catch {close $chan}
            eval $close($pid) ;# cleanup
            set lastpid $pid
        }
    }
}

Runner::run /tmp/compile/chuck ""
Runner::run emacs " Emacs complete"

while {1} {
    vwait Runner::lastpid
    set p $Runner::lastpid
    catch {exec ps -ef | grep chuck} output
    puts "program with pid $p just ended"
    puts "$Runner::watch($p)"
    puts " processes that match chuck "
    puts "$output"
}
Output (note that I exited emacs after the child reported that it was exiting):
[user1@linuxrocks workspace]$ ./test.tcl
cleanup
program with pid 27667 just ended
child
parent
t1 exit 27670
 processes that match chuck
avahi      936     1  0  2016 ?     00:04:35 avahi-daemon: running [linuxrocks.local]
admin    27992     1  0 19:37 pts/0 00:00:00 /tmp/compile/chuck
admin    28006 27988  0 19:37 pts/0 00:00:00 grep chuck
child exit
program with pid 27669 just ended
 Emacs complete
Ok, I found the solution after a long discussion here:
https://groups.google.com/forum/#!topic/comp.lang.tcl/rtaTOC95NJ0
The below script demonstrates how this problem can be solved:
#!/usr/bin/tclsh
lassign [chan pipe] input output
chan configure $input -blocking no -buffering line ;# just in case :)
puts "Running $argv..."
set ret [exec {*}$argv 2>@stderr >@$output]
puts "Waiting for finished process..."
set line [gets $input]
puts "FIRST LINE: $line"
puts "DONE. PROCESSES:"
puts [exec ps -ef | grep [lindex $argv 0]]
puts "EXITING."
The only problem that remains is that there's still no way to know that the process has exited; in this particular case the next [exec] (probably the [exec ps ...] command) cleaned up the zombie. (There's no universal method for that - the best you can do on POSIX systems is [exec /bin/true].) In my case it was enough to get the one line that the parent process had to print, after which I can simply let it go.
Still, it would be nice if [exec] could somehow return the PID of the first process, and if there were a standard [wait] command that could block until a process exits or check its running state (such a command is currently available in TclX).
Note that [chan pipe] is available only in Tcl 8.6, you can use [pipe] from TclX alternatively.
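For reference, here is a small sketch of the TclX route (assuming the TclX package is installed, and using the ./t1 test program from above):
package require Tclx

set pid [exec ./t1 &]               ;# a background exec returns the child's PID
lassign [wait $pid] wpid how code   ;# TclX [wait] blocks until the process exits and reaps it
puts "process $wpid ended: $how $code"
TclX's wait returns a three-element list: the process id, one of EXIT/SIG/STOP, and the exit status or signal.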

Problems looping to prompt for another password

I need some help with an Expect script, please...
I'm trying to automate a login prior to accessing a load of hosts, and to cater for when a user enters a password incorrectly. I get the username and password first, then validate them against a particular host. If the password is invalid, I want to loop round and ask for the username and password again.
I am trying this :-
(preceding few irrelevant lines omitted)
while {1} {
    send_user "login as:- "
    expect -re "(.*)\n"
    send_user "\n"
    set user $expect_out(1,string)
    stty -echo
    send_user "password: "
    expect -re "(.*)\n"
    set password $expect_out(1,string)
    stty echo
    set host "some-box.here.there.co.uk"
    set hostname "some-box"
    set host_unknown 0
    spawn ssh $user@$host
    while {1} {
        expect {
            "Password:" {
                send $password\n
                break
            }
            "(yes/no)?" {
                send "yes\n"
            }
            "Name or service not known" {
                set host_unknown 1
                break
            }
        }
    }
    if {$host_unknown < 1} {
        expect {
            "$hostname#" {
                send "exit\r"
                break
            }
            "Password:" {
                send \003
                expect eof
                close $spawn_id
                puts "Invalid Username or Password - try again..."
            }
        }
    } elseif {$host_unknown > 0} {
        exit 0
    }
}
puts "dropped out of loop"
puts "dropped out of loop"
And now I can go off and do lots of stuff to lots of boxes .....
This works fine when I enter a valid username and password, and my script goes off and does all the other stuff I want, but when I enter an invalid password I get this:-
Fred@Aserver:~$ ./Ex_Test.sh ALL
login as:- MyID
password: spawn ssh MyID@some-box.here.there.co.uk
Password:
Password:
Invalid Username or Password - try again...
login as:- cannot find channel named "exp6"
while executing "expect -re "(.*)\n""
invoked from within "if {[lindex $argv 1] != ""} {
puts "Too many arguments"
puts "Usage is:- Ex_Test.sh host|ALL"
} elseif {[lindex $argv 0] != ""} {
while {1} {
..."
(file "./Ex_Test.sh" line 3)
It's the line "can not find channel named "exp6"" which is really bugging me.
What am I doing wrong? I am reading Exploring Expect (Don Libes) but getting nowhere...
Whenever expect has to wait for some pattern, it saves the spawn id of the process it is expecting from into expect_out(spawn_id).
As per your code, expect's spawn id is generated when it encounters
expect -re "(.*)\n"
When the user types something and presses the Enter key, expect saves that spawn id. If you run expect with debugging enabled, you will see the following in the debugging output:
expect: does "" (spawn_id exp0) match regular expression "(.*)\n"
Let's say the user typed 'Simon'; then the debugging output will be
expect: does "Simon\n" (spawn_id exp0) match regular expression "(.*)\n"? Gate "*\n"? gate=yes re=yes
expect: set expect_out(0,string) "Simon\n"
expect: set expect_out(1,string) "Simon"
expect: set expect_out(spawn_id) "exp0"
expect: set expect_out(buffer) "Simon\n"
As you can see, expect_out(spawn_id) holds the spawn id from which expect has to read values. In this case, the term exp0 refers to standard input.
If the spawn command is used then, as you know, the Tcl variable spawn_id holds a reference to the process handle, which is known as the spawn handle. We can play around with spawn_id by explicitly setting the process handle and saving it for future reference. This is the good part.
As per your code, you are closing the ssh connection when a wrong password is given, with the following code:
close $spawn_id
By taking advantage of spawn_id you are doing this, but what you are missing is setting expect's process handle back to its original reference handle, i.e.
while {1} {
    ### Initial state: nothing present in the spawn_id variable ######
    expect "something here"; #### Now exp0 will be created
    ### some code here ####
    ## Spawning a process now ###
    spawn ssh xyz; ## At this moment, spawn_id is updated
    ### doing some operations ###
    ### closing ssh with some conditions ###
    close $spawn_id
    ## The loop is about to end, and spawn_id still references the ssh process.
    ## If anything is present in it, expect will assume it is the current process
    ## and will try to expect from that process.
}
When the loop executes for the 2nd time, expect will try to read from the spawn_id handle, which is nothing but the closed ssh process, and that is why you are getting the error
can not find channel named "exp6"
Note that "exp6" is nothing but the spawn handle for the ssh process.
Update:
If some process handle is available in spawn_id, then expect will always expect commands from that process only.
Perhaps you can try something like the following to avoid these problems.
Perhaps you can try something like the following to avoid these.
# Some reference variable
set expect_init_spawn_id 0
while {1} {
    if {$expect_init_spawn_id != 0} {
        # When the loop enters from the 2nd iteration onward,
        # spawn_id is explicitly set back to the initial 'exp0' handle
        set spawn_id $expect_init_spawn_id
    }
    expect -re "(.*)\n"
    # Saving the initial spawn id of the expect process
    # (it will have the value 'exp0')
    set expect_init_spawn_id $expect_out(spawn_id)
    spawn ssh xyz
    ## Manipulations here
    # Closing ssh now
    close $spawn_id
}
This is my opinion and it may not be the most efficient approach. You can also apply your own logic to handle these problems.
You simply need to store $spawn_id in a temporary variable before the nested spawn/expect, then set $spawn_id back from that variable afterwards.
Also, get rid of the while {1} loops. They are not needed, because expect behaves like a loop provided you use exp_continue whenever you don't wish to exit. You don't need expect eof, nor do you need close $spawn_id; I don't use them in the following example:
#!/usr/bin/expect
set domain [lindex $argv 0]
set timeout 300
spawn ./certbot-add.sh $domain
expect {
    "*replace the certificate*" {
        send "2\r"
        exp_continue
    }
    "*_acme-challenge*" {
        # Write the captured challenge out to a file
        set f [open output.txt w]
        puts $f $expect_out(buffer)
        close $f
        # Save the certbot session handle before the nested spawn/expect
        set tmp_spawn_id $spawn_id
        spawn ./acme-add.sh $domain
        expect "$ "
        # Restore the certbot session handle
        set spawn_id $tmp_spawn_id
        send "\r"
        exp_continue
    }
    "*certificate expires on*" {
        puts "Certificate Added!"
    }
}

Spawn multiple telnet with tcl and log the output separately

I'm trying to telnet to multiple servers with spawn, and I want to log the output of each in a separate file.
If I use spawn with log_file then everything is logged into the same file, but I want each session logged to a different file. How can I do this?
Expect's logging support (i.e., what the log_file command controls) doesn't let you set different logging destinations for different spawn IDs. This means that the simplest mechanism for doing what you want is to run each of the expect sessions in a separate process, which shouldn't be too hard provided you don't use the interact command. (The idea of needing to interact with multiple remote sessions at once is a bit strange! By the time you've made it sensible by grafting in something like the screen program, you might as well be using separate expect scripts anyway.)
In the simplest case, your outer script can be just:
foreach host {foo.example.com bar.example.com grill.example.com} {
    exec expect myExpectScript.tcl $host >@stdout 2>@stderr &
}
(The >@stdout 2>@stderr & part means "run in the background with stdout and stderr connected to the usual overall destinations".)
Things get quite a bit more complicated if you want to automatically hand information back and forth between the expect sessions. I hope that simple is good enough…
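For illustration, the per-host script could then pick its own log file based on the host it is given. A minimal sketch of myExpectScript.tcl (the login sequence, credentials and prompt patterns are placeholders to adapt):
#!/usr/bin/expect -f
set host [lindex $argv 0]
log_file -noappend "$host.log"   ;# each process writes its own log
spawn telnet $host
expect "login:"
send "myuser\r"
expect "Password:"
send "mypassword\r"
expect "> "
send "show version\r"
expect "> "
send "exit\r"
expect eof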
I have found something useful at this link:
http://www.highwind.se/?p=116
LogScript.tcl
#!/usr/bin/tclsh8.5
package require Expect

proc log_by_trace {array element op} {
    uplevel {
        global logfile
        set file $logfile($expect_out(spawn_id))
        puts -nonewline $file $expect_out(buffer)
    }
}

array set spawns {}
array set logfile {}

# Spawn 1
spawn ./p1.sh
set spawns(one) $spawn_id
set logfile($spawn_id) [open "./log1" w]

# Spawn 2
spawn ./p2.sh
set spawns(two) $spawn_id
set logfile($spawn_id) [open "./log2" w]

trace add variable expect_out(buffer) write log_by_trace

proc flush_logs {} {
    global expect_out
    global spawns
    set timeout 1
    foreach {alias spawn_id} [array get spawns] {
        expect {
            -re ".+" {exp_continue -continue_timer}
            default { }
        }
    }
}
exit -onexit flush_logs

set timeout 5
expect {
    -i $spawns(one) "P1:2" {puts "Spawn1 got 2"; exp_continue}
    -i $spawns(two) "P2:2" {puts "spawn2 got 2"; exp_continue}
}
p1.sh
#!/bin/bash
i=0
while sleep 1; do
    echo P1:$i
    let i++
done
p2.sh
#!/bin/bash
i=0
while sleep 1; do
    echo P2:$i
    let i++
done
It is working perfectly :)

Timeout doesn't work with '-re' flag in expect script

I'm trying to get an expect script to work, and when I use the -re flag (to invoke regular expression parsing), the 'timeout' keyword seems to no longer work. When the following script is run, I get the message 'timed out at step 1', then 'Starting test two', and then it times out but does NOT print 'timed out at step 2' - I just get a new prompt.
Ideas?
#!/usr/bin/expect --
spawn $env(SHELL)
match_max 100000
set timeout 2
send "echo This will print timed out\r"
expect {
    timeout { puts "timed out at step 1" }
    "foo " { puts "it said foo at step 1" }
}
puts "Starting test two\r"
send "echo This will not print timed out\r"
expect -re {
    timeout { puts "timed out at step 2" ; exit }
    "foo " { puts "it said foo at step 2" }
}
Figured it out:
expect {
    timeout { puts "timed out at step 2" ; exit }
    -re "foo " { puts "it said foo at step 2" }
}
Yes, the "-re" flag as it appears in your question will apply to every pattern in the expect command. So the "timeout" pattern becomes "-re timeout", losing its specialness.