It's not clear to me if send_user is the same as puts. Every time I want to send an informative message to the user, I wonder which one I should use. From Expect's man page, it seems like send_user is the same as puts, but then what is send_user used for?
Another difference I just discovered between the two, apart from the newline thing, is that if you're using log_file in your Expect script, statements sent via send_user will make it into the logfile, whereas statements sent with puts do not. If you're automating some Expect scripts which aren't being run on your actual console, that can make a big difference.
The main difference is that puts automatically appends a newline and send_user does not. In this regard, puts -nonewline is more analogous to send_user.
send_user also "inherits" some options from expect's send, such as -s
and -h (check the expect man page for details). See
http://99-bottles-of-beer.net/language-expect-249.html
for a usage of the -h flag.
I cannot speak to how they're implemented at the C-level.
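A quick way to see both differences (the newline handling and the logging behavior mentioned above) is a short script run under expect; the transcript file name is arbitrary:

```tcl
#!/usr/bin/expect -f
# puts appends a newline; send_user does not.
puts "via puts"
send_user "via send_user\n"    ;# the newline must be explicit

# With a transcript enabled, send_user output is logged, puts output is not.
log_file transcript.log
puts "this line is absent from the transcript"
send_user "this line appears in the transcript\n"
log_file                       ;# stop logging
```

After running this, transcript.log contains only the send_user line.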
Related
So I'm getting a strange issue when trying to send an email from my company's local mail server using Tcl. The problem is that I've written code that works, but it stops working as soon as I wrap it with TclApp. I believe I've included all the necessary packages.
Code:
package require smtp
package require mime
package require tls

set body "Hello World, I am sending an email! Hear me roar."

# mime info
set token [mime::initialize -canonical text/plain -string $body]

# mail options
set opts {}

# MAIL SERVER + PORT
lappend opts -servers "my_server"
lappend opts -ports 25

tls::init -tls1 1

lappend opts -username "someEmail@example.com"
lappend opts -password "somePasswordExample"

# SUBJECT + FROM + TO
lappend opts -header [list "Subject" "This is a test e-mail, beware!!"]
lappend opts -header [list "From" "theFromEmail@example.com"]
lappend opts -header [list "To" "theToEmail@yahoo.com"]

if {[catch {
    smtp::sendmessage $token \
        {*}$opts \
        -queue false \
        -atleastone false \
        -usetls false \
        -debug 1
    mime::finalize $token
} msg]} {
    set out [open "error_log.txt" w]
    puts $out $msg
    close $out
}

puts "Hit enter to continue..."
gets stdin
exit
This works when I run it and I successfully get the email sent.
But when I wrap it, it doesn't. Here's the output after wrapping and executing the program:
For whatever reason, wrapping with TclApp makes my program fail to authenticate.
Update: I've decided to try wrapping the script with Freewrap to see whether I get the same problem, and amazingly I do not. The script works if not wrapped, or if wrapped by Freewrap, but not if wrapped by TclApp; is this a bug in TclApp, or am I simply missing something obvious?
It seems that this is an old problem (last post): https://community.activestate.com/forum/wrapping-package-tls151
- Investigate whether the wrapped version has the same .dll dependencies as in the article you linked.
- Check the Tcl Wiki to see whether this is a known issue.
- Are you initializing your environment properly when wrapped (Tcl App Initialization)?
I've found the solution to my problem! Hopefully this answer will help anyone else who runs into the same issue.
I contacted ActiveState and they were able to point me in the right direction - this link to a thread on the ActiveState forums goes over a similar issue.
I decided to run the prefix file I had been using to wrap my programs in TclApp - I am using base-tk8.6-thread-win32-ix86. Running it opens a console/Tcl interpreter, which allowed me to try sourcing my Tcl e-mail script manually. I found that the prefix file/interpreter by itself was completely unaware of the location of any package on my system, and that I needed to lappend the location of the packages I wanted to auto_path. Moreover, I found from the thread that the e-mail script I was using required other dependencies I had not wrapped in - namely sha1, SASL, and otp, along with their own dependencies. Once I added these dependencies, my script worked through the base-tk8.6-thread-win32-ix86 interpreter and subsequently through my wrapped program.
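The auto_path step looks like this; the directory below is a placeholder, so point it at wherever your Tcl package tree actually lives:

```tcl
# Tell the interpreter where to search for packages.
# (Placeholder path; substitute your real teapot/lib directory.)
lappend auto_path "C:/Tcl/lib/teapot/package/win32-ix86/lib"

# With the path in place, the missing dependencies can be resolved:
# package require sha1
# package require SASL
# package require otp
```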
Another clue that these dependencies were necessary: my script would give me a 530: 5.5.1 Authentication Required error. I later learned that SASL fixed this - but I don't know anything about networking, so I wouldn't have guessed it.
The other thing I learned was there is a distinct difference between teacup get and teacup install which is why I wasn't installing Tcl packages correctly and TclApp was unaware of SASL, sha1, or otp.
What I don't understand (and perhaps someone with more in-depth knowledge about these programs can tell me) is how Wish86 knows about a script's package dependencies and where to find them. It wasn't until I tried looking for the packages in TclApp that I realized I didn't even have sha1, SASL, etc. on my system, nor had I included their package require commands at the top of my script, yet Wish86 could execute my script perfectly. The frustrating thing is that it is difficult to know whether you've met all dependency requirements on systems that don't have a magical Wish86 interpreter that just makes everything work (lol).
Toward the end of the thread, there is a discussion of how deep TclApp goes when looking for hard and soft dependencies in projects.
EDIT: Added more information.
I work with an embedded system which I can access through a serial debug port for debugging. I want to use its CLI interface, which can be accessed with telnet localhost in the debug console (even before the system is fully up), through expect. The problem is that the CLI interface kicks me out at random times near startup with Connection closed by foreign host. (this behavior cannot be changed in the system).
That is the background; my question is: is there any method or trick in expect with which I can set a pattern-action pair permanently for all expect commands (in some specific scope)? I would like to set up something like this:
expect "Connection closed by foreign host." { error "cli closed" }
and use it in every expect command in all my Tcl procs that handle CLI stuff. Then I would call my procs with catch from the main program and could handle the disconnection there. If I can't set this pattern-action pair permanently, I have to include it in every expect command, which would be really tedious (or use some kind of state instead of multiple expect commands, which would be even more tedious...).
Any other idea to work around this is welcome too!
There's an expect_before command: the patterns and actions defined in expect_before are "imported" into every subsequent expect command. So, you want:
expect_before "Connection closed by foreign host." { error "cli closed" }
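A self-contained sketch of how that composes with procs and catch; here a spawned shell command stands in for the telnet session, and the proc name is made up for illustration:

```tcl
#!/usr/bin/expect -f
# A spawned echo stands in for "telnet localhost"; the message is the
# one from the question.
spawn sh -c {echo "Connection closed by foreign host."; sleep 1}

# Installed once; checked by every subsequent expect command.
expect_before "Connection closed by foreign host." { error "cli closed" }

proc do_cli_stuff {} {
    # This expect never matches its own pattern, but the expect_before
    # pattern fires as soon as the disconnect message arrives.
    expect "cli-prompt>"
}

if {[catch {do_cli_stuff} msg]} {
    send_user "caught: $msg\n"
}
```

Note that expect_before patterns are tied to the spawn id that is current when the command runs, so install it after the spawn.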
I have an expect script which does the following:
Spawns a.sh -> a.sh waits for user input
Spawns b.sh -> runs and finishes
I now want to send input to a.sh, but I'm having difficulty getting this working.
Below is my expect script
#!/usr/bin/expect
spawn "./a.sh"
set a $spawn_id
expect "enter"
spawn "./b.sh"
expect eof
send -i $a "test\r"
a.sh is
read -p "enter " group
echo $group
echo $group to file > file.txt
and b.sh is
echo i am b
sleep 5
echo xx
Basically, expect works with two main commands, send and expect. If send is used, then in most cases it must be followed by an expect (while the reverse is not mandatory). This is because without it we would miss what is happening in the spawned process: expect will assume that you simply needed to send one string value and are not expecting anything else from the session.
After the following code
send "ian\n"
we have to make expect wait for something. You have used expect eof for the spawned process b.sh; in the same way, it can be added for the a.sh spawned process as well, after the final send.
"Well, then why did interact work?" I hear you ask. They both work in the same manner, except that interact expects some input from the user, as an interactive session. But your shell script is not designed to read further input, and it will eventually quit, since there is not much more code in the script, which is why you are seeing the proper output. An expect command alone (even without eof) can also do the trick.
Have a look at the expect's man page to know more.
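For completeness, here is what the eof-based ending (no interact) looks like; the two helper scripts are recreated inline, based on the listings in the question, so the sketch is self-contained:

```tcl
#!/usr/bin/expect -f
# Recreate the question's a.sh and b.sh for demonstration purposes.
exec sh -c {printf '#!/bin/bash\nread -p "enter " group\necho $group\n' > a.sh}
exec sh -c {printf '#!/bin/bash\necho i am b\n' > b.sh}
exec chmod +x a.sh b.sh

spawn ./a.sh
set a $spawn_id          ;# remember a.sh's spawn id
expect "enter"

spawn ./b.sh             ;# spawn switches spawn_id to b.sh
expect eof               ;# wait for b.sh to finish

set spawn_id $a          ;# talk to a.sh again
send "ian\r"
expect eof               ;# wait for a.sh to exit instead of interact
```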
I've just got this working using
spawn "./a.sh"
set a $spawn_id
expect "enter"
spawn "./b.sh"
expect eof
set spawn_id $a
send "ian\n"
interact
My question now is why do you need interact at the end?
I have a Perl script which submits a bunch of array jobs to SGE. I want all the jobs to be run in parallel to save me time, and the script to wait for them all to finish, then go on to the next processing step, which integrates information from all SGE output files and produces the final output.
In order to send all the jobs into the background and then wait, I use Parallel::ForkManager and a loop:
$fork_manager = new Parallel::ForkManager(@as);
# @as: Max nb of processes to run simultaneously
for $a (@as) {
    $fork_manager->start and next; # Starts the child process
    system "qsub <qsub_options> ./script.plx";
    $fork_manager->finish; # Terminates the child process
}
$fork_manager->wait_all_children;
<next processing step, local>
In order for the "waiting" part to work, however, I have had to add "-sync yes" to the qsub options. But as a side effect, SGE prints the exit code for each task in each array job, and since there are many jobs and the individual tasks are light, it basically renders my shell unusable due to all those interrupting messages while the qsub jobs are running.
How can I get rid of those messages? If anything, I would be interested in checking qsub's exit code for the jobs (so I can check everything went OK before the next step), but not in one exit code per task (I log the tasks' errors via the -e option anyway in case I need them).
The simplest solution would be to redirect the output from qsub somewhere, i.e.
system("qsub <qsub options> ./script.plx >/dev/null 2>&1");
but this masks errors that you might want to see. Alternatively, you can use open() to start the subprocess and read its output, only printing something if the subprocess generates an error.
I do have an alternate solution for you, though. You could submit the jobs to SGE without -sync y, and capture the job id when qsub prints it. Then, turn your summarization and results collection code into a follow on job and submit it with a dependency on the completion of the first jobs. You can submit this final job with -sync y so your calling script waits for it to end. See the documentation for -hold_jid in the qsub man page.
Also, rather than making your calling script decide when to submit the next job (up to your maximum), use SGE's -tc option to specify the maximum number of simultaneous jobs (note that -tc isn't in the man page, but it is in qsub's -help output). This depends on you using a new enough version of SGE to have -tc, of course.
I am having an issue. I have a simple expect script that I call from a PHP script to log in to a Cisco terminal server and clear vty lines after a lab session is over. The logic is simple: if a user is present, they occupy only line vty 0.
The problem occurs if a user is not present: the Cisco router just displays an error stating %connection to clear or something, and expect runs the rest of the script, which is unnecessary, because I am managing a lab of 10 routers and loading configuration files takes a lot of time.
Please tell me how to read the error from the log and end the script if I get that message.
#! /usr/bin/expect -f
spawn telnet 192.168.2.1
set user [lindex $argv 0]
set pass [lindex $argv 1]
expect "Username:"
send "$user\r"
expect "Password:"
send "$pass\r"
expect "*>"
send "enable\r"
expect "*#"
send "clear line vty 0\r"
send "\r"
expect "*#"
send "copy tftp://192.168.2.3 running-config\r"
send "config\r"
send "\r"
expect "*#\r\n"
The error handling code you are asking about is here...
send "clear line vty 0\r"
expect {
"confirm]" { send "\r" }
"% Not allowed" { send "quit\r"; exit }
}
expect "#"
This looks for either [confirm] or % Not allowed to clear current line and makes an appropriate go / no-go decision.
I made a few modifications to your script so I could run it against machines in my lab. I'm not sure how you disabled the router from asking for an enable password, but that may be something to add to your script.
If you wanted to make the script more robust, you could issue a show user before clearing lines, and only clear lines that your script isn't on. I will leave that as an exercise for you to experiment with.
#!/usr/bin/expect -f
spawn telnet 172.16.1.5
set user [lindex $argv 0]
set pass [lindex $argv 1]
expect "Username: "
send "$user\r"
expect "assword: "
send "$pass\r"
expect ">"
send "enable\r"
expect "assword: "
send "$pass\r"
expect "#"
send "clear line vty 0\r"
expect {
"confirm]" { send "\r" }
"% Not allowed" { send "quit\r"; exit }
}
expect "#"
send "copy tftp://192.168.2.3 running-config\r"
expect "Source filename *]? "
send "config\r"
expect "Destination filename *]? "
send "running\r"
expect "#"
exit