Expect flushing buffer - tcl

I am writing a Tcl/Expect script to check for a string in the output of an event and, if found, do something. Below is the code I have:
proc cli_detect_event {cmd value} {
    cli_send "$cmd"
    expect -timeout 3 $value {
    } timeout fail
}
So when I send $cmd I get an event which should hopefully match $value. Is there a way to prevent the contents of expect_out(buffer) from being thrown away when expect is used again after this proc, so that I could match on the same output from the command I sent?

The expect buffer is associated with a spawn id, so to ensure that the right expect_out(buffer) is used you can just pass in the spawn id:
proc cli_detect_event {cmd value spawnId} {
    cli_send "$cmd"
    expect -i $spawnId -timeout 3 $value {
    } timeout fail
}
This should fix your issue. The only thing is that you need to ensure you save the spawn_id when you spawn a process.
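For instance (a minimal sketch; the ssh target, command, and event strings are hypothetical):

    # Hypothetical session: save the spawn id right after spawning
    spawn ssh admin@device
    set cliId $spawn_id

    cli_detect_event "show log" "LINK-UP" $cliId

    # A later expect against the same id searches what is left
    # in that session's buffer
    expect -i $cliId -timeout 3 "LINK-DOWN"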

AFAIK no.
If the timeout occurred, then the buffer can still be searched by the next expect clause. But if $value matched, then everything up to and including the match itself is removed from the buffer (and printed to the user).
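A small sketch illustrating that behaviour (the spawned command and its output are just an example):

    spawn sh -c {echo "foo bar baz"; sleep 5}
    expect -timeout 2 "nomatch"   ;# times out; "foo bar baz" stays in the buffer
    expect "bar"                  ;# still matches; "foo bar" is now consumed
    expect "baz"                  ;# only the remainder is left to match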

How to get responses in expect (tcl)

I am trying to query bluetoothctl using expect (tcl), but I cannot seem to get the bluetoothctl responses saved to a variable for processing with tcl.
For example:
spawn bluetoothctl
exp_send "scan on\n"
expect {
    -re {*NEW*} {
        set new $expect_out(0,string)
        puts "scan - found $new"
        exp_continue
    }
    timeout {
        exp_send "scan off\n"
        exp_send "quit\n"
        close
        wait
        puts "EXPECT timed out"
    }
}
The result of the above is along the lines of:
[bluetooth]# scan on
Discovery started
[CHG] Controller 10:08:B1:57:35:62 Discovering: yes
[NEW] Device EB:06:EF:34:04:B7 MPOW-059
[bluetooth]#
EXPECT timed out
So nothing is output until expect is closed. I have been trying this all day with different combinations, but I am stuck. Any help would be appreciated. Thanks
Edit: changed the regex to (.NEW.) and that works. So now I get:
[bluetooth]# scan on
Discovery started
[CHG] Controller 10:08:B1:57:35:62 Discovering: yes
[NEW] Device EB:06:EF:34:04:B7 MPOW-059
[bluetooth]# scan - found scan on
Agent registered
[bluetooth]# scan on
Discovery started
[CHG] Controller 10:08:B1:57:35:62 Discovering: yes
[NEW
which is everything except the bit that I wanted to retrieve, viz:
[NEW] Device EB:06:EF:34:04:B7 MPOW-059
That regular expression looks syntactically wrong. If you did {.*NEW.*} then it might work. Assuming that those three letters are actually being output by bluetoothctl with no control characters mixed in. (It'd be weird to do that, but some code is weird…)
Apart from that, have you tried the diagnostic mode for expect? Pass the -d flag to the expect program when you start it to get lots of output about what it is really seeing and looking for.
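For example, if the script were saved as bt.exp (a hypothetical name):

    expect -d bt.exp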
So the answer appears to be:
1. The expect_out(buffer) is cleared by a puts statement.
2. Find all the possible responses expected, making sure that each expected pattern specifies the whole line.
3. Save the buffer in a variable if required.
4. Issue a puts statement to clear the buffer.
So:
expect {
    "Hello" {
        puts $expect_out(buffer)
        exp_continue
    }
    -re {(How.*)} {
        set answer $expect_out(buffer)
        if {$answer eq "How are you"} {
            exp_send "Well thank you"
        }
    }
}
or, in the example above:
expect {
    "Discovery started" {
        puts $expect_out(buffer)
        exp_continue
    }
    -re {(.CHG.*)} {
        puts $expect_out(buffer)
        exp_continue
    }
    -re {(.NEW.*)} {
        set new $expect_out(buffer)
    }
}

Tcl/Expect: How to append/save "expect_out(buffer)" into a file continuously

My TCL/Expect script:
foreach data $test url $urls {
    send "test app website 1\r"
    expect "*#"
    send "commit \r"
    expect "*#"
    after 2000
    exec cmd.exe /c start iexplore.exe $url
    after 3000
    exec "C:/WINDOWS/System32/taskkill.exe" /IM IEXPLORE.EXE /F
    send "show application stats $data\r"
    expect "*#"
    expect "?" {
        puts [open urllist.txt w] $expect_out(buffer)
    }
}
The above script works fine except for storing the output of $expect_out(buffer) in a file.
I want to write the $expect_out(buffer) output to the "urllist.txt" file continuously.
Please suggest a way to achieve this.
Thanks in advance.
The following trace procedure writes the value of expect_out(buffer) to a file specific to the spawn id.
proc log_by_tracing {array element op} {
    uplevel {
        global logfile
        set file $logfile($expect_out(spawn_id))
        puts -nonewline $file $expect_out(buffer)
    }
}
The association between the spawn id and each log file is made in the array logfile, which maps each spawn id to its log file channel. Such an association could be made with the following code when each process is spawned.
spawn <some_app_name_here>
set logfile($spawn_id) [open exp_buffer.log w]
The trace has to be added in the code as
trace variable expect_out(buffer) w log_by_tracing
Internally, the expect command saves the spawn_id element of expect_out after the X,string elements but before the buffer element in the expect_out array. For this reason, the trace must be triggered by the buffer element rather than the spawn_id element.
Note: if you are not dealing with multiple spawned processes (you use only one spawned process, or none at all), there is a simpler way of doing the same thing.
Consider the following example.
proc log_by_tracing {array element op} {
    uplevel {
        puts -nonewline $file $expect_out(buffer)
    }
}

set file [open myfile.log w]
trace variable expect_out(buffer) w log_by_tracing

set timeout 60
expect {
    quit { exit 1 }
    timeout { exp_continue }
}
If you run the code, the program will keep running until you type 'quit', and whatever you type in the console will be recorded in the file named 'myfile.log'.
You can simply add the proc log_by_tracing and the trace statement to your code. Remember that with this simple approach, only one spawned process's expect_out(buffer) can be saved.
References: trace, uplevel & Exploring Expect

Why is the status of the child process non-zero?

Consider this code:
set status [catch {eval exec $Executable $options | grep "(ERROR_|WARNING)*" >@ stdout} errorMessage]
if {$status != 0} {
    return -code error ""
}
In case of errors in the child process, they are printed to stdout. But even if there are no errors in the child process, the status value is still non-zero. How can I avoid this?
Also, is there some way to use fileutil::grep instead of shell grep?
There's no connection between writing something to any file descriptor (including the one connected to the standard error stream) and returning a non-zero exit code; these concepts are completely separate as far as the OS is concerned. A process is free to perform no I/O at all and return a non-zero exit code (a somewhat common case for Unix daemons, which log everything, including errors, through syslog), or to write something to its standard error stream and return zero when exiting, which is a common case for software that writes valuable data to stdout and provides diagnostic messages, when requested, on stderr.
So, first verify with a plain shell that your process writes nothing to its standard error and still exits with a non-zero exit code:
$ that_process --its --command-line-options and arguments if any >/dev/null
$ echo $?
(the process should print nothing, and echo $? should print a non-zero number).
If that turns out to be the case, and you're sure the process does not think something is wrong, you'll have to work around it using catch and processing the extended error information it returns: ignore the case of the process exiting with the known exit code and propagate every other error.
Basically:
set rc [catch {exec ...} out]
if {$rc != 0} {
    global errorCode errorInfo
    if {[lindex $errorCode 0] ne "CHILDSTATUS"} {
        # The error has nothing to do with a non-zero process exit code
        # (for instance, the program wasn't found or the current user
        # did not have the necessary permissions etc), so pass it up:
        return -code $rc -errorcode $errorCode -errorinfo $errorInfo $out
    }
    set exitcode [lindex $errorCode 2]
    if {$exitcode != $known_exit_code} {
        # Unknown exit code, propagate the error:
        return -code $rc -errorcode $errorCode -errorinfo $errorInfo $out
    }
    # OK, do nothing as it has been a known exit code...
}
CHILDSTATUS (and the errorCode global variable in general) is described in the tclvars manual page.
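As a quick illustration of its shape (a sketch assuming a Unix system, where the standard false utility always exits with status 1):

    catch {exec false}
    puts $errorCode   ;# prints something like: CHILDSTATUS 12345 1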

How to search for multiple patterns stored in a list until all items are found or a set amount of time has passed

I'm making a simple expect script that will monitor the output of tcpdump for a list of multicast addresses. I want to know if packets are received or not from each multicast address in the list before expect times out.
I have a working solution, but it is inefficient and I believe I'm not utilizing the full power of expect and tcl. Anyway here is my current script:
set multicast_list {225.0.0.1 225.0.0.2 225.0.0.3}
send "tcpdump -i ixp1\r"
# If tcpdump does not start, unzip it and run it again
expect {
    "tcpdump: listening on ixp1" {}
    "sh: tcpdump: not found" {
        send "gunzip /usr/sbin/tcpdump.gz\r"
        expect "# "
        send "tcpdump -i ixp1\r"
        exp_continue
    }
}
# Set timeout to the number of seconds expect will check for ip addresses
set timeout 30
set found [list]
set not_found [list]
foreach ip $multicast_list {
    expect {
        "> $ip" { lappend found "$ip" }
        timeout { lappend not_found "$ip" }
    }
}
set timeout 5
# Send ^c to stop tcpdump
send -- "\003"
expect "# "
So as you can see, the script looks for each IP address one at a time; if the IP is seen, it is added to the list of found addresses, and if expect times out, the address is added to the not_found list before moving on to the next address.
Now back to my question: is there a way to monitor tcpdump for all the IP addresses simultaneously over a given amount of time? If an address is found, I want to add it to the list of found addresses and ideally stop expecting it (this may not be possible, I'm not sure). The key is that I need the script to monitor for all the IPs in the list in parallel. I can't hard-code each address because they will be different each time, and the number of addresses I am looking for will also vary. I could really use some help from an expect guru lol.
Thank You!
That's an interesting problem. The easiest way is probably to do runtime generation of the core of the expect script. Fortunately, Tcl's very good at that sort of thing. (Note: I'm assuming that IP addresses are all IPv4 addresses and consist of just numbers and periods; if it was a general string being inserted, I'd have to be a little more careful.)
set timeout 30
set found [list]
set not_found [list]
# Generate the timeout clause as a normal literal
set expbody {
    timeout {
        set not_found [array names waiting]
        unset waiting
    }
}
foreach ip $multicast_list {
    set waiting($ip) "dummy"
    # Generate the per-ip clause as a multi-line string; beware a few backslashes
    append expbody "\"> $ip\" {
        lappend found $ip
        unset waiting($ip)
        if {\[array size waiting\]} exp_continue
    }\n"
}
# Feed into expect; it's none-the-wiser that it was runtime-generated
expect $expbody
set timeout 5
# Send ^c to stop tcpdump
send -- "\003"
expect "# "
You might want to puts $expbody the first few times, just so you can be sure that it is doing the right thing.
Here is my finished script. It uses the same code as Donal's solution, but I added a few checks to handle some issues that weren't accounted for.
set multicast_list {225.0.0.1 225.0.0.2 225.0.0.3}
set tcpdump_timeout 10
spawn /bin/bash
expect "] "
# Create the runtime-generated expbody to use later
# Generate the timeout clause as a normal literal
set expbody {
    timeout {
        set not_found [array names waiting]
        unset waiting
    }
}
foreach ip $multicast_list {
    set waiting($ip) "dummy"
    # Generate the per-ip clause as a multi-line string; beware a few backslashes
    append expbody "\"> $ip\" {
        set currentTime \[clock seconds\]
        if { \$currentTime < \$endTime } {
            if { \[info exists waiting($ip)\] } {
                lappend found $ip
                unset waiting($ip)
            }
            if {\[array size waiting\]} exp_continue
        }
    }\n"
}
# Set expect timeout and create empty lists for tcpdump results
set timeout $tcpdump_timeout
set found [list]
set not_found [list]
# Start tcpdump
send "tcpdump -i ixp1\r"
expect "tcpdump: listening on ixp1"
# Get the time to stop tcpdump
set endTime [expr {[clock seconds] + $tcpdump_timeout}]
# Feed expbody into expect; it's none-the-wiser that it was runtime-generated
expect $expbody
# Collect any leftovers; if the timeout clause already fired it has done
# this and unset the waiting array, so only touch it if it still exists
if {[info exists waiting]} {
    set not_found [array names waiting]
    unset waiting
}
# Send ^c to stop tcpdump
send -- "\003"
expect "# "

One interpreter/thread per connection?

I want to write a server where people log in, send/type some commands, and log out. Many people may be connected at the same time, but I don't want to keep a lot of state variables for each person, like "is sending name", "is sending password", "is in the second stage of the upload command"... It would be much easier to run one invocation of this script for each incoming connection:
puts -nonewline $out "login: "
gets $in login ;# check for EOF
puts -nonewline $out "password: "
gets $in password ;# check for EOF
while {[gets $in command] >= 0} {
    switch -- $command {
        ...
    }
}
Would memory and speed be OK with creating one interpreter per connection, even if there's about 50 connections? Or is this what you can do with threads?
A little bit of experimentation (watching an interactive session with system tools) indicates that each Tcl interpreter within a Tcl application process, with no additional user commands, takes somewhere between 300kB and 350kB. User commands and scripts are extra on top of that, as are stack frames (necessary to run anything in an interpreter). Multiplying up, you get maybe 17MB for 50 interpreter contexts, which any modern computer will handle without skipping a beat. Mind you, interpreters don't allow for simultaneous execution.
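A minimal sketch of what interpreter-per-connection might look like (the handler names are hypothetical, and all interpreters share one thread, so the per-connection code must not block):

    proc accept {sock addr port} {
        set child [interp create]
        # Make the connection's channel visible inside the child interpreter
        interp share {} $sock $child
        $child eval [list set sock $sock]
        $child eval {
            fconfigure $sock -buffering line
            puts -nonewline $sock "login: "
            flush $sock
            # ... per-connection state lives in this interpreter ...
        }
    }
    socket -server accept 12345
    vwait forever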
Threads are heavier weight, as Tcl's thread model has each thread having its own master interpreter (and in fact all interpreters are strictly bound to a single thread, a technique used to greatly reduce the amount of global locks in Tcl's implementation). Because of this, the recommended number of threads will depend massively on the number of available CPUs in your deployment hardware and the degree to which your code is CPU bound as opposed to IO bound.
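With the Thread extension, the same shape might look like this (a sketch; handle_connection is a hypothetical proc defined in each worker):

    package require Thread

    proc accept {sock addr port} {
        # Each worker thread gets its own master interpreter and event loop
        set tid [thread::create {
            proc handle_connection {sock} {
                fconfigure $sock -buffering line
                puts -nonewline $sock "login: "
                flush $sock
                gets $sock login
                # ... rest of the dialogue; blocking here only stalls this thread ...
                close $sock
            }
            thread::wait
        }]
        # Move the channel into the worker thread, then hand off control
        thread::transfer $tid $sock
        thread::send -async $tid [list handle_connection $sock]
    }
    socket -server accept 12345
    vwait forever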
If you can use Tcl 8.6 (8.6.0 is tagged for release in the repository as I write this, but not shipped) then you can use coroutines to model the connection state. They're much lighter weight than an interpreter, and can be used to do a sort of cooperative multitasking:
# Your code, with [co_gets] (defined below) instead of [gets]
proc interaction_body {in out} {
    try {
        puts -nonewline $out "login: "
        co_gets $in login ;# check for EOF
        puts -nonewline $out "password: "
        co_gets $in password ;# check for EOF
        if {![check_login $login $password]} {
            # Login failed; go away...
            return
        }
        while {[co_gets $in command] >= 0} {
            switch -- $command {
                ...
            }
        }
    } finally {
        close $in
    }
}

# A coroutine-aware [gets] equivalent. Doesn't handle the full [gets] syntax
# because I'm lazy and only wrote the critical bits.
proc co_gets {channel varName} {
    upvar 1 $varName var
    fileevent $channel readable [info coroutine]
    while 1 {
        set n [gets $channel var]
        if {$n >= 0 || ![fblocked $channel]} {
            fileevent $channel readable {}
            return $n
        }
        yield
    }
}
# Create the coroutine wrapper and set up the channels
proc interaction {sock addr port} {
    # Log connection by ${addr}:${port} here?
    fconfigure $sock -blocking 0 -buffering none
    coroutine interaction_$sock interaction_body $sock $sock
}

# Usual tricks for running a server in Tcl
socket -server interaction 12345; # Hey, that port has the same number as my luggage!
vwait forever
This isn't suitable if you need to do CPU-intensive processing, and you need to be careful about securing logins (consider using the tls package to secure the connection with SSL).
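For the SSL part, the tls package can stand in for [socket] directly (a sketch; the certificate file names are hypothetical):

    package require tls
    tls::socket -server interaction \
        -certfile server.pem -keyfile server.key 12345
    vwait forever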