I read the following code, but I do not understand how it works:
set accum ""
set timeout 1
expect {
    -re {.+} {
        set accum "${accum}$expect_out(0,string)"
        exp_continue
    }
}
set timeout 10
At the beginning we set accum and timeout, then there is an expect command that tries to match something, and after it we set the timeout to 10. How does the whole code work, and what does it mean?
Until the code times out (1 second after the last match of anything), any time it matches something (which is any sequence of characters — possibly excluding newline — because of -re {.+}) it appends it to the accum variable and restarts expecting something (the exp_continue is indeed magic).
It would be more efficient to use append accum $expect_out(0,string), but the way it is done isn't wrong.
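For instance, the same loop rewritten with append (a minimal sketch; the behavior is otherwise identical):

set accum ""
set timeout 1
expect {
    -re {.+} {
        # append extends the variable in place instead of rebuilding the string
        append accum $expect_out(0,string)
        exp_continue
    }
}
set timeout 10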
I wrote 2 scripts to do something like this:
#script1, to dump info:
proc script1 {} {
    puts $file "set a 123"
    puts $file "set b 456"
    .....
}
(The file I dump is 8 GB)
#And use script2 to source it and categorize the data:
while { [gets $file_wrtie_out_by_script1 line] != 1 } {
    eval $line
}
close $file_wrtie_out_by_script1
Do the job....
return
In this case, the script hangs at the return. How do I solve this issue? I've been stuck for 3+ days, thanks.
Update:
Thanks to Colin, I now use source instead of eval, but even after removing the "Do the job...", keeping just the return, it still hangs.
The gets command will return the number of characters in the line that it just read from the file channel.
When all the lines of the file have been read, then gets will return -1.
Your problem is that you have a while loop that never ends. Your while loop will terminate only when gets returns 1, i.e. when it reads a one-character line. You need to change the condition to -1 for the while loop to terminate at end of file.
I agree with the comment from Colin that you should just use source instead of eval for each line. Using eval line-by-line will fail if you have a multi-line command (but that might not be the case in your example).
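For instance, a minimal sketch of the corrected reading loop (the channel variable and file name are illustrative); gets returns -1 only at end of file, so that is the value to test against:

set fh [open "dump.tcl" r]
while { [gets $fh line] != -1 } {
    eval $line ;# evaluate one dumped command per line
}
close $fh

Or, simpler and robust against multi-line commands, let Tcl read and evaluate the whole file at once:

source "dump.tcl"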
I am trying to make a script to transfer a file to another device. Since I cannot account for every error that may occur, I am trying to make an if-all-else-fails situation:
spawn scp filename login@ip:filename
expect "word:"
send "password"
expect {
    "100" {
        puts "success"
    }
    "\*" {
        puts "Failed"
    }
}
This always returns a Failed message and does not even transfer the file, whereas this piece of code:
spawn scp filename login@ip:filename
expect "word:"
send "password"
expect "100"
puts "success"
shows the transfer of the file and prints a success message.
I can't understand what is wrong with the if-expect statement in the first piece of code.
The problem is because of \*. The backslash is translated by Tcl, turning \* into a bare *, which is then passed to expect as
expect *
As you know, * matches anything. This is like saying, "I don't care what's in the input buffer. Throw it away." This pattern always matches, even if nothing is there. Remember that * matches anything, and the empty string is anything! As a corollary of this behavior, this command always returns immediately. It never waits for new data to arrive. It does not have to since it matches everything.
I don't know why you used *. If your intention was to match a literal asterisk, then use \\*.
The string \\* is translated by Tcl to \*. The pattern matcher then interprets the \* as a request to match a literal *.
expect "*" ;# matches * and? and X and abc
expect "\*" ;# matches * and? and X and abc
expect "\\*" ;# matches * but not? or X or abc
Just remember two rules:
Tcl translates backslash sequences.
The pattern matcher treats backslashed characters as literals.
Note: Apart from the question, one observation. You are referring to your expect block as an if-else block. It is not the same as an if-else block.
The reason is that in a traditional if-else block, we know for sure that at least one branch will be executed. In expect, that is not the case. It is more like multiple independent if blocks.
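As a hedged sketch of one way to restructure the first piece of code: match the success pattern first, and let timeout/eof act as the catch-all instead of a bare * (the "100%" progress string and the host details are assumptions, not tested here):

spawn scp filename login@ip:filename
expect "word:"
send "password\r"
expect {
    "100%" {
        puts "success"
    }
    timeout {
        puts "Failed"
    }
    eof {
        puts "Failed"
    }
}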
I've written an expect function to get the output of a command, and my code is as below:
proc do_cmd {cmd id} {
    set spawn_id $id
    send "$cmd\r"
    expect "$cmd\r"
    expect {
        -re "\n(.*)\r\n" {return $expect_out(1,string)}
        default {exit 1}
    }
}
If I call the function just once it works fine and returns what I want, but if I call it again immediately without a break, it returns something unwanted.
# test case 1
set ret [do_cmd $mycmd $spawn_id]
puts "$mycmd returns $ret" # the return value is ok
# test case 2
set ret [do_cmd $mycmd $spawn_id]
set ret [do_cmd $mycmd $spawn_id]
puts "$mycmd returns $ret" # the return value is not something I want
I used 'exp_internal 1' to debug it and found that expect_out in the second call still holds the previous output, which caused the match problem. So how can I clean up the expect_out buffer (I tried setting it to an empty string, but that doesn't work), or is there anything else I can do to avoid this problem? Thanks in advance.
Don Libes's suggestion for your scenario is as follows,
Sometimes it is even useful to say:
expect *
Here the * matches anything. This is like saying, "I don't care what's
in the input buffer. Throw it away." This pattern always matches, even
if nothing is there. Remember that * matches anything, and the empty
string is anything! As a corollary of this behavior, this command
always returns immediately. It never waits for new data to arrive. It
does not have to since it matches everything.
Reference: Exploring Expect
In this case, after your required match, save the match to a variable, then simply add expect * at the end. This will empty the buffer. Your code can be altered as below.
proc do_cmd {cmd id} {
    set spawn_id $id
    send "$cmd\r"
    # Looks like you are waiting for a particular command to arrive
    expect "$cmd\r"
    # Then one more expect, saving the submatch to the variable 'result'
    expect -re "\n(.*)\r\n" {set result $expect_out(1,string)}
    # Causing the buffer to clear; it will return quickly
    expect *
    return $result
}
Apart from this, there is one more way: unsetting the expect_out(buffer) content itself, which will remove the 'buffer' index from the expect_out array. It can be depicted as
unset expect_out(buffer)
When the next match happens, the expect_out array will have its 'buffer' index updated, and we will have a fresh expect_out(buffer) value. Replace the expect * with the above code if you prefer this way.
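For instance, the same do_cmd sketch as above with the unset in place of the expect *:

proc do_cmd {cmd id} {
    set spawn_id $id
    send "$cmd\r"
    expect "$cmd\r"
    expect -re "\n(.*)\r\n" {set result $expect_out(1,string)}
    # Drop the stale buffer; the index reappears on the next match
    unset expect_out(buffer)
    return $result
}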
This is quite a workaround to get what we want, actually. You can go ahead with either approach. The choice is yours. :)
I'm looking for a way to mark a pause between each entry to let the script check each entry before going further.
For example, I have this simple code:
for {set i 0} {$i<5} {incr i} {
    set x [gets stdin]
    if {[string is integer -strict $x]} {
        puts "It's OK"
    } else {
        puts "It's not OK"
    }
}
With this code, if I enter the entries manually one by one, the script has the time to check each entry. Here is the output:
5
It's OK
dd
It's not OK
kk
It's not OK
55
It's OK
99
It's OK
But now if I do a copy/paste of:
5
dd
kk
55
99
here is now the output:
5
dd
kk
55
99
It's OK
It's not OK
It's not OK
It's OK
It's OK
Is there a way to give the script enough time after each entry to check it before going on to the next entry?
Thank you.
This is surprisingly hard to do. Here's why: the pasted text is actually handled by the OS (it's part of the terminal emulation) before it gets into Tcl at all. While there are some things you can do (typically by calling exec /bin/stty with the right options), they don't really help all that much. For example, you can turn off echoing of the values and process the keystrokes exactly as they arrive (that's the -echo and raw options), but that leaves you having to do a lot of work to pretend that things are still in cooked mode (-raw), as that's what provides normal terminal input. It's a lot of work.
Theoretically, a library like readline would help: they already do the evil stty hacking for you. Except that in your specific case, they won't help as the strict interleaving model you want isn't one that is a common-enough requirement.
What I'd actually do in your position is rewrite the output so that it says what input is being checked each time, as well as the result ("5" is OK), as then I could take the values to parse from a file and still be able to figure out what's going on without lots of fuss.
Either add a puts $x in there, or change the messages:
for {set i 0} {$i<5} {incr i} {
    set x [gets stdin]
    set ok [string is integer -strict $x]
    puts [format {"%s" %s OK} $x [expr {$ok ? "is" : "is not"}]]
}
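With that change, pasting all five values at once still yields output that ties each verdict to its input:

"5" is OK
"dd" is not OK
"kk" is not OK
"55" is OK
"99" is OK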
I currently have a GUI that, after some automation (using expect), allows the user to interact with one of 10 telnet'ed connections. Interaction is done using the following loop:
#After selecting an item from the menu, this allows the user to interact with that process
proc processInteraction {whichVariable id id_list user_id} {
    if {$whichVariable == 1} {
        global firstDead
        set killInteract $firstDead
    } elseif {$whichVariable == 2} {
        global secondDead
        set killInteract $secondDead
    }
    global killed
    set totalOutput ""
    set outputText ""
    #set killInteract 0
    while {$killInteract == 0} {
        set initialTrue 0
        if {$whichVariable == 1} {
            global firstDead
            set killInteract $firstDead
        } elseif {$whichVariable == 2} {
            global secondDead
            set killInteract $secondDead
        }
        puts "$id: $killInteract"
        set spawn_id [lindex $id_list $id]
        global global_outfile
        interact {
            -i $spawn_id
            eof {
                set outputText "\nProcess closed.\n"
                lset deadList $id 1
                puts $outputText
                #disable the button
                disableOption $id $numlcp
                break
            }
            -re (.+) {
                set outputText $interact_out(0,string)
                append totalOutput $outputText
                #-- never looks at the following string as a flag
                send_user -- $outputText
                #puts $killInteract
                continue
            }
            timeout 1 {
                puts "CONTINUE"
                continue
            }
        }
    }
    puts "OUTSIDE"
    if {$killInteract} {
        puts "really killed in $id"
        set killed 1
    }
}
When a new process is selected, the previous one should be killed. I previously had it so that if a button was clicked, it just entered this loop again. Eventually I realized that the while loops were never quitting, and after 124 button presses it crashes (stack overflow =P). They aren't running in the background, but they are on the stack. So I needed a way to kill the loop in the processInteraction function when a new process is started. Here is my last attempt at a solution after many failures:
proc killInteractions {} {
    #global killed
    global killInteract
    global first
    global firstDead
    global secondDead
    global lastAssigned
    #First interaction
    if {$lastAssigned == 0} {
        set firstDead 0
        set secondDead 1
        set lastAssigned 1
    #firstDead was assigned last, kill the first process
    } elseif {$lastAssigned == 1} {
        set firstDead 1
        set secondDead 0
        set lastAssigned 2
        vwait killed
    #secondDead was assigned last, kill the second process
    } elseif {$lastAssigned == 2} {
        set secondDead 1
        set firstDead 0
        set lastAssigned 1
        vwait killed
    }
    return $lastAssigned
}
killInteractions is called when a button is pressed. The script hangs on vwait. I know the code seems a bit odd/wonky for handling processes with two variables, but this was a desperate last-ditch effort to get this to work.
A dead signal is sent to the correct process (in the form of secondDead or firstDead). I have the timeout value set at 1 second for the interact, so that it is forced to keep checking whether the while loop condition holds, even while the user is interacting with that telnet'ed session. Once the dead signal is sent, it waits for confirmation that the process has died (through vwait).
The issue is that once the signal is sent, the loop will never realize it should die unless it is given the context to check it. The loop needs to run until it is kicked out by firstDead or secondDead. So there needs to be some form of wait before switching to the next process, allowing the loop in processInteraction for the previous process to have control.
Any help would be greatly appreciated.
Your code seems extremely complicated to me. However, the key problem is that you are running inner event loops (the event loop code is pretty simple-minded, and so is predictably a problem) and building up the C stack with things that are stuck. You don't want that!
Let's start by identifying where those inner event loops are. Firstly, vwait is one of the canonical event loop commands; it runs an event loop until its variable is set (by an event script, presumably). However, it is not the only one. In particular, Expect's interact also runs an event loop. This means that everything can become nested and tangled and… well, you don't want that. (The usual warnings about update apply to all nested event looping.) Putting an event loop inside your own while is particularly likely to lead to debugging headaches.
The best route to fixing this is to rewrite the code to use continuation-passing style. Instead of writing code with nested event loops, you instead rearrange things so that you have pieces of code that are evaluated on events and which pass such state as is necessary between them without starting a nested event loop. (If you weren't using Expect and were using Tcl 8.6, I'd advise using coroutine to do this, but I don't think that works with Expect currently and it does require a beta version of Tcl that isn't widely deployed yet.)
Alas, everything is made more complicated by the need to interact with the subprocesses. There's no way to interact in the background (nor does it really make that much sense). What you instead need to do is to use exactly one interact in your whole program and to have it switch between spawned connections. You do that by giving the -i option the name of a global variable which holds the current id to interact with, instead of the id directly. (This is an “indirect” spawn id.) I think that the easiest way of making this work is to have a “not connected to anything else” spawn id (e.g., you connect it to cat >/dev/null just to act as a do-nothing) that you make at the start of your script, and then swap in the real connection when it makes sense. The actual things that you currently use interact to watch out for are best done with expect_background (remember to use expect_out instead of interact_out).
Your code is rather too long for me to rewrite, but what you should do is to look very carefully at the logic of the eof clause of the interact; it needs to do more than it does at the moment. The code to kill from the GUI should be changed too; it should send a suitable EOF marker to the spawned process(es) to be killed and not wait for the death to be confirmed.
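To make the indirect spawn id idea concrete, here is a hedged sketch; the names idle, current_spawn_id, and conn are illustrative, and the expect_background pattern is only an example:

# One do-nothing spawn, created at startup, to hold interact
# when no real session is selected
spawn sh -c "cat > /dev/null"
set idle $spawn_id
set current_spawn_id $idle

# Watch a real connection in the background ($conn is one of the
# telnet spawn ids); note that expect_background fills expect_out,
# not interact_out
expect_background -i $conn -re {.+} {
    append totalOutput $expect_out(0,string)
}

# The single interact for the whole program; giving -i the *name* of
# a global variable (not its value) makes the spawn id indirect, so
# it can be swapped while interact is running
interact -i current_spawn_id

# Elsewhere (e.g., in the button callback that selects a session):
# set current_spawn_id [lindex $id_list $id]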