I have written a Tcl script that waits for 20 seconds at one point.
Sometimes I have to break out of the waiting (or rather stop it) before it finishes.
I have used:
after 20000
Is there a way to stop the waiting?
If you just use:
after 20000
Then no, there's no way (other than killing the process with Ctrl+C or kill, of course).
If you want an interruptible wait in Tcl, use this:
after 20000 set done 1
vwait done
That will continue to service events and so can be interrupted. Save the token returned by after 20000 set done 1 and you can cancel the timer (e.g., if you want to reschedule it). Otherwise, set the (global) done variable yourself and the wait will finish sooner.
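For example, a minimal sketch of such an interruptible wait (the timer variable name is purely illustrative):
set done 0
set timer [after 20000 {set done 1}]  ;# keep the token so the timer can be cancelled
vwait done                            ;# services events; returns as soon as done is written
# From any event handler that wants to cut the wait short:
#   after cancel $timer
#   set done 1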
In one of my previous projects at work, I had to deal with a similar situation where my scripts wait for N seconds and then continue on, unless the user hits Ctrl+C, in which case the wait is canceled. Below is my script, reduced to the bare essentials to show how it works.
package require Tclx
proc abortTest {} {
after cancel $::afterId
set ::status "Ctrl+C detected"
}
#
# Main
#
puts "start"
set status "Ctrl+C not detected"
# When Ctrl+C is detected, call abortTest
signal trap SIGINT abortTest
# Wait for N seconds
set WAIT_TIME 10000
set afterId [after $WAIT_TIME {set status "Exit normally"}]
puts "Wait for [expr {$WAIT_TIME / 1000}] seconds"
vwait status; # wait until status changed
puts $status
puts "end"
Discussion
The signal command says: if the user hits Ctrl+C, call abortTest.
Notice that in this form, the after command returns right away, but the Tcl interpreter will execute the set status ... command 10 seconds later.
The vwait command waits for the variable status to change its value.
After 10 seconds, if the user has not hit Ctrl+C, the variable status will be set to "Exit normally"; at that point, the vwait command returns.
If the user hits Ctrl+C during that time, status is changed by abortTest and the vwait command returns.
Update
While reducing the original script, I took out the part to cancel the after command. Now I am putting it back in. Notice that the after command returns an ID, which I used in abortTest to cancel the action.
Related
I have the following code:
lassign [chan pipe] chan chanW
fileevent $chan readable [list echo $chan]
proc echo { chan } {
...
}
proc exec { var1 var2 } {
....
puts $chanW "Some output"
....
}
Now, according to the man page, the fileevent handler will be executed when the program idles.
Is it possible to force the fileevent handler to be executed before that? For instance, is it possible to force it to be executed immediately after the channel becomes readable, to somehow give it priority... without using threads :)
Tcl never executes an event handler at “unexpected” points; it only runs them at points where it is asked to do so explicitly or, in some configurations (such as inside wish) when it is doing nothing else. You can introduce an explicit wait for events via two commands:
update
vwait
The update command clears down the current event queue, but does not wait for incoming events (strictly, it does an OS-level wait of length zero). The vwait command will also allow true waiting to happen, waiting until a named Tcl global variable has been written to. (It uses a C-level variable trace to do this, BTW.) Doing either of these will let your code process events before returning. Note that there are a number of other wrappers around this functionality; the geturl command in the http package (in “synchronous” mode) and the tkwait command in the Tk package both do this.
The complication? It's very easy to make your code reenter itself by accident while running the event loop. This can easily end up with you making lots of nested event loop calls, running you out of stack space; don't do that. Instead, prepare for reentrancy issues (a global variable check is one of the easiest ways to do that) so that you don't nest event loops.
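For example, one sketch of such a guard (the variable name is purely illustrative):
proc safeUpdate {} {
    global inEventLoop
    if {[info exists inEventLoop] && $inEventLoop} {
        return    ;# already processing events; refuse to nest another loop
    }
    set inEventLoop 1
    update        ;# drain the event queue once
    set inEventLoop 0
}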
Alternatively, if you're using Tcl 8.6 you can switch your code around to use coroutines. They let you stop the current procedure evaluation and return to the main event loop to wait for a future event before starting execution again: you end up with code that returns at the expected time, but which was suspended for a while first. If you want more information about this approach, please ask another separate question here.
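As a rough illustration of the coroutine approach (Tcl 8.6 only; the procedure names here are made up for the example):
proc waitMs {ms} {
    # Arrange for this coroutine to be resumed later, then hand control back
    after $ms [list [info coroutine]]
    yield
}
proc worker {} {
    puts "before the pause"
    waitMs 2000       ;# suspends here; other events keep being serviced
    puts "after the pause"
}
coroutine myWorker worker
vwait forever         ;# or whatever event loop your program already runs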
I have a log which keeps on updating.
I am running a flow that generates a file. This flow runs in the background and
updates the log saying "[12:23:12:1] \m successfully completed (data_01)".
As soon as I see this message, I use this file for the next flow.
I created a popup saying "wait till the log says successfully completed", to avoid
the script going to the next flow and getting aborted.
But the problem is that each and every time I need to check the log for that message and
press OK in the popup.
Is there any way to capture the message from the updating log?
I tried
set flag 0
while { $flag == 0} {
set fp [open "|tail code.log" r]
set data [ read $fp]
close $fp
set data [ split $data]
if { [ regexp {.*successfully completed.*} $data ]} {
set line $data
set flag 1
} else {
continue
}
}
I will pass this $line to the popup variable, so that instead of saying "wait until
successfully completed" it will say "Successfully completed".
But this is throwing an error about too many open files, and it is also not waiting.
There's a limit on the number of files that can be opened at once by a process, imposed by the OS. Usually, if you are getting close to that limit then you're doing something rather wrong!
So let's back up a little bit.
The simplest way to read a log file continuously is to open a pipe from the program tail with the -f option passed in, so it only reports things added to the file instead of reporting the end each time it is run. Like this:
set myPipeline [open "|tail -f code.log"]
You can then read from this pipeline and, as long as you don't close it, you will only ever read a line once. Exiting the Tcl process will close the pipe. You can either use a blocking gets to read each line, or a fileevent so that you get a callback when a line is available. This latter form is ideal for a GUI.
Blocking form
while {[gets $myPipeline line] >= 0} {
if {[regexp {successfully completed \(([^()]+)\)} $line -> theFlowName]} {
processFlow $theFlowName
}
}
close $myPipeline
Callback form
This assumes that the pipeline is kept in blocking mode; full non-blocking is a little more complex but follows a similar pattern.
fileevent $myPipeline readable [list GetOneLine $myPipeline]
proc GetOneLine {pipe} {
if {[gets $pipe line] < 0} {
# IMPORTANT! Close upon EOF to remove the callback!
close $pipe
} elseif {[regexp {successfully completed \(([^()]+)\)} $line -> theFlowName]} {
processFlow $theFlowName
}
}
Both of these forms call processFlow with the bit of the line extracted from within the parentheses when it appears in the log. That's the part where it becomes not generic Tcl any more…
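For example, processFlow could be as simple as recording the name so that a popup (or a vwait) elsewhere can react to it; this stub is only an illustration:
proc processFlow {flowName} {
    # Remember which flow completed; a vwait or GUI callback can watch this variable
    set ::lastCompletedFlow $flowName
}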
It appears that what you want to do is monitor a file and wait, without hanging your UI, for a particular line to be added to the file. To do this you cannot use asynchronous I/O on the file, as in Tcl files are always readable. Instead you need to poll the file on a timer. In Tcl this means using the after command. So create a command that checks the time the file was last modified and, if it has changed since you last looked, opens the file and searches for your specific data. If the data is present, set some state variable to allow your program to continue with the next step. If not, just schedule another call to your check function using after and a suitable interval.
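A rough sketch of that polling approach (the file name, pattern, and interval are just examples, and it re-reads the whole log each time it changes):
set lastMTime 0
proc checkLog {} {
    global lastMTime flowDone
    if {[file mtime code.log] != $lastMTime} {
        set lastMTime [file mtime code.log]
        set f [open code.log r]
        set data [read $f]
        close $f
        if {[regexp {successfully completed \(([^()]+)\)} $data -> flowName]} {
            set flowDone $flowName   ;# lets a vwait (or the GUI) continue
            return
        }
    }
    after 500 checkLog               ;# check again in half a second
}
checkLog
vwait flowDone                       ;# or just let the GUI event loop run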
You could use a pipe as you have above, but you should use asynchronous I/O to read data from the channel when it becomes available. That means using fileevent.
Okay, so at a high level I am scanning a system for all VISA devices connected to it and having them identify themselves.
The problem is that not all VISA devices support the function to identify themselves, and the only way I know of to find this out is by telling the device to do just that. This forces the ones that cannot identify themselves to rely on the timeout, which has a minimum of 1 second. While waiting on the timeout, my Tcl script and the Wish application freeze until the timeout is complete. With multiple devices this leaves me with an awkward wait time that can be several seconds long, during which I am unable to update the user on what is happening.
Here's my code:
proc ::VISA::Scan {} {
# Open a temporary resource manager
set TemporaryResourceManagerId [::visa::open-default-rm]
# Get addresses for all devices on system
foreach address [::visa::find $TemporaryResourceManagerId "?*"] {
# Create temporary VISA channel
set TemporaryChannel [visa::open $TemporaryResourceManagerId $address]
# Have device identify itself while suppressing errors
if {![catch {puts $TemporaryChannel "*IDN?"}]} {
if {![catch {gets $TemporaryChannel} result]} {
if {![string is space $result]} {
puts $address
puts "$result \n"
}
# Clear any potential errors
puts $TemporaryChannel "*CLS"
}
}
# Destroy temporary channel
close $TemporaryChannel
unset TemporaryChannel
}
# Destroy temporary resource manager
close $TemporaryResourceManagerId
unset TemporaryResourceManagerId
}
I was wondering if there is a way to prevent this on the Tcl side, since I have no way of knowing what types of devices I will be querying. I've tried using "update" and "update idletasks" at several different places in the script, but it just gives me a moment in between freezes.
Any help would be appreciated. Thanks in advance!
The standard way to do this is to use Tcl's event loop by setting the I/O channel to non-blocking and using fileevent or chan event; however, the tclvisa documentation states that fileevent is not supported on visa channels.
So the next best thing is to use non-blocking I/O (which just sets the timeout to 0) and either busyloop reading the channel or reading it after a delay; either of these should be handled with the event loop rather than by sprinkling update around (which has undesirable side effects).
So to busyloop you could do something like this:
proc busyread {v n} {
    # Once the timeout has fired, report an error and stop rescheduling
    if {$::readdone == 1} {
        set ::$n "Error"
        return
    }
    set r [visa::read $v]
    if {$r == ""} {
        # Nothing yet; try again shortly
        after 5 [list busyread $v $n]
    } else {
        set ::$n $r
        set ::readdone 1
    }
}
set f [visa::open ...]
fconfigure $f -blocking 0
after 1000 [list set ::readdone 1]
set ::readdone 0
busyread $f result
vwait ::readdone
# $result will now be either the result, or "Error"
This continuously reschedules the read as long as it keeps coming back empty.
This will need to be restructured a bit to work within a larger GUI program (the vwait and timeouts would need to be done differently), but this shows the basic method.
You have to use after and fileevent to handle timeout asynchronously. It's not that easy, especially in pre-Tcl8.6: you have to split a procedure into a bunch of event handlers, passing all necessary information to them.
Schedule a timeout handler:
proc handleTimeout {channel} {
....
close $channel
.... # or do some other thing,
.... # but don't forget to remove fileevent handler if not closing!
}
....
after 1000 [list handleTimeout $TemporaryChannel]
Make channel non-blocking, install a fileevent handler:
proc tryGetsIDN {channel} {
    if {[gets $channel line] != -1} {
        # We have an answer!
        # Cancel the timeout handler
        after cancel [list handleTimeout $channel]
        ....
    }
}
....
fconfigure $TemporaryChannel -blocking 0
fileevent $TemporaryChannel readable [list tryGetsIDN $TemporaryChannel]
The hardest part: make sure you handle GUI events appropriately, e.g. if there is a "cancel" button to cancel all asynchronous handlers, make sure to close channels and cancel timeout handlers (additional bookkeeping of channels and handlers may be required here).
With Tcl 8.6, you can use coroutines to make your procedure work as a cooperative "background thread": it's easy to implement a "gets with timeout" that yields from a coroutine and reenters it upon completion or timeout. There's no ready-to-use solution out of the box yet, though.
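For instance, a sketch of such a helper, meant to be called from inside a coroutine (the names are illustrative, not a ready-made library):
proc getsWithTimeout {chan ms} {
    set coro [info coroutine]
    set timer [after $ms [list $coro timeout]]
    fileevent $chan readable [list $coro readable]
    set why [yield]                  ;# suspend until the channel is readable or time runs out
    fileevent $chan readable {}
    after cancel $timer
    if {$why eq "timeout"} {
        return -code error "timed out waiting for a reply"
    }
    gets $chan line
    return $line
}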
I actually found a solution on the tclvisa side of my problem. I found a better way to specify the timeout for the channel rather than using the built in tclvisa command which I incorrectly assumed I had to use.
fconfigure $TemporaryChannel -timeout 100
Setting this timeout doesn't completely solve the problem, but it reduces it to the point of obscurity. Thanks for all the responses!
I currently have a GUI that, after some automation (using Expect), allows the user to interact with one of 10 telnet'ed connections. Interaction is done using the following loop:
#After selecting an item from the menu, this allows the user to interact with that process
proc processInteraction {whichVariable id id_list user_id} {
if {$whichVariable == 1} {
global firstDead
set killInteract $firstDead
} elseif {$whichVariable == 2} {
global secondDead
set killInteract $secondDead
}
global killed
set totalOutput ""
set outputText ""
#set killInteract 0
while {$killInteract == 0} {
set initialTrue 0
if {$whichVariable == 1} {
global firstDead
set killInteract $firstDead
} elseif {$whichVariable == 2} {
global secondDead
set killInteract $secondDead
}
puts "$id: $killInteract"
set spawn_id [lindex $id_list $id]
global global_outfile
interact {
-i $spawn_id
eof {
set outputText "\nProcess closed.\n"
lset deadList $id 1
puts $outputText
#disable the button
disableOption $id $numlcp
break
}
-re (.+) {
set outputText $interact_out(0,string)
append totalOutput $outputText
#-- never looks at the following string as a flag
send_user -- $outputText
#puts $killInteract
continue
}
timeout 1 {
puts "CONTINUE"
continue
}
}
}
puts "OUTSIDE"
if {$killInteract} {
puts "really killed in $id"
set killed 1
}
}
When a new process is selected, the previous one should be killed. I previously had it so that when a button was clicked, it just entered this loop again. Eventually I realized that the while loops were never quitting, and after 124 button presses it crashes (stackoverflow =P). They aren't running in the background, but they are on the stack. So I needed a way to kill the loop in the processInteraction function when a new process is started. Here is my last attempt at a solution after many failures:
proc killInteractions {} {
#global killed
global killInteract
global first
global firstDead
global secondDead
global lastAssigned
#First interaction
if {$lastAssigned == 0} {
set firstDead 0
set secondDead 1
set lastAssigned 1
#firstDead was assigned last, kill the first process
} elseif {$lastAssigned == 1} {
set firstDead 1
set secondDead 0
set lastAssigned 2
vwait killed
#secondDead was assigned last, kill the second process
} elseif {$lastAssigned == 2} {
set secondDead 1
set firstDead 0
set lastAssigned 1
vwait killed
}
return $lastAssigned
}
killInteractions is called when a button is pressed. The script hangs on vwait. I know the code seems a bit odd/wonky for handling processes with two variables, but this was a desperate last ditch effort to get this to work.
A dead signal is sent to the correct process (in the form of secondDead or firstDead). I have the timeout value set at 1 second for the interact, so that it is forced to keep checking whether the while loop condition still holds, even while the user is interacting with that telnet'ed session. Once the dead signal is sent, it waits for confirmation that the process has died (through vwait).
The issue is that once the signal is sent, the loop will never realize it should die unless it is given the context to check it. The loop needs to run until it is kicked out by firstDead or secondDead. So there needs to be some form of wait before switching to the next process, allowing the loop in processInteraction for the previous process to have control.
Any help would be greatly appreciated.
Your code seems extremely complicated to me. However, the key problem is that you are running inner event loops (the event loop code is pretty simple-minded, and so is predictably a problem) and building up the C stack with things that are stuck. You don't want that!
Let's start by identifying where those inner event loops are. Firstly, vwait is one of the canonical event loop commands; it runs an event loop until its variable is set (by an event script, presumably). However, it is not the only one. In particular, Expect's interact also runs an event loop. This means that everything can become nested and tangled and… well, you don't want that. (That page talks about update, but it applies to all nested event looping.) Putting an event loop inside your own while is particularly likely to lead to debugging headaches.
The best route to fixing this is to rewrite the code to use continuation-passing style. Instead of writing code with nested event loops, you instead rearrange things so that you have pieces of code that are evaluated on events and which pass such state as is necessary between them without starting a nested event loop. (If you weren't using Expect and were using Tcl 8.6, I'd advise using coroutine to do this, but I don't think that works with Expect currently and it does require a beta version of Tcl that isn't widely deployed yet.)
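A toy sketch of what continuation-passing style looks like with the Tcl event loop (the procedure names are invented for the example):
proc stepOne {} {
    # Do some work, then arrange for the next step to run from the event loop,
    # passing along whatever state it needs
    after 1000 [list stepTwo "state carried forward"]
}
proc stepTwo {state} {
    puts "resumed with: $state"
    # ...schedule further steps here instead of waiting in a nested event loop...
}
stepOne
vwait forever    ;# the one and only event loop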
Alas, everything is made more complicated by the need to interact with the subprocesses. There's no way to interact in the background (nor does it really make that much sense). What you instead need to do is to use exactly one interact in your whole program and to have it switch between spawned connections. You do that by giving the -i option the name of a global variable which holds the current id to interact with, instead of the id directly. (This is an “indirect” spawn id.) I think that the easiest way of making this work is to have a “not connected to anything else” spawn id (e.g., you connect it to cat >/dev/null just to act as a do-nothing) that you make at the start of your script, and then swap in the real connection when it makes sense. The actual things that you currently use interact to watch out for are best done with expect_background (remember to use expect_out instead of interact_out).
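A rough sketch of that indirect spawn id idea (the do-nothing process and the variable name are just placeholders):
# A "connected to nothing" session created once at startup
spawn sh -c "cat > /dev/null"
set ::current_id $spawn_id
# When the user picks a session from the menu, just repoint the variable:
#   set ::current_id $chosenSpawnId
# One single interact for the whole program, given the *name* of the variable:
interact -i current_id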
Your code is rather too long for me to rewrite, but what you should do is to look very carefully at the logic of the eof clause of the interact; it needs to do more than it does at the moment. The code to kill from the GUI should be changed too; it should send a suitable EOF marker to the spawned process(es) to be killed and not wait for the death to be confirmed.
I'm trying to run tclhttpd in a slave interpreter, slightly modified so as to run within a tclkit. The code below "runs" (I can hit http://localhost:8015) but never reaches the puts line at the bottom, because "the server does not return, it enters [vwait forever]". But when I try "the after 0 trick", e.g. prepending "after 0 " to the line "$httpd eval $cmd", the server does not run at all, so I presume "errors have to be handled by bgerror".
However I cannot find good examples of how to use bgerror, plus my research shows that now the convention is to use "interp bgerror". Please see the first couple of examples returned by http://www2.tcl.tk/_/gsearch?S=bgerror; the first link contains the verbiage "fill in useful tricks and examples for using bgerror" but then there are no samples I can discern how to apply, and the second link concludes "I am interested in examples how this is supposed to be used."
package require starkit
starkit::startup
set httpd_args [list]
set httpd [interp create]
$httpd eval "set argc [llength $httpd_args]"
set cmdargv "set argv [list $httpd_args ]"
$httpd eval "set topdir $starkit::topdir"
$httpd eval $cmdargv
set cmd [list source [file join $starkit::topdir bin/httpd.tcl]]
$httpd eval $cmd
puts "if seeing this controlled has returned"
Completely edited based on the OP's comments...
The after 0 trick is the following line:
after 0 $httpd eval $cmd
What this does is tell the interp to add the command in question ($httpd eval $cmd) to the event queue, which means it will run once the event loop is started (or returned to, if it's already started). You can see the reliance on the event loop in the following comment from that page (by Jacob Levy):
I should note that this depends on the event loop being active.
My guess is that you're running a plain Tclsh, which means you never enter the event loop (the Wish shell enters the event loop at the end of the script, the Tcl shell does not). The standard way to enter the event loop is to run the following command once you get to the end of your Tcl code:
# Enter the event loop and stay in it until someone
# sets the "forever" variable to something
vwait forever
That being said, anything you have after the vwait will not run until after the event loop is exited. If you want the httpd to run in parallel to your code, you need to either:
use multiple threads ... which really isn't that hard, or
write your code to be event based ... which requires that you understand event-based programming well enough to prevent pieces of code from being starved of execution time.
Hope that helps.
I don't quite understand the question you are asking. It sounds like your goal is to start up an http server in one interpreter but somehow interact with the main interpreter. Is that right? If so, what does that have to do with bgerror?
Are you aware that even though you are running the server in a separate interpreter, it is not running in a separate thread? That is, you can't (*) interact with the main interpreter while either interpreter is blocked by a vwait.
(*) you can, if your interaction takes the form of Tk widgets that also take advantage of the event loop
As for how to use bgerror, there are a couple of ways that it works. The default mechanism calls the procedure bgerror, which you may define to do whatever you want. It takes a single string (the text of an error message) and does something with it. That something could be to print the error to stdout, show it in a dialog, write it to a file, etc.
As an example, consider this interactive session:
% proc bgerror {s} {puts "hey! I caught an error: $s"}
% # after 30 seconds, throw an error
% after 30000 {error "this is an error"}
after#0
% # after 40 seconds, terminate the event loop
% after 40000 {set ::done 1}
after#1
% # start the event loop
% vwait ::done
hey! I caught an error: this is an error
% # this prompt appears after 40 seconds or so
You can also register your own error handler, as described in the documentation for "interp bgerror". This came along in Tcl 8.5, though it had a bug that wasn't fixed until 8.5.3.
For example:
% set foo [interp create]
interp0
% $foo eval {proc myErrorHandler {args} {puts "myErrorHandler: $args"}}
% $foo bgerror myErrorHandler
myErrorHandler
% # after 30 seconds, throw an error
% $foo eval {after 30000 {error "this is an error"}}
after#0
% # after 40 seconds, terminate the loop
% $foo eval {after 40000 {set ::done 1}}
after#1
% $foo eval {vwait ::done}
myErrorHandler: {this is an error} {-code 1 -level 0 -errorcode NONE -errorinfo {this is an error
while executing
"error "this is an error""
("after" script)} -errorline 1}
% # this prompt appears after 40 seconds or so
Does this help answer your question?
If I've understood correctly what you want to do, your code should look similar to this:
package require Thread
set httpd_id [thread::create -preserved]
thread::send $httpd_id "source [file join $starkit::topdir bin/httpd.tcl]"
In this way you'll have TclHttpd running in a thread, without worrying about the vwait problem.
If you also want to be informed about any errors during the httpd execution, TclHttpd sends all errors to a log file. You can configure the path of the log file by doing:
Log_SetFile "/logs/httpd_log"
You need to have the httpd::log package.
I hope this helps.