Fileevent executed before proc finishes - tcl

I have the following code ...
lassign [chan pipe] chan chanW
fileevent $chan readable [list echo $chan]

proc echo {chan} {
    ...
}

proc exec {var1 var2} {
    ....
    puts $chanW "Some output"
    ....
}
Now, according to the man page, the fileevent handler will be executed when the program idles.
Is it possible to force the fileevent handler to run before that? For instance, is it possible to force it to be executed immediately after the channel becomes readable, to somehow give it priority .... without using threads :)

Tcl never executes an event handler at “unexpected” points; it only runs them at points where it is asked to do so explicitly or, in some configurations (such as inside wish), when it is doing nothing else. You can introduce an explicit wait for events via two commands:
update
vwait
The update command clears down the current event queue, but does not wait for incoming events (strictly, it does an OS-level wait of length zero). The vwait command will also allow true waiting to happen, waiting until a named Tcl global variable has been written to. (It uses a C-level variable trace to do this, BTW.) Doing either of these will let your code process events before returning. Note that there are a number of other wrappers around this functionality; the geturl command in the http package (in “synchronous” mode) and the tkwait command in the Tk package both do this.
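For the code in the question, a minimal sketch (assuming the echo handler records what it reads in a global variable) might look like this:

proc echo {chan} {
    global lastLine
    set lastLine [gets $chan]    ;# remember the line so other code can wait for it
}

# ... later, at the point where you need the output before continuing ...
vwait lastLine                   ;# runs the event loop until echo has fired
puts "pipe said: $lastLine"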
The complication? It's very easy to make your code reenter itself by accident while running the event loop. This can easily end up with you making lots of nested event loop calls, running you out of stack space; don't do that. Instead, prepare for reentrancy issues (a global variable check is one of the easiest approaches) so that you don't nest event loops.
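For example, extending the sketch above with such a guard (the variable and proc names are only illustrative):

set waiting 0                    ;# reentrancy guard
proc waitForEcho {} {
    global waiting lastLine
    if {$waiting} {
        return ""                ;# already waiting inside the event loop; don't nest another vwait
    }
    set waiting 1
    vwait lastLine
    set waiting 0
    return $lastLine
}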
Alternatively, if you're using Tcl 8.6 you can switch your code around to use coroutines. They let you stop the current procedure evaluation and return to the main event loop to wait for a future event before starting execution again: you end up with code that returns at the expected time, but which was suspended for a while first. If you want more information about this approach, please ask another separate question here.


Button with multiple command line arguments

Can a button in Tcl be linked to multiple commands?
I have code which runs when a button is clicked; a progressbar proc that takes a time in seconds is linked with it and should start at the same time the button is pressed.
I put both procs in the button's -command argument using {}, but it fails with an error.
Code Snippet
button .b -image $p -command {progressbar 300 run_structural_comparision}
proc progressbar {seconds} {
    ttk::progressbar .pg -orient horizontal -mode determinate -maximum $seconds
    pack .pg -side left
    update idletasks
    # Do some real work here for $seconds seconds
    for {set i 0} {$i < $seconds} {incr i} {
        after 1000;       # Just waiting in this example, might as well do something useful here
        .pg step;         # After some part of the work, advance the progressbar
        update idletasks; # Needed to update the progressbar
    }
    # Done, clean up the dialog and progressbar
}
proc run_structural_comparision {} {
    type_run
    global ENTRYfilename ENTRYfilename2 curDIR curDIR2 typep reflib compLib rundir hvt_verilog logfile
    set path [concat $reflib $compLib]
    ## set path [concat $ENTRYfilename $ENTRYfilename2]
    puts $path
    set str "compare_structure -overlap_when -type {timing constraint} -report compare_structure_"
    set trt ".txt"
    set structure [concat [string trim $str][string trim $typep][string trim $trt] $path]
    puts $structure
    puts $rundir
    cd $rundir
    set filename [concat "compare_structure_" $typep ".tcl"]
    if {[file exists $rundir/$filename] == 1} {
        exec rm -rf $rundir/compare_structure_$typep.tcl
    }
A button's -command callback is a Tcl script. It will be evaluated at the global level of the stack. If you want to run two commands, you can just put a script in there to run the two commands:
button .b -command { command_1; command_2 }
This will run them sequentially. Tcl is naturally single-threaded, as that is by far the easiest programming model for people to work with. It's possible to make code that works via callbacks appear to be doing multiple things at once (that's how Tk works, just like virtually all other GUIs on the planet), but you're really only doing one thing at a time.
But your real question…
The core of what you need is a way to run the program that takes a long time in the background so that you can monitor it and continue to update the GUI. This is not a trivial topic, unfortunately, and the right answer will depend on exactly what is going on.
One of the simplest techniques is where the CPU-bound processing is done in a subprocess. In that case, you can run the subprocess via a pipeline and set fileevent to give you a notification callback when output is produced or the program terminates. Tcl is very good at this sort of thing; things that many languages have as very advanced techniques just feel natural when done with Tcl, as a great deal of thought has been put into how to make I/O callbacks work nicely.
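For instance, a rough sketch of that pipeline approach (long_task is just a placeholder for whatever slow program you run):

proc onTaskOutput {chan} {
    if {[gets $chan line] >= 0} {
        .pg step                  ;# e.g. advance the progressbar on each line of output
    } elseif {[eof $chan]} {
        close $chan               ;# subprocess finished; do the final GUI update here
    }
}
set task [open |[list long_task] r]
fconfigure $task -blocking 0
fileevent $task readable [list onTaskOutput $task]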
If it's in-process and long-running without the opportunity for callbacks, things get more complex as you have to have the processing and the GUI updates in different threads. Which isn't too hard if you've got everything set up right, but which might require substantial re-architecting of your program (as it is usual for threads in Tcl to be extremely strongly partitioned from each other).
The simplest thing to do is to create a procedure that calls the two functions, i.e.:
proc on_button_press {seconds} {
    after idle [list progressbar $seconds]
    after idle [list run_structural_comparision]
}
You can put multiple calls in the immediate button handler command string, but it quickly gets complicated. In short, use a semicolon to separate the two commands.
Your use of update idletasks should be considered a "code smell", i.e. avoid it. In this case, in the progressbar function, set up the bar and then have everything else driven by after calls that update the state of the progress, as sketched below.
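A sketch of the question's progressbar restructured that way (progressTick is a made-up helper name; the timing is illustrative):

proc progressbar {seconds} {
    ttk::progressbar .pg -orient horizontal -mode determinate -maximum $seconds
    pack .pg -side left
    after 1000 [list progressTick 0 $seconds]
}
proc progressTick {i seconds} {
    .pg step                      ;# advance one step, called from the event loop
    incr i
    if {$i < $seconds} {
        after 1000 [list progressTick $i $seconds]
    } else {
        destroy .pg               ;# done, clean up the progressbar
    }
}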
I suspect your rm -rf may not do what you want. It is likely to lock up the interface, as you get nothing back until the command has completed. Better is to write a function that walks the directory tree and deletes the files with file delete; you can then raise progress events as you go and keep the UI alive by breaking the processing into chunks, again using after.
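Something along these lines, assuming the files to remove can be listed up front (the glob pattern is only an example):

proc deleteChunk {files} {
    # Delete a few files, then hand control back to the event loop
    foreach f [lrange $files 0 9] {
        file delete -force $f
    }
    set rest [lrange $files 10 end]
    if {[llength $rest]} {
        after 0 [list deleteChunk $rest]
    }
}
deleteChunk [glob -nocomplain -directory $rundir compare_structure_*.tcl]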

How to interrupt Tcl_eval

I have a small shell application that embeds Tcl 8.4 to execute some set of Tcl code. The Tcl interpreter is initialized using Tcl_CreateInterp. Everything is very simple:
user types Tcl command
the command gets passed to Tcl_Eval for evaluation
repeat
Q: Is there any way to interrupt a very long Tcl_Eval command? I can process a 'Ctrl+C' signal, but how to interrupt Tcl_Eval?
Tcl doesn't set signal handlers by default (except for SIGPIPE, which you probably don't care about at all) so you need to use an extension to the language to get the functionality you desire.
By far the simplest way to do this is to use the signal command from the TclX package (or from the Expect package, but that's rather more intrusive in other ways):
package require Tclx
# Make Ctrl+C generate an error
signal error SIGINT
Just evaluate a script containing those in the same interpreter before using Tcl_Eval() to start running the code you want to be able to interrupt; a Ctrl+C will cause that Tcl_Eval() to return TCL_ERROR. (There are other things you can do — such as running an arbitrary Tcl command which can trap back into your C code — but that's the simplest.)
If you're on Windows, the TWAPI package can do something equivalent apparently.
Here's a demonstration of it in action in an interactive session!
bash$ tclsh8.6
% package require Tclx
8.4
% signal error SIGINT
% puts [list [catch {
while 1 {incr i}
} a b] $a $b $errorInfo $errorCode]
^C1 {can't read "i": no such variableSIGINT signal received} {-code 1 -level 0 -errorstack {INNER push1} -errorcode {POSIX SIG SIGINT} -errorinfo {can't read "i": no such variableSIGINT signal received
while executing
"incr i"} -errorline 2} {can't read "i": no such variableSIGINT signal received
while executing
"incr i"} {POSIX SIG SIGINT}
%
Note also that this can leave the interpreter in a somewhat-odd state; the error message is a little bit odd (and in fact that would be a bug, but I'm not sure what in). It's probably more elegant to do it like this (in 8.6):
% try {
while 1 {incr i}
} trap {POSIX SIG SIGINT} -> {
puts "interrupt"
}
^Cinterrupt
%
Another way to solve this problem would be to fork your Tcl interpreter into a separate process and drive the stdin and stdout of that interpreter from your main process. Then, in the main process, you can intercept Ctrl-C and use it to kill the forked interpreter's process and fork a fresh one.
With this solution the Tcl interpreter will never lock up your main program. However, it is really annoying to add C-function extensions if they need to run in the main process, because you then have to use inter-process communication to invoke those functions.
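A rough Tcl-level sketch of the shape of this (the host program in the question is C, so this only illustrates the idea):

# Run the user's code in a child tclsh driven over a pipe
set child [open |[list tclsh] r+]
fconfigure $child -buffering line

proc runInChild {script} {
    puts $::child $script         ;# send a command; read replies back with gets/fileevent
}

proc interruptChild {} {
    # Called from the Ctrl-C handler in the main program
    exec kill [pid $::child]      ;# kill the runaway evaluation
    catch {close $::child}
    set ::child [open |[list tclsh] r+]
    fconfigure $::child -buffering line
}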
I had a similar problem, where I start the Tcl interpreter in a worker thread. Except there's really no clean way to kill a worker thread, because it leaves allocated memory in an uncleaned-up state, leading to memory leaks. So really the only way to fix this properly is to use a process model instead, or to just keep quitting and restarting the application. Given the amount of time the process solution would take, I decided to stick with threads for now and fix it one of these days by getting Ctrl-C to work via a separate process, rather than leaking memory (and potentially destabilizing and crashing my program) every time I kill a thread.
UPDATE:
My conclusion is that Tcl arrays are not normal variables: you can't use Tcl_GetVar2Ex to read the "tmp" variable after Tcl_Eval, and tmp doesn't show up under "info globals". So to get around this I decided to call the Tcl library API directly, rather than going through Tcl_Eval, to build a dictionary object to return.
/* Build a dict value and hand it back as the interpreter result. */
Tcl_Obj *dict_obj = Tcl_NewDictObj();
if (!dict_obj) {
    return TCL_ERROR;
}
Tcl_DictObjPut(interp, dict_obj,
               Tcl_NewStringObj("key1", -1),
               Tcl_NewStringObj("value1", -1));
Tcl_DictObjPut(interp, dict_obj,
               Tcl_NewStringObj("key2", -1),
               Tcl_NewStringObj("value2", -1));
Tcl_SetObjResult(interp, dict_obj);

Stop a TCL script from freezing while processing a command

Okay so what I am doing on a high level is scanning a system for all VISA devices connected to it and having them identify themselves.
The problem is that not all VISA devices support the function to identify themselves, and the only way I know of to find this out is to tell the device to do just that. This forces the ones that cannot identify themselves to rely on the timeout, which has a minimum of 1 second. While waiting on the timeout, my Tcl script and the Wish application freeze until the timeout is complete. With multiple devices this leaves me with an awkward wait that can be several seconds long, during which I am unable to update the user on what is happening.
Here's my code:
proc ::VISA::Scan {} {
    # Open a temporary resource manager
    set TemporaryResourceManagerId [::visa::open-default-rm]

    # Get addresses for all devices on system
    foreach address [::visa::find $TemporaryResourceManagerId "?*"] {
        # Create temporary VISA channel
        set TemporaryChannel [visa::open $TemporaryResourceManagerId $address]

        # Have device identify itself while suppressing errors
        if {![catch {puts $TemporaryChannel "*IDN?"}]} {
            if {![catch {gets $TemporaryChannel} result]} {
                if {![string is space $result]} {
                    puts $address
                    puts "$result \n"
                }
                # Clear any potential errors
                puts $TemporaryChannel "*CLS"
            }
        }

        # Destroy temporary channel
        close $TemporaryChannel
        unset TemporaryChannel
    }

    # Destroy temporary resource manager
    close $TemporaryResourceManagerId
    unset TemporaryResourceManagerId
}
I was wondering if there is a way to prevent this on the TCL side since I have no way of knowing what types of devices I will be querying. I've tried using "update" and "update idletasks" at several different places in the script, but it just gives me a moment in between freezes.
Any help would be appreciated. Thanks in advance!
The standard way to do this is to use Tcl's event loop by setting the I/O channel to non-blocking and using fileevent or chan event; however, the tclvisa documentation states that fileevent is not supported on visa channels.
So the next best thing is to use non-blocking I/O (which just sets the timeout to 0) and either busy-loop reading the channel or read it after a delay; either of these should be handled with the event loop rather than by sprinkling update around (which has undesirable side effects).
So to busyloop you could do something like this:
proc busyread {v n} {
    if {$::readdone == 1} {
        # Timed out: report an error result and stop rescheduling
        set ::$n "Error"
        return
    }
    set r [visa::read $v]
    if {$r == ""} {
        after 5 [list busyread $v $n]
    } else {
        set ::$n $r
        set ::readdone 1
    }
}

set f [visa::open ...]
fconfigure $f -blocking 0
after 1000 [list set ::readdone 1]
set ::readdone 0
busyread $f result
vwait ::readdone
# $result will now be either the result, or "Error"
This continuously reschedules the read as long as it keeps coming back empty.
This will need to be restructured a bit to work within a larger gui program (the vwait and timeouts would need to be done differently), but this shows the basic method.
You have to use after and fileevent to handle timeout asynchronously. It's not that easy, especially in pre-Tcl8.6: you have to split a procedure into a bunch of event handlers, passing all necessary information to them.
Schedule a timeout handler:
proc handleTimeout {channel} {
    ....
    close $channel
    .... # or do some other thing,
    .... # but don't forget to remove the fileevent handler if not closing!
}
....
after 1000 [list handleTimeout $TemporaryChannel]
Make channel non-blocking, install a fileevent handler:
proc tryGetsIDN {channel} {
    if {[gets $channel line] != -1} {
        # We have an answer!
        # Cancel the timeout handler
        after cancel [list handleTimeout $channel]
        ....
    }
}
....
fconfigure $TemporaryChannel -blocking 0
fileevent $TemporaryChannel readable [list tryGetsIDN $TemporaryChannel]
The hardest part: make sure you handle GUI events appropriately, e.g. if there is a "cancel" button to cancel all asynchronous handlers, make sure to close channels and cancel timeout handlers (additional bookkeeping of channels and handlers may be required here).
With Tcl 8.6, you can use coroutines to make your procedure work as a cooperative "background thread": it's easy to implement a "gets with timeout" which yields from a coroutine and re-enters it upon completion or timeout. There's no ready-to-use solution out of the box yet, but a rough sketch follows.
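A sketch of such a helper (the names are illustrative; a real version would also cope with partial lines and I/O errors):

proc getsWithTimeout {channel ms} {
    set self [info coroutine]
    fconfigure $channel -blocking 0
    fileevent $channel readable [list $self readable]
    set timer [after $ms [list $self timeout]]
    set reason [yield]                     ;# suspend until data arrives or the timer fires
    fileevent $channel readable {}
    after cancel $timer
    if {$reason eq "timeout"} {
        return -code error "timed out"
    }
    gets $channel line
    return $line
}

# Used from inside a coroutine, e.g.:
coroutine idnReader apply {{chan} {
    if {[catch {getsWithTimeout $chan 1000} line]} {
        puts "no *IDN? answer within 1 second"
    } else {
        puts "got: $line"
    }
}} $TemporaryChannel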
I actually found a solution on the tclvisa side of my problem. I found a better way to specify the timeout for the channel rather than using the built in tclvisa command which I incorrectly assumed I had to use.
fconfigure $TemporaryChannel -timeout 100
Setting this timeout doesn't completely solve the problem, but it reduces it to the point of obscurity. Thanks for all the responses!

How to create a thread in tcl 8.4

I am new to Tcl. I want to create a thread in Tcl which should keep calling itself in the background.
#!/usr/bin/env tclsh
set serPort [open "/dev/ttyS0" RDWR]
fconfigure $serPort -mode 115200,n,8,1 -blocking 0
while {1} {
    set data [gets $serPort]
    puts $data
}
I want to avoid using the above while loop and create a repeatable thread for the functionality inside the while loop. Basically i am connecting the COM1 of my PC to a device and getting the serial data from the device. But if there is no data on the port it still doesn't come out of loop even if i use "eof" command. That is the reason i want to create the thread.
I am planning to use Tcl_CreateThread for that but I don't understood how to use it
Don't do that. Instead, use the usual Tcl idiom for working with non-blocking channels: set up a handler for the "channel readable" event, and enter the event loop; when the device sends data to the port you opened, the OS passes the data to your application and the callback gets called.
A minimal program to demonstrate the concept looks like this:
proc my_read_handler ch {
    set data [read $ch]
    if {[eof $ch]} {
        close $ch
        set ::forever done ;# this makes the call to `vwait` below return
        return
    }
    # Otherwise process the data here ...
}

set serPort [open "/dev/ttyS0" RDWR]
fconfigure $serPort -mode 115200,n,8,1 -blocking no
fileevent $serPort readable [list my_read_handler $serPort]

vwait ::forever ;# the program enters the event loop here
Read more on this in the examples.
Several observations:
The EOF only happens when the remote side closes. If you call close on your channel, the "readable" event won't be called in this case.
If you're writing a Tk application, it will already have an event loop, so no calls to vwait are necessary (moreover, they're highly advised against, as this will re-enter the event loop): you just open your device, say, in code which executes when the user clicks a button, set up the readable callback on the acquired channel, and then do the rest of the processing in that callback (as shown above).
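A sketch of that Tk variant, reusing the my_read_handler proc from above:

package require Tk
button .open -text "Open port" -command {
    set serPort [open "/dev/ttyS0" RDWR]
    fconfigure $serPort -mode 115200,n,8,1 -blocking no
    fileevent $serPort readable [list my_read_handler $serPort]
}
pack .open
# No vwait needed: Tk's own event loop dispatches the readable callback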
Read this (and the links there) for more info on the event-oriented programming. Also search the wiki — it contains lots of examples and background knowledge.

Interrupting a while loop by a variable, then using vwait to wait for the loop to die. Not Working?

I currently have a GUI that, after some automation (using expect), allows the user to interact with one of 10 telnet'ed connections. Interaction is done using the following loop:
#After selecting an item from the menu, this allows the user to interact with that process
proc processInteraction {whichVariable id id_list user_id} {
    if {$whichVariable == 1} {
        global firstDead
        set killInteract $firstDead
    } elseif {$whichVariable == 2} {
        global secondDead
        set killInteract $secondDead
    }
    global killed
    set totalOutput ""
    set outputText ""
    #set killInteract 0
    while {$killInteract == 0} {
        set initialTrue 0
        if {$whichVariable == 1} {
            global firstDead
            set killInteract $firstDead
        } elseif {$whichVariable == 2} {
            global secondDead
            set killInteract $secondDead
        }
        puts "$id: $killInteract"
        set spawn_id [lindex $id_list $id]
        global global_outfile
        interact {
            -i $spawn_id
            eof {
                set outputText "\nProcess closed.\n"
                lset deadList $id 1
                puts $outputText
                #disable the button
                disableOption $id $numlcp
                break
            }
            -re (.+) {
                set outputText $interact_out(0,string)
                append totalOutput $outputText
                #-- never looks at the following string as a flag
                send_user -- $outputText
                #puts $killInteract
                continue
            }
            timeout 1 {
                puts "CONTINUE"
                continue
            }
        }
    }
    puts "OUTSIDE"
    if {$killInteract} {
        puts "really killed in $id"
        set killed 1
    }
}
When a new process is selected, the previous should be killed. I previously had it where if a button is clicked, it just enters this loop again. Eventually I realized that the while loops were never quitting, and after 124 button presses, it crashes (stackoverflow =P). They aren't running in the background, but they are on the stack. So I needed a way to kill the loop in the processInteraction function when a new process is started. Here is my last attempt at a solution after many failures:
proc killInteractions {} {
    #global killed
    global killInteract
    global first
    global firstDead
    global secondDead
    global lastAssigned

    #First interaction
    if {$lastAssigned == 0} {
        set firstDead 0
        set secondDead 1
        set lastAssigned 1
    #firstDead was assigned last, kill the first process
    } elseif {$lastAssigned == 1} {
        set firstDead 1
        set secondDead 0
        set lastAssigned 2
        vwait killed
    #secondDead was assigned last, kill the second process
    } elseif {$lastAssigned == 2} {
        set secondDead 1
        set firstDead 0
        set lastAssigned 1
        vwait killed
    }
    return $lastAssigned
}
killInteractions is called when a button is pressed. The script hangs on vwait. I know the code seems a bit odd/wonky for handling processes with two variables, but this was a desperate last ditch effort to get this to work.
A dead signal is sent to the correct process (in the form of secondDead or firstDead). I have the timeout value set at 1 second for the interact, so that it is forced to keep checking if the while loop is true, even while the user is interacting with that telnet'ed session. Once the dead signal is sent, it waits for confirmation that the process has died (through vwait).
The issue is that once the signal is sent, the loop will never realize it should die unless it is given the context to check it. The loop needs to run until it is kicked out by first or secondDead. So there needs to be some form of wait before switching to the next process, allowing the loop in processInteraction of the previous process to have control.
Any help would be greatly appreciated.
Your code seems extremely complicated to me. However, the key problem is that you are running inner event loops (the event loop code is pretty simple-minded, and so is predictably a problem) and building up the C stack with things that are stuck. You don't want that!
Let's start by identifying where those inner event loops are. Firstly, vwait is one of the canonical event loop commands; it runs an event loop until its variable is set (by an event script, presumably). However, it is not the only one. In particular, Expect's interact also runs an event loop. This means that everything can become nested and tangled and… well, you don't want that. (That page talks about update, but it applies to all nested event looping.) Putting an event loop inside your own while is particularly likely to lead to debugging headaches.
The best route to fixing this is to rewrite the code to use continuation-passing style. Instead of writing code with nested event loops, you instead rearrange things so that you have pieces of code that are evaluated on events and which pass such state as is necessary between them without starting a nested event loop. (If you weren't using Expect and were using Tcl 8.6, I'd advise using coroutine to do this, but I don't think that works with Expect currently and it does require a beta version of Tcl that isn't widely deployed yet.)
Alas, everything is made more complicated by the need to interact with the subprocesses. There's no way to interact in the background (nor does it really make that much sense). What you instead need to do is to use exactly one interact in your whole program and to have it switch between spawned connections. You do that by giving the -i option the name of a global variable which holds the current id to interact with, instead of the id directly. (This is an “indirect” spawn id.) I think that the easiest way of making this work is to have a “not connected to anything else” spawn id (e.g., you connect it to cat >/dev/null just to act as a do-nothing) that you make at the start of your script, and then swap in the real connection when it makes sense. The actual things that you currently use interact to watch out for are best done with expect_background (remember to use expect_out instead of interact_out).
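A minimal sketch of that arrangement (all names here are illustrative, not from the original program):

spawn sh -c "cat > /dev/null"       ;# a do-nothing process to hold when idle
set idle_sid $spawn_id
set active_sid $idle_sid            ;# the global variable named via -i below

proc switch_to {sid} {
    # Called from the GUI when the user picks another telnet session
    global active_sid
    set active_sid $sid
}

# Exactly one interact in the whole program; -i is given the *name* of a global
# variable (an indirect spawn id), so the connection can be swapped while it runs.
interact -i active_sid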
Your code is rather too long for me to rewrite, but what you should do is to look very carefully at the logic of the eof clause of the interact; it needs to do more than it does at the moment. The code to kill from the GUI should be changed too; it should send a suitable EOF marker to the spawned process(es) to be killed and not wait for the death to be confirmed.