I need two-way communication between threads in Tcl, and all I can get is one-way, with parameter passing as my only master->helper communication channel. Here is what I have:
proc ExecProgram { command } {
    if { [catch {open "| $command" RDWR} fd ] } {
        #
        # Failed, return error indication
        #
        error "$fd"
    }
    # Success: hand the pipe channel back to the caller
    return $fd
}
To call tclsh83, for example: ExecProgram "tclsh83 testCases.tcl TestCase_01"
Within the testCases.tcl file I can use that passed in information. For example:
set myTestCase [lindex $argv 0]
Within testCases.tcl I can puts out to the pipe:
puts "$myTestCase"
flush stdout
And receive that puts within the master by reading from the pipe handle:
gets $app line
...within a loop.
Which is not very good. And not two-way.
Does anyone know of an easy two-way communication method between two threads in Tcl on Windows?
Here is a small example that shows how two processes can communicate. First off the child process (save this as child.tcl):
gets stdin line
puts [string toupper $line]
and then the parent process that starts the child and communicates with it:
set fd [open "| tclsh child.tcl" r+]
puts $fd "This is a test"
flush $fd
gets $fd line
puts $line
The parent uses the value returned by open to send and receive data to/from the child process; the r+ parameter to open opens the pipeline for both read and write.
The flush is required because of the buffering on the pipeline; it is possible to change this to line buffering using the fconfigure command.
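For instance, a sketch of the same parent using line buffering instead of an explicit flush (it recreates child.tcl inline so it can be run standalone):

```tcl
# Recreate the child script from the example above, so this sketch is
# self-contained; in practice child.tcl would already exist on disk.
set f [open child.tcl w]
puts $f {gets stdin line}
puts $f {puts [string toupper $line]}
close $f

set fd [open "| tclsh child.tcl" r+]
fconfigure $fd -buffering line   ;# flush automatically at each newline
puts $fd "This is a test"
gets $fd line
puts $line                       ;# prints: THIS IS A TEST
close $fd
```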
Just one other point: looking at your code, you aren't using threads here; you are starting a child process. Tcl has a threading extension which does allow proper inter-thread communication.
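For completeness, a minimal sketch of genuine inter-thread communication with the Thread extension, assuming the package is installed (the work procedure is just an illustrative name):

```tcl
package require Thread

# Create a worker thread; it defines a command and then waits for work.
set tid [thread::create {
    proc work {x} {
        return [string toupper $x]
    }
    thread::wait  ;# keep the thread alive, servicing thread::send requests
}]

# Synchronous two-way communication: send a script, get its result back.
set result [thread::send $tid {work "hello"}]
puts $result   ;# prints: HELLO

thread::release $tid
```

thread::send can also be used with -async and a result variable when the master should not block while the worker computes.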
I don't know much about PHP or Tcl, but I am trying to learn both concurrently.
In PHP, I read that every script should start with ob_start and, therefore, have been using the following.
ob_start(NULL, 0, PHP_OUTPUT_HANDLER_STDFLAGS);
echo header('Content-Length: '.ob_get_length());
ob_end_flush();
ob_end_clean();
In Tcl channels, I see that the -buffering full and -buffersize options take care of ob_start(), chan flush is analogous to ob_end_flush(), and chan pending output returns the number of bytes written to the output buffer but not yet written out.
I've been looking at my two texts on Tcl and the Tcl manual web page for channels and I can't find a method of just clearing the channel output buffer without writing it.
If data is being written to a channel set to -buffering full and an error is caught/trapped is it possible to empty the buffer and not write it to the channel?
I thought perhaps I could use chan seek to set the position back to the start, similar to setting a pointer back to the beginning of a segment of RAM, but the pipe example doesn't appear to create a channel that supports seeking.
lassign [chan pipe] rchan wchan
chan configure $rchan -buffering line -blocking 0 -translation crlf
chan configure $wchan -buffering full -blocking 0 -translation crlf
chan puts $wchan "This is the full message which I shall attempt to truncate."
chan puts stdout "wchan pending: [chan pending output $wchan]"
chan puts stdout "wchan tell: [chan tell $wchan]"
# => -1 Thus, channel does not support seeking.
#chan seek $wchan 5 start
# => Errors invalid seek
chan flush $wchan
chan puts stdout [chan gets $rchan]
Thank you.
Sounds like you want to output text written to a channel only if no error happens in the middle of writing?
One way is to use a variable channel from tcllib; everything written to the channel is stored in a variable, which can then be written out to the real target on successful completion of whatever you're trying to do.
Example:
#!/usr/bin/env tclsh
package require tcl::chan::variable
proc main {} {
    variable output
    set output ""
    set outputchan [::tcl::chan::variable output]
    try {
        puts $outputchan "Some text"
        error "This is an error"
        # Won't get written if an error is raised
        chan flush $outputchan
        puts -nonewline $output
    } on error {errMsg errOptions} {
        # Report error if you want
    } finally {
        chan close $outputchan
    }
}
main
I don't think Tcl provides the functionality you are looking for. It's assumed that if you send something to a channel then it should always be written out.
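If pulling in tcllib is not an option, a plain variable gives the same effect; a rough sketch:

```tcl
# Accumulate output in an ordinary variable; write it to the real
# channel only if the whole operation succeeds.
set buffered ""
try {
    append buffered "Some text\n"
    # error "boom"   ;# uncomment: nothing below ever reaches stdout
    append buffered "More text\n"
    puts -nonewline stdout $buffered
} on error {msg opts} {
    # The buffer is simply discarded; stdout stays untouched.
}
```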
What is the memory behavior of Tcl's exec command?
I understand that a fork() system call will be used, which first creates a copy of the memory image of the process and then proceeds.
Here's the motivation for my question:
A user gave me the following observation: a 64 GB machine has a Tcl-based tool interface running with 60 GB of memory used (let's assume swap is small). At the Tcl prompt he gives 'exec ls' and the process crashes with a memory error.
Your insight is much appreciated.
Thanks,
Gert
The exec command will call the fork() system call internally. This is usually OK, but might run out of memory when the OS is configured to not swap and the originating Tcl process is very large (or if there is very little slop room; it depends on the actual situation of course).
The ideas I have for reducing memory usage are either to use vfork() (by patching tclUnixPipe.c; you can define USE_VFORK in the makefile to enable that, and I don't know why that isn't used more widely) or to create a helper process early on (before lots of memory is used) that will do the execs on your main process's behalf. Here's how to do the latter option:
# This is setup done at the start
set forkerProcess [open "|tclsh" r+]
fconfigure $forkerProcess -buffering line -blocking 0
puts $forkerProcess {
    fconfigure stdout -buffering none
    set tcl_prompt1 ""
    set tcl_prompt2 ""
    set tcl_interactive 0
    proc exechelper args {
        catch {exec {*}$args} value options
        puts [list [list $value $options]]
    }
}
# TRICKY BIT: Yield and drain anything unwanted
after 25
read $forkerProcess
# Call this, just like exec, to run programs without memory hazards
proc do-exec args {
    global forkerProcess
    fconfigure $forkerProcess -blocking 1
    puts $forkerProcess [list exechelper {*}$args]
    set result [gets $forkerProcess]
    fconfigure $forkerProcess -blocking 0
    while {![info complete $result]} {
        append result \n [read $forkerProcess]
    }
    lassign [lindex $result 0] value options
    return -options $options $value
}
Is there any way to run a process in the background while showing real-time updates on stdout and saving only the last line (as tail -n 1 would) to a file? There can be anywhere between 1 and 15 tests running at the same time, and I need to be able to see that the tests are running, but I do not want to save the entire text output.
I should mention that, since the tests are running in the background, I am using a checkpid loop to wait for the tests to finish.
also if it helps this is how my script is running the tests...
set runtest [exec -ignorestderr bsub -I -q lin_i make $testvar SEED=1 VPDDUMP=on |tail -n 1 >> $path0/runtestfile &]
I have found that if I use | tee it causes the checkpid loop to skip, but if I use |tee it does not display output.
It's going to be better to use a simpler pipeline with explicit management of the output handling in Tcl, instead of using tail -n (and tee) to simulate it.
set pipeline($testvar) [open |[list bsub -I -q lin_i make $testvar SEED=1 VPDDUMP=on]]
fileevent $pipeline($testvar) readable [list handleInput $testvar]
fconfigure $pipeline($testvar) -blocking 0

# The callback for when something is available to be read
proc handleInput {testvar} {
    upvar ::pipeline($testvar) chan ::status($testvar) status
    if {[gets $chan line] >= 0} {
        # OK, we've got an update to the current status; stash in a variable
        set status $line
        # Echo to stdout
        puts $line
        return
    } elseif {[eof $chan]} {
        if {[catch {close $chan}]} {
            puts "Error from pipeline for '$testvar'"
        }
        unset chan
        # I don't know if you want to do anything else on termination
        return
    }
    # Nothing to do otherwise; don't need to care about very long lines here
}
This code, plus a little vwait to enable event-based processing (assuming you're not also using Tk), will let you read from the pipeline while not preventing you from doing other things. You can even fire off multiple pipelines at once; Tcl will cope just fine. What's more, setting a write trace on the ::status array will let you monitor for changes across all of the pipelines at once.
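That last idea might look like this; statusChanged is a hypothetical callback name:

```tcl
# Fires whenever any element of the ::status array is written,
# i.e. whenever any test's pipeline reports a new line.
proc statusChanged {name1 name2 op} {
    # name2 is the array element name, i.e. the test's identifier
    puts "status of $name2 is now $::status($name2)"
}
trace add variable ::status write statusChanged

set ::status(test1) "PASS"   ;# prints: status of test1 is now PASS
```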
I have a log which keeps on updating.
I am running a flow that generates a file. This flow runs in the background and updates the log, saying "[12:23:12:1] \m successfully completed (data_01)".
As soon as I see this comment, I use this file for the next flow.
I created a popup saying "wait till the log says successfully completed", to avoid the script going to the next flow and getting aborted.
But the problem is that each and every time I need to check the log for that comment and press OK in the popup.
Is there any way to capture the comment from the updating log?
I tried
set flag 0
while { $flag == 0} {
    set fp [open "|tail code.log" r]
    set data [ read $fp]
    close $fp
    set data [ split $data]
    if { [ regexp {.*successfully completed.*} $data ]} {
        set line $data
        set flag 1
    } else {
        continue
    }
}
I will pass this $line to the popup variable so that instead of saying "wait until successfully completed", it will say "Successfully completed".
But this is throwing an error about too many files opened, and it is also not waiting.
There's a limit on the number of files that can be opened at once by a process, imposed by the OS. Usually, if you are getting close to that limit then you're doing something rather wrong!
So let's back up a little bit.
The simplest way to read a log file continuously is to open a pipe from the program tail with the -f option passed in, so it only reports things added to the file instead of reporting the end each time it is run. Like this:
set myPipeline [open "|tail -f code.log"]
You can then read from this pipeline and, as long as you don't close it, you will only ever read a line once. Exiting the Tcl process will close the pipe. You can either use a blocking gets to read each line, or a fileevent so that you get a callback when a line is available. This latter form is ideal for a GUI.
Blocking form
while {[gets $myPipeline line] >= 0} {
    if {[regexp {successfully completed \(([^()]+)\)} $line -> theFlowName]} {
        processFlow $theFlowName
    }
}
close $myPipeline
Callback form
Assuming that the pipeline is kept in blocking mode. Full non-blocking is a little more complex but follows a similar pattern.
fileevent $myPipeline readable [list GetOneLine $myPipeline]
proc GetOneLine {pipe} {
    if {[gets $pipe line] < 0} {
        # IMPORTANT! Close upon EOF to remove the callback!
        close $pipe
    } elseif {[regexp {successfully completed \(([^()]+)\)} $line -> theFlowName]} {
        processFlow $theFlowName
    }
}
Both of these forms call processFlow with the part of the line extracted from within the parentheses when it appears in the log. That's the part where it becomes not generic Tcl any more…
It appears that what you want to do is monitor a file and, without hanging your UI, wait for a particular line to be added to it. You cannot use asynchronous IO on the file for this because, in Tcl, files are always readable; instead you need to poll the file on a timer, which in Tcl means using the after command. So create a command that checks the time the file was last modified; if it has changed since you last checked, open the file and look for your specific data. If the data is present, set some state variable to allow your program to continue to the next step. If not, schedule another call to your check function using after and a suitable interval.
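A sketch of that polling scheme (the file name and the "successfully completed" text come from the question; the intervals, and the simulated writer that makes the demo self-contained, are arbitrary choices):

```tcl
# Demo setup: create an empty log, and simulate the background flow
# appending the completion line 200 ms later.
close [open code.log w]
after 200 {
    set fh [open code.log a]
    puts $fh {[12:23:12:1] \m successfully completed (data_01)}
    close $fh
}

set lastSig {}

proc checkLog {} {
    global lastSig
    # Reread only when the file has changed; check size as well as
    # mtime, since mtime only has one-second granularity.
    set sig [list [file mtime code.log] [file size code.log]]
    if {$sig ne $lastSig} {
        set lastSig $sig
        set f [open code.log r]
        set data [read $f]
        close $f
        if {[regexp {successfully completed} $data]} {
            set ::done 1   ;# state variable the rest of the program waits on
            return         ;# stop polling
        }
    }
    after 100 checkLog   ;# poll again shortly
}

after 0 checkLog   ;# start polling once the event loop is entered
vwait ::done
puts "flow finished"
```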
You could use a pipe as you have above, but you should use asynchronous IO to read data from the channel when it becomes available; that means using fileevent.
I am new to Tcl. I want to create a thread in Tcl which should keep calling itself in the background.
#!/usr/bin/env tclsh
set serPort [open "/dev/ttyS0" RDWR]
fconfigure $serPort -mode 115200,n,8,1 -blocking 0
while { 1 } {
    set data [gets $serPort]
    puts $data
}
I want to avoid the above while loop and create a repeatable thread for the functionality inside it. Basically I am connecting COM1 of my PC to a device and getting serial data from the device. But if there is no data on the port it still doesn't come out of the loop, even if I use the eof command. That is the reason I want to create the thread.
I am planning to use Tcl_CreateThread for that, but I don't understand how to use it.
Don't do that. Instead, use the usual Tcl's idiom for working with non-blocking channels: set up a handler for the "channel readable" event, and enter the event loop; when the device sends data to the port you opened, the OS passes the data to your application and the callback gets called.
A minimal program to demonstrate the concept looks like this:
proc my_read_handler ch {
    set data [read $ch]
    if {[eof $ch]} {
        close $ch
        set ::forever done ;# this makes the call to `vwait` below quit
        return
    }
    # Otherwise process the data here ...
}
set serPort [open "/dev/ttyS0" RDWR]
fconfigure $serPort -mode 115200,n,8,1 -blocking no
fileevent $serPort readable [list my_read_handler $serPort]
vwait ::forever ;# the program enters the event loop here
Read more on this in the examples.
Several observations:
The EOF only happens when the remote side closes. If you call close on your channel yourself, the "readable" event won't fire in this case.
If you're writing a Tk application, it already has an event loop, so no calls to vwait are necessary (moreover, they're highly advised against, as they re-enter the event loop): you just open your device, say, in code which executes when the user clicks a button, set up the readable callback on the acquired channel, and then do the rest of the processing in that callback (as shown above).
Read this (and the links there) for more info on event-oriented programming. Also search the Tcl wiki; it contains lots of examples and background knowledge.