Write to stdout, but save tail -n 1 to a file - tcl

Is there any way to run a process in the background while showing the real-time updates on stdout, and only saving the last line (tail -n 1 savefile) to a file? There can be anywhere between 1 and 15 tests running at the same time, and I need to be able to see that the tests are running, but I do not want to save the entire text output.
I should mention that, since the tests are running in the background, I am using a checkpid loop to wait for them to finish.
Also, if it helps, this is how my script runs the tests:
set runtest [exec -ignorestderr bsub -I -q lin_i make $testvar SEED=1 VPDDUMP=on |tail -n 1 >> $path0/runtestfile &]
I have found that if I use | tee it causes the checkpid loop to skip, but if I use |tee (without the space) it does not display any output.

It's going to be better to use a simpler pipeline with explicit management of the output handling in Tcl, instead of using tail -n 1 (and tee) to simulate it.
set pipeline($testvar) [open |[list bsub -I -q lin_i make $testvar SEED=1 VPDDUMP=on]]
fileevent $pipeline($testvar) readable [list handleInput $testvar]
fconfigure $pipeline($testvar) -blocking 0
# The callback for when something is available to be read
proc handleInput {testvar} {
upvar ::pipeline($testvar) chan ::status($testvar) status
if {[gets $chan line] >= 0} {
# OK, we've got an update to the current status; stash in a variable
set status $line
# Echo to stdout
puts $line
return
} elseif {[eof $chan]} {
if {[catch {close $line}]} {
puts "Error from pipeline for '$testvar'"
}
unset chan
# I don't know if you want to do anything else on termination
return
}
# Nothing to do otherwise; don't need to care about very long lines here
}
This code, plus a little vwait to enable event-based processing (assuming you're not also using Tk), will let you read from the pipeline while not preventing you from doing other things. You can even fire off multiple pipelines at once; Tcl will cope just fine. What's more, setting a write trace on the ::status array will let you monitor for changes across all of the pipelines at once.
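A sketch of how the pieces might be wired together, assuming the handleInput proc above plus one extra line (incr ::finished) in its EOF branch; the test names here are invented:

```tcl
set ::finished 0
set tests {test1 test2 test3}   ;# hypothetical test names

foreach testvar $tests {
    set pipeline($testvar) [open |[list bsub -I -q lin_i make $testvar SEED=1 VPDDUMP=on]]
    fconfigure $pipeline($testvar) -blocking 0
    fileevent $pipeline($testvar) readable [list handleInput $testvar]
}

# Watch for status updates across all pipelines at once
trace add variable ::status write [list apply {{name idx op} {
    puts "latest line from $idx: [set ::status($idx)]"
}}]

# Run the event loop until every test has hit EOF
while {$::finished < [llength $tests]} {
    vwait ::finished
}

# The last line printed by each test is now in ::status($testvar)
foreach testvar $tests {
    puts "final: $::status($testvar)"
}
```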

Related

Redirecting output of tcl proc to file and output (like tee) Part 2

I am using tee from https://wiki.tcl-lang.org/page/Tee to redirect file output from my procedures. I need to redirect both stdout and stderr to the file.
Using the input from Redirecting output of tcl proc to file and output (like tee) I arrived at doing the following:
set LogFile [open ${LogFileName} w]
tee channel stderr $LogFile
tee channel stdout $LogFile
set BuildErrorCode [catch {LocalBuild $BuildName $Path_Or_File} BuildErrMsg]
set BuildErrorInfo $::errorInfo
# Restore stdout and stderr
chan pop stdout
chan pop stderr
# Handle errors from Build ...
I am testing this on three different EDA tools and I have three different issues:
1. When I run from tclsh (on MSYS2 running on Windows 10) with GHDL (the open-source simulator), ModelSim, or QuestaSim, every second character in the log file is a NUL character.
2. If I run ModelSim or QuestaSim from the GUI, I miss the output of each command. Shouldn't that be going to either stdout or stderr?
3. In Riviera-PRO, I am getting extraneous characters that were previously printed. They are generally the second half of a word.
Am I doing something wrong? I tested out the above code using:
set LogFile [open test_tee.log w]
tee channel stderr $LogFile
tee channel stdout $LogFile
puts "Hello, World!"
puts stderr "Error Channel"
puts stdout "Output Channel"
chan pop stdout
chan pop stderr
And this works well.
I am hoping to find something that works in the general case for all tools rather than having to write a different handler for each tool.
============ Update =============
For #1 above, following @Shawn's suggestion, I tried the following and it did not work.
set LogFile [open ${LogFileName} w]
chan configure $LogFile -encoding ascii
. . .
I also tried the following and it did not work.
set LogFile [open ${LogFileName} w]
fconfigure $LogFile -encoding ascii
. . .
Then I tried updating the write in tee to the following and it did not work:
proc tee::write {fd handle buffer} {
    puts -nonewline $fd [encoding convertto ascii $buffer]
    return $buffer
}
Any other hints or solutions are appreciated.
============ Update2 =============
I have successfully removed the NUL characters by doing the following, except now I have an extra newline. Still not a solution.
proc tee::write {fd handle buffer} {
    puts -nonewline $fd [regsub -all \x00 $buffer ""]
    return $buffer
}
The extra NUL bytes are probably because the stdout and stderr channels are being written in UTF-16 (the main use for that encoding is the console on Windows). The tee interceptors you are using come after the data being written is encoded. There are a few ways to fix it, but the easiest is to open the file with the right encoding when reading it.
The output of the commands is not necessarily written to those channels. Code written in C or C++ is entirely free to write directly, and Tcl code cannot see that; it's all happening behind our backs. Command results can be intercepted using execution traces, but those cannot see anything that the commands internally print that isn't routed via the Tcl library somehow. (There are a few more options on Unix due to the different ways that the OS handles I/O.)
I don't know what's happening with the extra characters. I can tell you that you are getting what goes through the channel, but there are too many tricks (especially in interactive use!) for a useful guess on that front.
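If the log really is UTF-16 (which Tcl calls the unicode encoding), a sketch of reading it back with the right encoding, rather than fighting the tee layer:

```tcl
set f [open $LogFileName r]
fconfigure $f -encoding unicode   ;# Tcl's name for UTF-16 in native byte order
set contents [read $f]
close $f
```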

Redirecting output of tcl proc to file and output (like tee)

I need the redirect command that I found in one of my searches:
redirect -tee LogFile.log {include ../RunDemoTests.tcl}
Where include is a Tcl proc and ../RunDemoTests.tcl is a parameter to the proc. Is there a library I need in order to use redirect, or is this not general Tcl?
I am working in an EDA tool environment that runs under both Windows and Linux, so I need a solution that is plain Tcl and does not rely on anything from the OS.
I have tried numerous variations of:
set ch [open |[list include "../OsvvmLibraries/UART/RunDemoTests.tcl"] r]
set lf [open build.log w]
puts "Starting"
puts $lf "Starting"
while {[gets $ch line] >= 0} {
    puts $line
    puts $lf $line
}
close $lf
However, this only seems to work when the command is something from the OS environment, such as:
set ch [open |[list cat ../tests.pro] r]
The printing from this can be a significant number of lines. Buffering is OK, but collecting the whole file and then printing it is not, as the files can be long (180K lines).
In response to a question on comp.lang.tcl a while ago, I created a small Tcl module to provide tee-like functionality in Tcl. I have now published the code on the Tcl wiki.
You would use it like this:
package require tee
tee stdout build.log
try {
    puts "Starting"
    include ../OsvvmLibraries/UART/RunDemoTests.tcl
} finally {
    # Make sure the tee filter is always popped from the filter stack
    chan pop stdout
}
This assumes the include RunDemoTests.tcl command produces output to stdout.

Using Spawn-Expect mechanism in TCL-8.5

set pipeline [open "|Certify.exe args" "r"]
fconfigure $pipeline -blocking false
fconfigure $pipeline -buffering none
fileevent $pipeline readable [list handlePipeReadable $pipeline]

proc handlePipeReadable {pipe} {
    if {[gets $pipe line] >= 0} {
        # Managed to actually read a line; stored in $line now
    } elseif {[eof $pipe]} {
        # Pipeline was closed; get exit code, etc.
        if {[catch {close $pipe} msg opt]} {
            set exitinfo [dict get $opt -errorcode]
        } else {
            # Successful termination
            set exitinfo ""
        }
        # Stop the waiting in [vwait], below
        set ::donepipe $pipe
    } else {
        # Partial read; things will be properly buffered up for now...
    }
}

vwait ::donepipe
I have tried using a pipe in Tcl code, as above. But for some reason I want to convert this to the spawn/expect mechanism. I am grappling with it and facing issues when doing so. Can anyone please help me out?
Expect makes the pattern of usage very different and it uses a different way of interacting with the wrapped program that's much more like how interactive usage works (which stops a whole class of buffering-related bugs, which I suspect may be what you're hitting). Because of that, converting things over is not a drop-in change. Here's the basic pattern of use in a simple case:
package require Expect
# Note: different words become different arguments here
spawn Certify.exe args
expect "some sort of prompt string"
send "your input\r"; # \r is *CARRIAGE RETURN*
expect "something else"
send "something else\r"
expect eof
close
The real complexity comes when you need to set up timeouts, wait for multiple things at once, wait for patterns as well as literal strings, etc. But doing the same from ordinary Tcl (even ignoring the buffering problems) is much more work; it's also almost always more work in virtually every other language.
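For instance, a sketch of the timeout/multi-pattern style; the prompt strings are invented, since the real ones depend on what Certify.exe actually prints:

```tcl
package require Expect

set timeout 30    ;# seconds before the timeout branch fires
spawn Certify.exe args
expect {
    -re {[Ll]icense.*error} { puts "licensing problem"; exit 1 }
    "Certify>"              { send "run\r"; exp_continue }
    timeout                 { puts "no prompt within $timeout s"; exit 2 }
    eof                     { }
}
catch {close}
wait
```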
Note that Expect doesn't do GUI automation. Just command-line programs. GUI automation is a much more complex topic.
It's not possible to give generic descriptions of what might be done as it depends so much on what the Certify.exe program actually does, and how you work with it interactively.

Time resolved memory footprint of TCL exec

What is the behavior of Tcl's exec along a high-resolution time axis, i.e., its time-resolved memory footprint?
I understand that a fork() call will be used, which at first creates a copy of the memory image of the process and then proceeds.
Here's the motivation for my question:
A user gave me the following observation. A 64 GB machine has a Tcl-based tool interface running with 60 GB of memory used (let's assume swap is small). At the Tcl prompt he types exec ls and the process crashes with a memory error.
Your insight is much appreciated.
Thanks,
Gert
The exec command will call the fork() system call internally. This is usually OK, but it might run out of memory when the OS is configured not to swap and the originating Tcl process is very large (or if there is very little slop room; it depends on the actual situation, of course).
The ideas I have for reducing memory usage are either to use vfork() (by patching tclUnixPipe.c; you can define USE_VFORK in the makefile to enable that, and I don't know why it isn't used more widely) or to create a helper process early on (before lots of memory is used) that will do the execs on your main process's behalf. Here's how to do the latter:
# This is setup done at the start
set forkerProcess [open "|tclsh" r+]
fconfigure $forkerProcess -buffering line -blocking 0
puts $forkerProcess {
    fconfigure stdout -buffering none
    set tcl_prompt1 ""
    set tcl_prompt2 ""
    set tcl_interactive 0
    proc exechelper args {
        catch {exec {*}$args} value options
        puts [list [list $value $options]]
    }
}
# TRICKY BIT: Yield and drain anything unwanted
after 25
read $forkerProcess

# Call this, just like exec, to run programs without memory hazards
proc do-exec args {
    global forkerProcess
    fconfigure $forkerProcess -blocking 1
    puts $forkerProcess [list exechelper {*}$args]
    set result [gets $forkerProcess]
    fconfigure $forkerProcess -blocking 0
    while {![info complete $result]} {
        append result \n [read $forkerProcess]
    }
    lassign [lindex $result 0] value options
    return -options $options $value
}

TCL: Two way communication between threads in Windows

I need to have two-way communication between threads in Tcl, and all I can get is one-way, with parameter passing as my only master->helper communication channel. Here is what I have:
proc ExecProgram { command } {
    if { [catch {open "| $command" RDWR} fd] } {
        #
        # Failed, return error indication
        #
        error $fd
    }
    return $fd
}
To call tclsh83, for example: ExecProgram "tclsh83 testCases.tcl TestCase_01"
Within the testCases.tcl file I can use that passed in information. For example:
set myTestCase [lindex $argv 0]
Within testCases.tcl I can puts out to the pipe:
puts "$myTestCase"
flush stdout
And receive that puts within the master thread by using the process ID:
gets $app line
...within a loop.
Which is not very good. And not two-way.
Does anyone know of an easy two-way communication method for Tcl in Windows between two threads?
Here is a small example that shows how two processes can communicate. First off the child process (save this as child.tcl):
gets stdin line
puts [string toupper $line]
and then the parent process that starts the child and communicates with it:
set fd [open "| tclsh child.tcl" r+]
puts $fd "This is a test"
flush $fd
gets $fd line
puts $line
The parent uses the value returned by open to send and receive data to/from the child process; the r+ parameter to open opens the pipeline for both read and write.
The flush is required because of the buffering on the pipeline; it is possible to change this to line buffering using the fconfigure command.
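That is, a sketch using the $fd from the example above:

```tcl
fconfigure $fd -buffering line   ;# flush automatically at each newline
puts $fd "This is a test"        ;# no explicit flush needed now
```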
Just one other point: looking at your code, you aren't using threads here; you are starting a child process. Tcl has a threading extension which does allow proper inter-thread communication.
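For completeness, a sketch of genuine two-way communication with the Thread extension; the work proc is invented for illustration:

```tcl
package require Thread

# Create a worker thread that sits in its event loop waiting for scripts
set worker [thread::create {
    proc work {x} { return [string toupper $x] }
    thread::wait
}]

# Synchronous send: the script runs in the worker and the result comes back
set reply [thread::send $worker {work "this is a test"}]
puts $reply   ;# THIS IS A TEST

thread::release $worker
```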