I have a task involving Tcl scripts.
In the main script, I can invoke another script by using exec:
set AAA [exec tclsh "scriptA.tcl"]
and I can terminate the subprocess by pressing Enter to kill the process:
fileevent stdin readable killproc
vwait state
Now I want the corresponding process to be killed automatically when my
subprocess finishes its work normally, while keeping the press-Enter kill
working. How can I implement this without using open or bgexec? Thanks
When you do the first one:
set AAA [exec tclsh "scriptA.tcl"]
The Tcl program (or at least the current thread in that program) in which you've put that statement does not proceed until the subprocess has completed. Tcl stops and waits for it.
The other alternative (that doesn't use open or bgexec) is to put an & as the last argument:
set AAA [exec tclsh "scriptA.tcl" &]
However, in this case the subprocess is started in the background and there's no real connection to the master process, which continues immediately. The AAA variable in this case wouldn't contain the output from the program, but rather the process ID of the subprocess; you can use that ID with your platform's usual system tools to monitor it. (Many unixes let you look at /proc/$processID to find out information about running processes. Not all though; it isn't standardised, and the contents of that directory really aren't standardised either. You could also look at what tools the TclX package provides; several of them can use a process ID, provided you know how to use the POSIX system calls that it wraps.)
In Tcl 8.6, we added other options for monitoring and handling a subprocess: you can make a genuine OS-understood unidirectional raw unnamed pipe with chan pipe (it's hooked up to a pair of channels), and can close just one end of a bidirectional channel using an extra argument to close. This lets you do things like using a fileevent to monitor a subprocess even if you made it with exec, but it's not really much simpler at that point than using open. The exec command supports connecting channels directly to subprocesses, but currently only for channels that have real OS handles associated with them (pipes, yes; sockets, yes; terminals, yes; files, yes; elaborate script-driven channels made with chan create, no).
In general, we recommend that you keep things simple:
exec … for when you want to run a subprocess and wait immediately for the result.
open |[list …] for when you want to run a subprocess in the background and interact with it (or bgexec, which provides a slightly different interface to the same functionality). Remember that Tcl's fileevent works with pipelines and terminals as well as sockets.
exec … & for when you want to run a subprocess in the background and completely relinquish control over it.
And in the really complicated cases, there's the Expect package.
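For the original question's constraints (no open, no bgexec), a Tcl 8.6 sketch along the lines described above might look like this. It is an untested illustration rather than a canonical recipe; scriptA.tcl and the Unix kill command are assumptions:

```tcl
# Sketch (Tcl 8.6+, Unix): run scriptA.tcl in the background with its stdout
# tied to one end of a [chan pipe]; EOF on the read end tells us the
# subprocess has finished, while Enter still kills it early.
lassign [chan pipe] readEnd writeEnd
set pid [exec tclsh scriptA.tcl >@ $writeEnd &]
close $writeEnd                 ;# parent keeps only the read end

fileevent $readEnd readable {
    if {[eof $readEnd]} {
        close $readEnd          ;# subprocess exited on its own
        set state done
    } else {
        gets $readEnd line      ;# consume (or display) subprocess output
    }
}
fileevent stdin readable {
    gets stdin                  ;# Enter pressed: kill the subprocess
    exec kill $pid
    set state killed
}
vwait state
```

Either path sets state, so vwait returns whichever way the subprocess ends.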
How can I use write ('w') and read ('r') access while using a command pipeline in the open command in Tcl?
When I do something like:
set f1 [open "| ls -l" w]
it returns a file descriptor to write to, say file1.
Now I am confused about how I can put this file descriptor to use.
PS: My example might be wrong; in that case it would be ideal if the answer included a programming example to make things clearer.
Thanks
In general, the key things you can do with a channel are write to it (using puts), read from it (using gets and read), and close it. Obviously, you can only write to it if it is writable, and only read from it if it is readable.
When you write to a channel that is implemented as a pipeline, you send data to the program on the other end of the pipe; that's usually consuming it as its standard input. Not all programs do that; ls is one of the ones that completely ignores its standard input.
But the other thing you can do, as I said above, is close the channel. When you close a pipeline, Tcl waits for all the subprocesses to terminate (if they haven't already) and collects their standard error output, which becomes an error message from close if there is anything. (The errors are just like those you can get from calling exec; the underlying machinery is shared.)
There's no real point in running ls in a pure writable pipeline, at least not unless you redirect its output. Its whole purpose is to produce output (the sorted list of files, together with extra details with the -l option). If you want to get the output, you'll need a readable channel (readable from the perspective of Tcl): open "| ls -l" r. Then you'll be able to use gets $f1 to read a line from the subprocess.
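A minimal sketch of that readable-pipeline version (untested, but following the pattern just described):

```tcl
# Read the output of ls -l line by line through a readable pipeline.
set f1 [open "| ls -l" r]
while {[gets $f1 line] >= 0} {
    puts "line: $line"
}
close $f1                       ;# also reaps the subprocess
```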
But since ls is entirely non-interactive and almost always has a very quick running time (unless your directories are huge or you pass the options to enable recursion), you might as well just use exec. This does not apply to other programs. Not necessarily anyway; you need to understand what's going on.
If you want to experiment with pipelines, try using sort -u as the subprocess. That takes input and produces output, and exhibits all sorts of annoying behaviour along the way! Understanding how to work with it will teach you a lot about how program automation can be tricky despite it really being very simple.
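As a sketch of that experiment (Tcl 8.6+ for the half-close), note that sort -u cannot produce any output until its input side has been closed:

```tcl
# Bidirectional pipeline to sort -u: write lines, half-close, then read.
set f [open "| sort -u" r+]
puts $f "banana"
puts $f "apple"
puts $f "banana"
chan close $f write             ;# send EOF so sort can emit its results
while {[gets $f line] >= 0} {
    puts "got: $line"           ;# prints apple then banana
}
close $f
```

Forgetting the half-close (or the flush it implies) is exactly the sort of annoying behaviour mentioned above: both ends sit waiting for each other.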
I have just started to use the Tcl language and I need to create a script with several functions triggered every 2 seconds. I have been searching for an answer on the internet and found several topics about it. For instance, I found this code on Stack Overflow (How do I use "after ms script" in TCL?):
#!/usr/bin/tclsh
proc async {countdown} {
    puts $countdown
    incr countdown -1
    if {$countdown > 0} {
        after 1000 "async $countdown"
    } else {
        after 1000 {puts Blastoff!; exit}
    }
}
async 5
# Don't exit the program and let async requests
# finish.
vwait forever
This code could very easily be adapted to what I want to do, but it doesn't work on my computer. When I copy-paste it into my IDE, the code waits several seconds before giving all the output in one go.
I had the same problem with the other code I found on the internet.
It would be great if someone could help me.
Thanks a lot in advance.
I've just pasted the exact script that you gave into a tclsh (specifically 8.6) running in a terminal on macOS, and it works. I would anticipate that your script will work on any version from about Tcl 7.6 onwards, which is going back nearly 25 years.
It sounds instead like your IDE is somehow causing the output to be buffered. Within your Tcl script, you can probably fix that by either putting flush stdout after each puts call, or by the (much easier) option of putting this at the start of your script:
fconfigure stdout -buffering line
# Or do this if you're using partial line writes:
# fconfigure stdout -buffering none
The issue is that Tcl (in common with many other programs) detects whether its standard output is going to a terminal or some other destination (file or pipe or socket or …). When output is to a terminal, it sets the buffering mode to line and otherwise it is set to full. (By contrast, stderr always has none buffering by default so that whatever errors occur make it out before a crash; there's nothing worse than losing debugging info by default.) When lots of output is being sent, it doesn't matter — the buffer is only a few kB long and this is a very good performance booster — but it's not what you want when only writing a very small amount at a time. It sounds like the IDE is doing something (probably using a pipe) that's causing the guess to be wrong.
(The tcl_interactive global variable is formally unrelated; that's set when there's no script argument. The buffering rule applies even when you give a script as an argument.)
The truly correct way for the IDE to fix this, at least on POSIX systems, is for it to use a virtual terminal to run scripts instead of a pipeline. But that's a much more complex topic!
Reading about bash exec, one can create and redirect pipes other than the standard ones, e.g. exec 3>&4.
Reading about Tcl exec, there is no mention of non-standard pipes. The omission seems deliberate.
The use case is a launcher starting many executables communicating over multiple pipes (possibly circular fashion). I was thinking something like:
lassign [chan pipe] a2b_read a2b_write
exec a 3 3>#$a2b_write
exec b 3 3<#$a2b_read
...
...where 'a' is an executable taking a file descriptor argument controlling where a should write stuff, and vice versa for executable 'b'. Using the standard pipes does not work when executables communicate over multiple pipes.
I know how to do this using a named pipe, but I would much rather tie the pipe's lifetime to that of the process.
Tcl has no built-in binding for dup() at all, and only uses dup2() in a very limited way (only for the three standard channels). Without those, this functionality is not going to work. This is where you need TclX, where you can take full control of the channel handling and process launching and do whatever you want (via fork, dup and execl; note that that's not at all like exec and much more like the POSIX system call).
Or do the trickery in a subordinate shell script.
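As a very rough, untested sketch of the TclX route — the exact dup target syntax for an arbitrary descriptor number is an assumption here; consult the TclX manual before relying on it:

```tcl
# Hypothetical TclX sketch: hand the write end of a pipe to child 'a' as fd 3.
package require Tclx

lassign [chan pipe] rd wr
set pid [fork]
if {$pid == 0} {
    # Child process: rebind the write end so 'a' sees it as descriptor 3,
    # then replace this process image with ./a (execl, not Tcl's exec).
    close $rd
    dup $wr file3               ;# ASSUMED syntax for "dup onto fd 3"
    execl ./a {3}
}
close $wr
# ...the parent would then fork/execl 'b' with $rd bound to its fd 3 similarly.
```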
Within a tcl program I am trying to exec a program which takes input using <
puts [exec "./program < inputfile"]
However, this produces the error
couldn't execute "./program < inputfile": no such file or directory
Is there a way of doing this in tcl?
Tcl can process redirections itself, so you would write:
puts [exec ./program <inputfile]
Otherwise, it tries to interpret the whole thing as a single (somewhat strange) filename; legal (on Unix, not on Windows) but not what you wanted.
Alternatively, you can fire the whole thing off through the system shell:
puts [exec /bin/sh -c "./program < inputfile"]
That works, but has many caveats. In particular, quoting things for the shell is a non-trivial problem, and you're not portable to Windows (where the incantation for running things through the command-line processor is a bit different). You also have an extra process used, but that's not really a big problem in practice unless you're really stretching the limits of system resources.
The plus side is that you can use full shell syntax in there, which can do a few things that are downright awkward otherwise. It's also more widely known, so something I'd surface to users with a medium level of expertise. (New users should stick to canned stuff; real experts can write some Tcl scripts.)
I found the answer; the command has to be written as
puts [exec "./program" < inputfile]
Is there a way to make the Tcl interpreter source a file and open a pipe from a shell command in parallel?
In more detail, I have a GUI built with Tcl/Tk. I want my Tcl script to source a settings file for GUI variables, and at the same time open a pipe from [tclsh setting_file] to redirect the output to my GUI's stdout.
Thank you very much!
I'm not convinced that running the processing of the settings file in a subprocess is a good idea. Maybe a safe interpreter would be better?
Re trapping the output, you could pick a technique for doing stdout capture and then show the contents of the captured buffer in the GUI (after using encoding convertfrom to get the characters back if you're using my solution to that problem) but you've got a general issue that it is possible for user code to block things up if it takes a long time to run. You could work around that by using threads, but I suspect it is easier to avoid the complexity and to just let badly-written setup code cause problems that the user will have to fix. (The catch command can help you recover from any outright errors during the sourcing of the settings file.)
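One simple (if blunt) capture technique, different from the channel-based capture mentioned above, is to intercept puts while the settings file is being sourced. This sketch assumes a Tk text widget .log already exists and only handles the one-argument form of puts:

```tcl
# Temporarily intercept [puts] so output from the sourced settings file
# lands in a text widget instead of stdout. Assumes .log already exists.
rename puts _real_puts
proc puts {args} {
    if {[llength $args] == 1} {
        .log insert end [lindex $args 0]\n
    } else {
        _real_puts {*}$args     ;# other forms fall through untouched
    }
}
catch {source settings.tcl} err ;# recover from errors in the settings file
rename puts {}
rename _real_puts puts          ;# restore the real puts afterwards
```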