how to get the parent process id in expect script? - tcl

I want to know the parent process ID of the parent inside an expect script. How would I do it?
I tried to use id process parent and got this error:
[sesiv@itseelm-lx4151 ~]$ invalid command name "id"
while executing
"id process parent "
invoked from within
"set parent_process_id [ id process parent ]"
I also tried
puts "parent is ---$env(PPID)**"
but it gave me this
[sesiv@itseelm-lx4151 ~]$ can't read "env(PPID)": no such variable
while executing
"puts "parent is ---$env(PPID)**""

The id command is part of the TclX package; you need to include it first:
package require Tclx
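After that, the call from your question should work:
set parent_process_id [id process parent]
puts "Parent PID: $parent_process_id"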
Update
Since your system does not have the TclX package, and since (judging by your prompt) you are running a Unix-like operating system, here is a solution which employs the ps command instead. It will not work under Windows.
# Returns the parent PID for a process ID (PID)
# If the PID is invalid, return 0
proc getParentPid {{myPid ""}} {
    if {$myPid == ""} { set myPid [pid] }
    set ps_output [exec ps -o pid,ppid $myPid]
    # To parse ps output, note that the output looks like this:
    #  PID  PPID
    # 4584   613
    # index 0="PID", 1="PPID", 2=myPid, 3=parent pid
    set checkPid [lindex $ps_output 2]
    set parentPid [lindex $ps_output 3]
    if {$checkPid == $myPid} {
        return $parentPid
    } else {
        return 0
    }
}
# Test it out
set myPid [pid]
set parentPid [getParentPid]
puts "My PID: $myPid"
puts "Parent PID: $parentPid"

Related

Tcl: how does this proc return a value?

I'm modifying the code below, but I have no idea how it works - enlightenment welcome. The issue is that there is a proc in it (cygwin_prefix) which is meant to create a command, by either
leaving a filename unmodified, or
prepending a string to the filename
The problem is that the proc returns nothing, but the script magically still works. How? Specifically, how does the line set command [cygwin_prefix filter_g] actually manage to correctly set command?
For background, the script simply execs filter_g < foo.txt > foo.txt.temp. However, historically (this no longer seems to be the case) this didn't work on Cygwin, so it instead ran /usr/bin/env tclsh filter_g < foo.txt > foo.txt.temp. The script as shown 'works' on both Linux (Tcl 8.5) and Cygwin (Tcl 8.6).
Thanks.
#!/usr/bin/env tclsh
proc cygwin_prefix { file } {
    global cygwin
    if {$cygwin} {
        set status [catch { set fpath [eval exec which $file] } result]
        if { $status != 0 } {
            puts "which error: '$result'"
            exit 1
        }
        set file "/usr/bin/env tclsh $fpath"
    }
    set file
}
set cygwin 1
set filein foo.txt
set command [cygwin_prefix filter_g]
set command "$command < $filein > $filein.temp"
set status [catch { eval exec $command } result]
if { $status != 0 } {
    puts "filter error: '$result'"
    exit 1
}
exit 0
The key to your question is two-fold.
If a procedure doesn't finish with return (or error, of course) the result of the procedure is the result of the last command executed in that procedure's body.
(Or the empty string, if no commands were executed. Doesn't apply in this case.)
This is useful for things like procedures that just wrap commands:
proc randomPick {list} {
    lindex $list [expr { int(rand() * [llength $list]) }]
}
Yes, you could add in return […] but it just adds clutter for something so short.
The set command, with one argument, reads the named variable and produces the value inside the var as its result.
A very long time ago (around 30 years now) this was how all variables were read. Fortunately for us, the $… syntax was added which is much more convenient in 99.99% of all cases. The only place left where it's sometimes sensible is with computed variable names, but most of the time there's a better option even there too.
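For instance, a tiny sketch of the computed-name case (the variable names here are invented for illustration):
set user3 "Ada"
set idx 3
puts [set user$idx]   ;# reads the variable named "user3"; plain $ syntax can't build the name like this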
The form you see with set file at the end of a procedure instead of return $file had currency for a while because it produced slightly shorter bytecode. By one unreachable opcode. The difference in bytecode is gone now. There's also no performance difference, and never was (especially by comparison with the weight of exec which launches subprocesses and generally does a lot of system calls!)
It's not required to use eval for exec. Building up a command as a list will protect you from, for example, path items that contain a space. Here's a quick rewrite to demonstrate:
proc cygwin_prefix { file } {
    if {$::cygwin} {
        set status [catch { set fpath [exec which $file] } result]
        if { $status != 0 } {
            error "which error: '$result'"
        }
        set file [list /usr/bin/env tclsh $fpath]
    }
    return $file
}
set cygwin 1
set filein foo.txt
set command [cygwin_prefix filter_g]
lappend command "<" $filein ">" $filein.temp
set status [catch { exec {*}$command } result]
if { $status != 0 } {
    error "filter error: '$result'"
}
This uses {*} to explode the list into individual words to pass to exec.
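The same expansion idiom protects any exec built up as a list; a small made-up example:
set cmd [list ls -l "/tmp/dir with spaces"]
exec {*}$cmd   ;# the path stays a single argument despite the embedded space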

Tcl [exec] process leaves zombie if the process forks and exits

I have a case when the Tcl script runs a process, which does fork(), leaves the forked process to run, and then the main process exits. You can try it out simply by running any program that forks to background, for example gvim, provided that it is configured to run in background after execution: set res [exec gvim].
The main process should theoretically exit immediately and the child process should keep running in the background, but somehow the main process hangs around: it stays in zombie state (reported as <defunct> in ps output).
In my case the process I'm starting prints something; I want to capture that output, and once the process exits I consider the job done. The problem is that if I spawn the process using open "|gvim" r, then I cannot recognize the moment when the process has finished. The fd returned by [open] never reports [eof], even when the program turns into a zombie. When I try to [read], just to collect everything the process might print, it hangs completely.
What is more interesting is that occasionally both the main process and the forked process print something, and when I try to read it using [gets], I get both. If I close the descriptor too early, [close] throws an exception due to a broken pipe. Probably that's why [read] never ends.
I need some method to recognize the moment when the main process exits, while this process could have spawned another child process, but this child process may be completely detached and I'm not interested in what it does. I want to capture what the main process prints before exiting, and the script should continue its work while the background process keeps running; I'm not interested in what happens to it.
I have control over the sources of the process I'm starting. Yes, I did signal(SIGCLD, SIG_IGN) before fork() - it didn't help.
Tcl clears up zombies from background process calls the next time it calls exec. Since a zombie really doesn't use much resources (just an entry in the process table; there's nothing else there really) there isn't a particular hurry to clean them up.
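So if a lingering zombie bothers you, any later exec will reap it; a trivial no-op is enough (a workaround, not a dedicated API):
exec /bin/true   ;# side effect: Tcl reaps any finished background children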
The problem you were having with the pipeline was that you'd not put it in non-blocking mode. To detect exit of a pipeline, you're best off using a fileevent which will fire when either there's a byte (or more) to read from the pipe or when the other end of the pipe is closed. To distinguish these cases, you have to actually try to read, and that can block if you over-read and you're not in non-blocking mode. However, Tcl makes working with non-blocking mode easy.
set pipeline [open |… "r"]
fileevent $pipeline readable [list handlePipeReadable $pipeline]
fconfigure $pipeline -blocking false

proc handlePipeReadable {pipe} {
    if {[gets $pipe line] >= 0} {
        # Managed to actually read a line; stored in $line now
    } elseif {[eof $pipe]} {
        # Pipeline was closed; get exit code, etc.
        if {[catch {close $pipe} msg opt]} {
            set exitinfo [dict get $opt -errorcode]
        } else {
            # Successful termination
            set exitinfo ""
        }
        # Stop the waiting in [vwait], below
        set ::donepipe $pipe
    } else {
        # Partial read; things will be properly buffered up for now...
    }
}

vwait ::donepipe
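If you want the numeric exit status, the -errorcode stored in exitinfo above is, for an abnormal exit, a list of the form CHILDSTATUS pid code; a small sketch of unpacking it inside the handler:
if {[lindex $exitinfo 0] eq "CHILDSTATUS"} {
    lassign $exitinfo tag childPid exitCode
    puts "pipeline process $childPid exited with status $exitCode"
}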
Be aware that using gvim in a pipeline is… rather more complex than usual, as it is an application that users interact with.
You might find it easier to run a simple exec in a separate thread, provided your version of Tcl is thread-enabled and the Thread package is installed. (That ought to be the case if you're using 8.6, but I don't know if that's true.)
package require Thread

set runner [thread::create {
    proc run {caller targetVariable args} {
        set res [catch {
            exec {*}$args
        } msg opt]
        set callback [list set $targetVariable [list $res $msg $opt]]
        thread::send -async $caller $callback
    }
    thread::wait
}]

proc runInBackground {completionVariable args} {
    global runner
    thread::send -async $runner [list run [thread::id] $completionVariable {*}$args]
}
runInBackground resultsVar gvim …
# You can do other things at this point
# Wait until the variable is set (by callback); alternatively, use a variable trace
vwait resultsVar
# Process the results to extract the sense
lassign $resultsVar res msg opt
puts "code: $res"
puts "output: $msg"
puts "status dictionary: $opt"
For all that, for an editor like gvim I'd actually expect it to be run in the foreground (which doesn't require anything like as much complexity) since only one of them can really interact with a particular terminal at once.
Your daemon can also call setsid() and setpgrp() to start a new session and to detach from the process group. But these don't help with your problem either.
You will have to do some process management yourself; note that kill -0 sends no signal, it merely tests whether the process still exists:
#!/usr/bin/tclsh

proc waitpid {pid} {
    set rc [catch {exec -- kill -0 $pid}]
    while { $rc == 0 } {
        set ::waitflag 0
        after 100 [list set ::waitflag 1]
        vwait ::waitflag
        set rc [catch {exec -- kill -0 $pid}]
    }
}

set pid [exec ./t1 &]
waitpid $pid
puts "exit tcl"
exit
Edit: Another unreasonable answer
If the forked child process closes the open channels, Tcl will not wait on it.
Test program:
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <signal.h>

int
main (int argc, char *argv [])
{
    int pid;
    FILE *o;

    signal (SIGCHLD, SIG_IGN);
    pid = fork ();
    if (pid == 0) {
        /* should also call setsid() and setpgrp() to daemonize */
        printf ("child\n");
        fclose (stdout);
        fclose (stderr);
        sleep (10);
        o = fopen ("/dev/tty", "w");
        fprintf (o, "child exit\n");
        fclose (o);
    } else {
        printf ("parent\n");
        sleep (2);
    }
    printf ("t1 exit %d\n", pid);
    return 0;
}
Test Tcl program:
#!/usr/bin/tclsh
puts [exec ./t1]
puts "exit tcl"
At first you say:
I need some method to recognize the moment when the main process exits, while this process could have spawned another child process, but this child process may be completely detached and I'm not interested in what it does.
later on you say:
If the forked child process closes the open channels, Tcl will not wait on it.
These are two contradictory statements. On one hand you are only interested in the parent process, and on the other in whether or not the child has finished, even though you also state you aren't interested in child processes that have detached. Last I heard, forking and closing the child's copies of the parent's stdin, stdout and stderr is detaching (i.e. daemonizing the child process). I wrote this quick program to run the simple C program included above, and as expected Tcl knows nothing of the child process. I called the compiled version of the program /tmp/compile/chuck. I did not have gvim, so I used emacs, but as emacs does not generate text I wrap the exec in its own Tcl script and exec that. In both cases, the parent process is waited for and eof is detected. When the parent exits, Runner::getData runs and the cleanup is evaluated.
#!/bin/sh
exec /opt/usr8.6.3/bin/tclsh8.6 "$0" ${1+"$@"}
namespace eval Runner {
    variable close
    variable watch
    variable lastpid ""
    array set close {}
    array set watch {}

    proc run { program { message "" } } {
        variable watch
        variable close
        variable lastpid
        if { $message ne "" } {
            set fname "/tmp/[lindex $program 0]-[pid].tcl"
            set out [open $fname "w"]
            puts $out "#![info nameofexecutable]"
            puts $out "catch { exec $program } err"
            puts $out "puts \"\$err\n$message\""
            close $out
            file attributes $fname -permissions 00777
            set fd [open "|$fname" "r"]
            set close([pid $fd]) "file delete -force $fname"
        } else {
            set fd [open "|$program" "r"]
            set close([pid $fd]) "puts \"cleanup\""
        }
        fconfigure $fd -blocking 0 -buffering none
        fileevent $fd readable [list Runner::getData [pid $fd] $fd]
    }

    proc getData { pid chan } {
        variable watch
        variable close
        variable lastpid
        set data [read $chan]
        append watch($pid) "$data"
        if {[eof $chan]} {
            catch { close $chan }
            eval $close($pid)   ;# cleanup
            set lastpid $pid
        }
    }
}
Runner::run /tmp/compile/chuck ""
Runner::run emacs " Emacs complete"

while { 1 } {
    vwait Runner::lastpid
    set p $Runner::lastpid
    catch { exec ps -ef | grep chuck } output
    puts "program with pid $p just ended"
    puts "$Runner::watch($p)"
    puts " processes that match chuck "
    puts "$output"
}
Output:
Note: I exited out of emacs after the child reported that it was exiting.
[user1@linuxrocks workspace]$ ./test.tcl
cleanup
program with pid 27667 just ended
child
parent
t1 exit 27670
 processes that match chuck 
avahi      936     1  0  2016 ?      00:04:35 avahi-daemon: running [linuxrocks.local]
admin    27992     1  0 19:37 pts/0  00:00:00 /tmp/compile/chuck
admin    28006 27988  0 19:37 pts/0  00:00:00 grep chuck
child exit
program with pid 27669 just ended
Emacs complete
Ok, I found the solution after a long discussion here:
https://groups.google.com/forum/#!topic/comp.lang.tcl/rtaTOC95NJ0
The below script demonstrates how this problem can be solved:
#!/usr/bin/tclsh
lassign [chan pipe] input output
chan configure $input -blocking no -buffering line ;# just in case :)
puts "Running $argv..."
set ret [exec {*}$argv 2>@stderr >@$output]
puts "Waiting for finished process..."
set line [gets $input]
puts "FIRST LINE: $line"
puts "DONE. PROCESSES:"
puts [exec ps -ef | grep [lindex $argv 0]]
puts "EXITING."
The only problem that remains is that there's still no way to know when the process has exited; the next [exec] (in this particular case probably the [exec ps ...] command) is what cleaned up the zombie. There's no universal method for that - the best you can do on POSIX systems is [exec /bin/true]. In my case it was enough to get the one line that the parent process prints, after which I can simply "let it go".
Still, it would be nice if [exec] could somehow return the PID of the first process, and if there were a standard [wait] command that could block until a process exits or check its running state (such a command is currently available in TclX).
Note that [chan pipe] is available only in Tcl 8.6, you can use [pipe] from TclX alternatively.
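A sketch of the TclX form (untested; TclX's pipe stores the read and write channel names in the variables you pass):
package require Tclx
pipe input output
fconfigure $input -blocking no -buffering line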

handling multiple processes in tcl/expect

I am trying to deal with two processes, which have to run simultaneously in expect. However, I keep getting the message that one of those processes does not exist.
Here is a minimal (not) working example (I am not really working with ftp, but that's something that will run for other people):
#!/usr/bin/expect
set spawn_id_bash [spawn /bin/bash]
set spawn_id_ftp [spawn ftp ftp.ccc.de]
send "anonymous\n"
expect {
    "*Password*" {
        puts "\nftp works"
    }
    default {
        puts "\nftp defaulted"
    }
}
set spawn_id $spawn_id_bash
send "uname\n"
expect {
    "*Linux*" {
        puts "\nbash works"
    }
    default {
        puts "\nbash defaulted"
    }
}
Unfortunately, the output is:
[martin@martin linuxhome]$ /tmp/blub.tcl
spawn /bin/bash
spawn ftp ftp.ccc.de
anonymous
Trying 212.201.68.160...
Connected to ftp.ccc.de (212.201.68.160).
220-+-+-+-+-+-+-+-+-+
220-|o|b|s|o|l|e|t|e|
220-+-+-+-+-+-+-+-+-+
220-
220-
220-Please use HTTP instead:
220-
220-* http://cdn.media.ccc.de
220
Name (ftp.ccc.de:martin): 331 Please specify the password.
Password:ftp works
can not find channel named "4648"
while executing
"send "uname\n""
(file "/tmp/blub.tcl" line 19)
I have followed the book "Exploring Expect" while writing this example and I do not see what I do differently.
I also tried using send -i and expect -i without any luck (the error message is gone, but otherwise -i seems to be ignored).
spawn returns the unix process id (PID, an integer), not the spawn_id (a string). For example:
# cat foo.exp
send_user "[spawn -noecho sleep 1] $spawn_id\n"
expect eof
# expect foo.exp
20039 exp6
#
You should write it like this:
spawn /bin/bash
set spawn_id_bash $spawn_id
spawn ftp ftp.ccc.de
set spawn_id_ftp $spawn_id
Then you can use expect -i and send -i.
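For example, the tail of your script would become (a sketch based on your own code):
send -i $spawn_id_bash "uname\n"
expect {
    -i $spawn_id_bash "*Linux*" {
        puts "\nbash works"
    }
    default {
        puts "\nbash defaulted"
    }
}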

Spawn multiple telnet with tcl and log the output separately

I'm trying to telnet to multiple servers with spawn, and I want to log the output of each in a separate file.
If I use spawn with log_file, everything is logged into the same file. But I want each session in a different file. How can I do this?
Expect's logging support (i.e., what the log_file command controls) doesn't let you set different logging destinations for different spawn IDs. This means that the simplest mechanism for doing what you want is to run each of the expect sessions in a separate process, which shouldn't be too hard provided you don't use the interact command. (The idea of needing to interact with multiple remote sessions at once is a bit strange! By the time you've made it sensible by grafting in something like the screen program, you might as well be using separate expect scripts anyway.)
In the simplest case, your outer script can be just:
foreach host {foo.example.com bar.example.com grill.example.com} {
    exec expect myExpectScript.tcl $host >@stdout 2>@stderr &
}
(The >@stdout 2>@stderr & part means “run in the background, with stdout and stderr connected to the usual overall destinations”.)
Things get quite a bit more complicated if you want to automatically hand information back and forth between the expect sessions. I hope that simple is good enough…
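For completeness, a sketch of what myExpectScript.tcl might contain (the script name comes from the snippet above; the telnet dialogue is hypothetical). Each process picks its own log with the log_file command:
# myExpectScript.tcl - hypothetical per-host worker
set host [lindex $argv 0]
log_file -noappend "$host.log"   ;# everything from here on goes to this host's log
spawn telnet $host
# ... the usual expect/send dialogue for one host ...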
I have found something useful at this link:
http://www.highwind.se/?p=116
LogScript.tcl
#!/usr/bin/tclsh8.5
package require Expect

proc log_by_trace {array element op} {
    uplevel {
        global logfile
        set file $logfile($expect_out(spawn_id))
        puts -nonewline $file $expect_out(buffer)
    }
}

array set spawns {}
array set logfile {}

# Spawn 1
spawn ./p1.sh
set spawns(one) $spawn_id
set logfile($spawn_id) [open "./log1" w]

# Spawn 2
spawn ./p2.sh
set spawns(two) $spawn_id
set logfile($spawn_id) [open "./log2" w]

trace add variable expect_out(buffer) write log_by_trace

proc flush_logs {} {
    global expect_out
    global spawns
    set timeout 1
    foreach {alias spawn_id} [array get spawns] {
        expect {
            -re ".+" {exp_continue -continue_timer}
            default { }
        }
    }
}
exit -onexit flush_logs

set timeout 5
expect {
    -i $spawns(one) "P1:2" {puts "Spawn1 got 2"; exp_continue}
    -i $spawns(two) "P2:2" {puts "spawn2 got 2"; exp_continue}
}
p1.sh
#!/bin/bash
i=0
while sleep 1; do
    echo P1:$i
    let i++
done
p2.sh
#!/bin/bash
i=0
while sleep 1; do
    echo P2:$i
    let i++
done
It is working perfectly :)

Running an external program from a Tcl script

I would like to export a variable depending on the result of a binary command. My Tcl script is this:
set A ""
exec sh -c "export A=\"`/usr/local/cuda/samples/1_Utilities/deviceQuery/deviceQuery -noprompt | grep ^Device | wc -l`\""
puts $A
if { $A == "1" } {
set CUDA_VISIBLES_DEVICES 0
} else {
set CUDA_VISIBLES_DEVICES 1
}
With this script, when I execute puts $A I don't get anything in the terminal... so I don't know what the if command is actually evaluating...
My "export" must return ONLY 1 or 0...
Sorry about my poor TCL level.
Thanks.
I guess what you want is something like this:
set a [exec /usr/local/cuda/samples/1_Utilities/deviceQuery/deviceQuery -noprompt | grep ^Device | wc -l]
You set the variable a in the Tcl context and assign the command's return value (i.e. its output text) to it.
The problem is that your exec'd command runs in its own process, so when you set a variable A, that A only exists as a shell variable for the life of that process. When the process exits, A goes away.
The exec command returns the stdout of the command that was exec'd. If you want the result of the command to be in a Tcl variable, you need to set the variable to be the result of the exec:
set A [exec ...]
For more information on the exec command, see the exec man page.
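Concretely, for the script in the question that would look something like this (paths copied from the question; a sketch, untested since it needs the CUDA deviceQuery sample):
set A [string trim [exec /usr/local/cuda/samples/1_Utilities/deviceQuery/deviceQuery -noprompt | grep ^Device | wc -l]]
# wc output may carry leading whitespace, hence the trim
if { $A == "1" } {
    set CUDA_VISIBLES_DEVICES 0
} else {
    set CUDA_VISIBLES_DEVICES 1
}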