Handle exit command executed by embedded Tcl runtime - tcl

I have a small shell application that embeds Tcl to execute some set of Tcl code. The Tcl interpreter is initialized using Tcl_CreateInterp. Everything is very simple:
user types Tcl command
the command gets passed to Tcl_Eval for evaluation
repeat
But if a user types 'exit', which is a valid Tcl command, the whole thing (the Tcl interpreter and my shell application) exits automatically.
Q: is there any way I can catch this exit signal coming from the Tcl interpreter? I would really prefer not to check every user command. I tried Tcl_CreateExitHandler, but it didn't work.
Thanks so much.

Get rid of the command
rename exit ""
Or redefine it to let the user know it's disabled:
proc exit {args} { error "The exit command is not available in this context" }
Also worth considering is running the user's code in a safe interp instead of in the main shell. Doing so would allow you to control exactly what the user has access to.
You might also be able to create a child interp (non-safe) and just disable the exit command for that interp.
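For example, here is a minimal sketch of the child-interp approach (the child variable name is just illustrative):
set child [interp create]            ;# a full child interp; interp create -safe hides exit by default
interp hide $child exit              ;# make exit unavailable inside the child
interp eval $child {puts "user code runs here"}
if {[catch {interp eval $child exit} msg]} {
    puts $msg                        ;# -> invalid command name "exit"
}
interp delete $child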
Lastly, you could just rename exit to something else, if you're only trying to avoid users typing it by mistake:
namespace eval ::hidden {}
rename exit ::hidden::exit

Rename the exit command:
rename exit __exit
proc exit {args} {
    puts -nonewline "Do you really want to exit? (y/n) "
    flush stdout
    gets stdin answer
    if {$answer eq "y"} {
        __exit [lindex $args 0]
    }
}
This way, when the user types exit, they will execute your custom exit command, in which you can do anything you like.

Using Tcl_CreateExitHandler works fine. The problem was that I added a printf into the handler implementation and the output didn't show up on the terminal, so I thought it wasn't being called. However, by the time this handler is executed there is no stdout any more. Running strace on the application shows that the handler is executed fine.
Another solution to this problem is to use atexit() and process the exit event there.

Related

How to make a github workflow error message show up in log and annotations?

I have a workflow step that runs this last stage of a shell command in a loop:
|| echo "::error Filename $filename doesn't match possible files" && exit 1 ; done
The exit is triggered appropriately, but the only annotation I see is:
Error: Process completed with exit code 1.
The log shows the entire shell command pipeline and also that same error message, but not my echoed error.
How do I get my output, including the $filename variable, to show up?
You have the wrong syntax:
echo "::error::Filename $filename doesn't match possible files"
You need to follow error with :: (the directive is ::error::message, not ::error message).
Here is a working example of a workflow using my suggestion:
https://github.com/grzegorzkrukowski/stackoverflow_tests/actions/runs/1835152772
If it still doesn't work, there must be something else wrong with your workflow - some other command is exiting with code 1 before the echo has a chance to execute.

How will I be able to send a Ctrl+C in Tcl

Is there a way that I could send a Ctrl+C signal to a Tcl program?
I have some Tcl code which, when I execute it, should internally go through a Ctrl+C signal and print something like:
puts "sent ctrl+c" within the same file.
proc abc {} {
    # Want to send ctrl+c
    # Here I want the command for ctrl+c
    puts "sent ctrl+c"
}
If you are sending the signal to a program under the control of Expect, you do:
send "\003"
That's literally the character that your keyboard generates immediately when you do Ctrl+C; it gets translated into a signal by the terminal driver.
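For example, a minimal Expect sketch (the spawned command is just a placeholder):
package require Expect
spawn ping localhost                 ;# placeholder for the program being controlled
sleep 2
send "\003"                          ;# the byte Ctrl+C generates at the keyboard
expect eof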
Otherwise, you need to use the TclX package (or Expect, though you should only use that if you need its full capabilities) which provides a kill command:
package require Tclx
kill SIGINT $theProcessID
# You could also use INT or 2 to specify the signal to send.
# You can provide a list of PIDs instead of just one too.
Knowing what process ID to send to is a matter of keeping track of things when you create the process. The current process's PID is returned by the pid command if you don't give it any arguments. The process ID(s) of the subprocesses created are returned by exec ... & for all the (known) processes in the background pipeline it creates. For pipelines created with open |..., pass the channel handle for the pipeline to the pid command to get the subprocess IDs.
set pipeline [open |[list program1 ... | program2 ... | program3 ...] "r+"]
puts $pipeline "here is some input"
set outputLine [gets $pipeline]
kill SIGINT [pid $pipeline]
# This close will probably produce errors; you've killed the subprocesses, after all
catch {close $pipeline}
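For the exec ... & case mentioned above, the return value is the list of PIDs, which can be handed straight to kill (a sketch; the program names are placeholders):
set pids [exec program1 | program2 &]    ;# PIDs of the background pipeline
kill SIGINT $pids                        ;# TclX's kill accepts a list of PIDs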
If you're handling the interrupt signal, use the signal command from TclX to do it:
package require Tclx
signal error SIGINT          ;# Generate a normal Tcl error on signal
signal trap SIGINT {         ;# Custom signal handler
    puts "SIGNALLED!"
    exit
}
signal default SIGINT        ;# Restore default behaviour
If you use signal error SIGINT, the error generated will have the message “SIGINT signal received” and the error code “POSIX SIG SIGINT”. This is easy to test for (especially with Tcl 8.6's try … trap … command).
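A minimal sketch of that test, assuming TclX is loaded and signal error SIGINT is in effect:
try {
    vwait forever                        ;# Ctrl+C now surfaces as a Tcl error
} trap {POSIX SIG SIGINT} {msg} {
    puts "interrupted: $msg"
}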

Communication between Tcl scripts

I was successfully able to redirect the standard output of a script called by my GUI (tcl/tk) using:
exec [info nameofexecutable] jtag.tcl >@$file_id
Now I want to be able to tell jtag.tcl to stop data acquisition (which is in an infinite loop) when I click the "stop" button. Is it possible through exec or should I use open instead?
The exec command waits until the subprocess finishes before returning control to you at all (unless you run totally disconnected in the background). To maintain control you need to open a pipeline:
# Not a read pipe since output is redirected
set pipe [open |[list [info nameofexecutable] jtag.tcl >@$file_id] "w"]
You also need to ensure that the other process listens for when the pipe is closed or have some other protocol for telling the other end to finish. The easiest mechanism to do that is for the remote end to put the pipe (which is its stdin) into non-blocking mode and to check periodically for a quit message.
# Put the pipe (which is the child's stdin) into nonblocking mode
fconfigure stdin -blocking 0
# Test for a quit message; put this somewhere it can be called periodically
if {[gets stdin] eq "quit"} {
    exit
}
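If the acquisition side services the event loop, the periodic check can be scheduled with after; a minimal sketch (the 100 ms interval and the checkForQuit name are just illustrative):
fconfigure stdin -blocking 0
proc checkForQuit {} {
    if {[gets stdin line] >= 0 && $line eq "quit"} {
        exit
    }
    after 100 checkForQuit       ;# poll again in 100 ms
}
checkForQuit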
Then the shutdown protocol for the subprocess becomes this in the parent process:
puts $pipe "quit"
close $pipe
Alternatively, kill the subprocess and pick up the results:
exec kill [pid $pipe]
# Need the catch; this will throw an error otherwise because of the signal
catch {close $pipe}

Print expect output to the terminal on the fly

I want to run a script on another machine by doing ssh there. This script might take a long time, and it would be better for me if I could display what is going on at each step of the execution.
But when I pass the script to spawn ssh, the task completes and it comes back silently. Is there a way to print the output from the other machine? I can also take a log, but it can only be accessed when the job is completed. Can I do it on the fly?
package require Expect
spawn ssh machine
expect -regexp {
    "bash" {
        send "perl script.pl\r"
    }
    timeout {
        exit
    }
    eof {
        exit
    }
    interact
}
puts "Done"
I have used puts $expect_out(buffer), which is giving me a "no such variable" error.
Thanks in advance.

Tcl open channel

How does one open a channel that is not a filename in Tcl? I've read the docs, but I'm not a programmer, so I must not understand the open and chan commands, because when I try to open a new custom channel
open customchannel1 RDWR
I get errors such as
couldn't execute "customchannel1": no such file or directory
And I'm fully aware that I don't do this correctly:
chan create read customchannel1
invalid command name "customchannel1" ...and... invalid command name "initialize"
All I want is two tcl scripts to be able to talk to each other. I thought I could use channels to do this.
I have, however, successfully created a socket test version of what I want:
proc accept {chan addr port} {
    puts "$addr:$port says [gets $chan]"
    puts $chan goodbye
    close $chan
}
puts -nonewline "master or slave? "
flush stdout
set name [gets stdin]
if {$name eq "master"} {
    puts -nonewline "Whats the port? "
    flush stdout
    set port [gets stdin]
    socket -server accept $port
    vwait forever
} else {
    puts "slave then."
    puts -nonewline "Whats the id? "
    flush stdout
    set myid [gets stdin]
    set chan [socket 127.0.0.1 $myid]
    puts $chan hello
    flush $chan
    puts "127.0.0.1:$myid says [gets $chan]"
    close $chan
}
In the above example I can run 3 instances of the program: 2 'masters' with different port numbers, and a 'slave' that can talk to either one depending on the port/'id' it chooses.
If I knew how to open a channel with the open command instead of the socket command, I could implement the above code without using sockets or jimmy-rigging the ports to serve as unique ids; but every example I can find opens files and writes out to files or standard out, which you don't have to create in the first place.
Thanks for helping me understand these concepts and how to implement them better!
A channel is simply a high level method for working with already open files or sockets.
From the manual page:
This command provides several operations for reading from, writing to and otherwise manipulating open channels (such as have been created with the open and socket commands, or the default named channels stdin, stdout or stderr which correspond to the process's standard input, output and error streams respectively).
So what you are doing with sockets is correct. You can use the chan command to configure the open socket.
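For instance, inside the accept proc from your example, the accepted socket could be tuned like this (just a sketch):
chan configure $chan -buffering line -translation auto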
When connecting two scripts together, you might think in terms of using a pipeline. For example, you could run one script as a subordinate process of the other. The master does this:
set pipe [open |[list [info nameofexecutable] $thescriptfile] "r+"]
to get a bidirectional (because r+) pipeline to talk to the child, which can in turn just use stdout and stdin as normal.
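The child's end can then be as simple as this sketch (the echo reply is just an illustration):
fconfigure stdout -buffering line    ;# see the note on buffering below
while {[gets stdin line] >= 0} {
    puts "child saw: $line"
}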
Within a process, chan pipe is available, which returns a pair of channels that are connected by an OS anonymous pipe.
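A minimal sketch of chan pipe in a single script:
lassign [chan pipe] readEnd writeEnd     ;# read side first, write side second
fconfigure $writeEnd -buffering line
puts $writeEnd "hello through the pipe"
puts [gets $readEnd]
close $writeEnd
close $readEnd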
When working with these, it really helps if you remember to use fconfigure to turn -buffering to none. Otherwise you can get deadlocks while output to a pipe sits in a buffer somewhere, which you don't want. The ultimate answer to that is to use Expect, which uses Unix ptys instead of pipes, but you can be quite productive provided you remember to tune the buffering.