I want my ksh script to stop on unexpected errors. I also use nested functions to reuse recurrent code.
So I use the errexit feature ('set -e'). The script crashes on error -> great.
Sometimes, though, I want to catch the error instead of crashing.
I have a function func. I want the following (requirements):
in the function: it should stop on error
in the parent: it should also stop on error
except that, in the parent, an error while executing 'func' should be caught
I coded this:
# parent script
func()
{ (
    set -e # function should crash on 1st error
    rm foo ; rm foo # this will trigger an error
    echo "Since -e is set, this part of the script should never get reached"
) }
set -e # parent script should crash on 1st error
echo "do stuff"
# we want to execute 'func' and catch any error
if func ; then
    echo "Execution of func was ok"
else
    echo "Execution of func was not ok"
fi
echo "do more stuff"
It does not work. Here is the output:
do stuff
rm: cannot remove 'foo': No such file or directory
rm: cannot remove 'foo': No such file or directory
Since -e is set, this part of the script should never get reached
Execution of func was ok
do more stuff
Since 'func' is executed within a test (if func), errexit is disabled. This means that errors inside func do not crash the function: it proceeds and exits with status 0, and the parent is never notified of the error. Even if I explicitly set errexit again inside func, it stays disabled.
If I do not execute func within a test, errexit kicks in and the parent crashes: I cannot catch the exception (in other words: I'm func'ed). I could temporarily disable errexit around the call to func, but that seems like an unclean workaround.
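For reference, that workaround would look something like the sketch below. It relies on func's inner 'set -e' taking effect because the call is no longer part of a condition:

set +e              # temporarily disable errexit around the call
func
rc=$?
set -e              # re-enable it for the rest of the parent script
if [ "$rc" -eq 0 ] ; then
    echo "Execution of func was ok"
else
    echo "Execution of func was not ok"
fi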
Any idea how I could reach my requirements?
Have you tried running a search on "set -e" (include the quotes)? Or perhaps a Google search, e.g. ksh "set -e" site:stackoverflow.com; it turns up several hits with proposed answers.
I have a workflow step that runs a shell command in a loop; each iteration ends with this last stage:
|| echo "::error Filename $filename doesn't match possible files" && exit 1 ; done
The exit is triggered appropriately, but the only annotation I see is:
Error: Process completed with exit code 1.
The log shows the entire shell command pipeline and that same error message, but not my echoed error.
How do I get my output, including the $filename variable, included?
You have the wrong syntax:
echo "::error::Filename $filename doesn't match possible files"
The error keyword must be followed by :: before the message text.
Here is a working example of workflow using my suggestion:
https://github.com/grzegorzkrukowski/stackoverflow_tests/actions/runs/1835152772
If it still doesn't work, something else must be wrong with your workflow: another command is exiting with code 1 before the echo has a chance to execute.
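For illustration, a corrected loop might look like the sketch below; the question only shows the tail of the pipeline, so files and matches_possible_files are hypothetical stand-ins for whatever list and check are actually used:

for filename in "${files[@]}"; do
  matches_possible_files "$filename" \
    || { echo "::error::Filename $filename doesn't match possible files"; exit 1; }
done

Grouping the echo and exit in braces ensures the step fails only after the annotation has been emitted.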
Is there a way I could send a Ctrl+C signal to a Tcl program?
I have a Tcl script that, when executed, should internally send itself a Ctrl+C signal and then print something like puts "sent ctrl+c" from within the same file:
proc abc {} {
    # Want to send Ctrl+C here; which command do I use?
    puts "sent ctrl+c"
}
If you are sending the signal to a program under the control of Expect, you do:
send "\003"
That's literally the character that your keyboard generates immediately when you do Ctrl+C; it gets translated into a signal by the terminal driver.
Otherwise, you need to use the TclX package (or Expect, though you should only use that if you need its full capabilities), which provides a kill command:
package require Tclx
kill SIGINT $theProcessID
# You could also use INT or 2 to specify the signal to send.
# You can provide a list of PIDs instead of just one, too.
Knowing what process ID to send to is a matter of keeping track of things when you create the process. The current process's PID is returned by the pid command if you don't give it any arguments. The process ID(s) of the subprocesses created are returned by exec ... & for all the (known) processes in the background pipeline it creates. For pipelines created with open |..., pass the channel handle for the pipeline to the pid command to get the subprocess IDs.
set pipeline [open |[list program1 ... | program2 ... | program3 ...] "r+"]
puts $pipeline "here is some input"
set outputLine [gets $pipeline]
kill SIGINT [pid $pipeline]
# This close will probably raise an error; you've killed the subprocesses, after all
catch {close $pipeline}
If you're handling the interrupt signal, use the signal command from TclX to do it:
package require Tclx
signal error SIGINT   ;# Generate a normal Tcl error on the signal
signal trap SIGINT {  ;# Or install a custom signal handler
    puts "SIGNALLED!"
    exit
}
signal default SIGINT ;# Restore the default behaviour
If you use signal error SIGINT, the error generated will have the message “SIGINT signal received” and the error code “POSIX SIG SIGINT”. That is easy to test for (especially with Tcl 8.6's try … trap … command).
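For instance, here is a minimal sketch of trapping that error code with Tcl 8.6; the vwait is just a stand-in for whatever the program actually does:

package require Tclx
signal error SIGINT
try {
    vwait forever           ;# stand-in for the real work
} trap {POSIX SIG SIGINT} {msg} {
    puts "caught: $msg"     ;# prints: caught: SIGINT signal received
}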
I was successfully able to redirect the standard output of a script called by my GUI (Tcl/Tk) using:
exec [info nameofexecutable] jtag.tcl >@$file_id
Here's a description of my system.
Now I want to be able to tell jtag.tcl to stop data acquisition (which is in an infinite loop) when I click the "stop" button. Is this possible through exec, or should I use open instead?
The exec command waits until the subprocess finishes before returning control to you at all (unless you run totally disconnected in the background). To maintain control you need to open a pipeline:
# Not a read pipe, since output is redirected
set pipe [open |[list [info nameofexecutable] jtag.tcl >@$file_id] "w"]
You also need to ensure that the other process listens for when the pipe is closed or have some other protocol for telling the other end to finish. The easiest mechanism to do that is for the remote end to put the pipe (which is its stdin) into non-blocking mode and to check periodically for a quit message.
# Putting the pipe into nonblocking mode
fconfigure stdin -blocking 0
# Testing for a quit message; put this in somewhere it can be called periodically
if {[gets stdin] eq "quit"} {
    exit
}
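One way to arrange that periodic check in jtag.tcl is with the event loop; this is a sketch assuming the acquisition code lets the event loop run (the 100 ms interval is arbitrary):

fconfigure stdin -blocking 0
proc checkForQuit {} {
    if {[gets stdin] eq "quit"} {
        exit
    }
    after 100 checkForQuit  ;# poll again in 100 ms
}
checkForQuit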
Then the shutdown protocol for the subprocess becomes this in the parent process:
puts $pipe "quit"
close $pipe
Alternatively, kill the subprocess and pick up the results:
exec kill [pid $pipe]
# Need the catch; this will throw an error otherwise because of the signal
catch {close $pipe}
I have a situation where only root can run mailx and only ops can restart the process. I want to make an automated script that both restarts the process and sends an email about doing so.
When I try doing this with a function, the function is "not found".
I had something like:
#!/usr/bin/bash
function restartprocess {
    /usr/bin/processcontrol.sh start
}
export -f restartprocess
su - ops -c "restartprocess"
mailx -s "process restarted" myemail@mydomain.com < emailmessage.txt
exit 0
It told me that the function was not found. After some troubleshooting, it turned out that the ops user's default shell is ksh.
I tried changing the script to run in ksh and changing "export -f" to "typeset -xf", and still the function was not found. Like:
ksh: exportfunction not found
I finally gave up and just called the script (the one inside the function) directly, and that worked. It was like:
su - ops -c "/usr/bin/processcontrol.sh start"
(This is all of course a simplification of the real script).
Given that the ops user's default shell is ksh, and I can't change that or modify sudoers, is there a way to export a function such that I can su as ops (and I need ops's profile to be run) and execute that function?
I made sure ops user had permission to the directory of the script I wanted it to execute, and permission to run that script.
Any education about this would be appreciated!
There are many restrictions on exporting functions, especially combined with su - ... across different accounts and different shells.
Instead, turn your script inside out and put the whole command that is to be run inside a function in the calling shell.
Something like this (works in both bash and ksh):
#!/usr/bin/bash
function restartprocess {
    /bin/su - ops -c "/usr/bin/processcontrol.sh start"
}

if restartprocess; then
    mailx -s "process restarted" \
        myemail@mydomain.com < emailmessage.txt
fi
exit 0
This hides all of the /bin/su processing inside the restartprocess function, which can be expanded at will.
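For example, the function might be expanded along these lines; this is just a sketch, and the logger call is purely illustrative:

function restartprocess {
    /bin/su - ops -c "/usr/bin/processcontrol.sh start" || return $?
    # anything else that should happen on a successful restart goes here
    logger -t restart "processcontrol.sh restarted"
}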
I have a small shell application that embeds Tcl to execute a set of Tcl commands. The Tcl interpreter is initialized using Tcl_CreateInterp. Everything is very simple:
the user types a Tcl command
the command gets passed to Tcl_Eval for evaluation
repeat
But if a user types 'exit', which is a valid Tcl command, the whole thing - the Tcl interpreter and my shell application - exits automatically.
Q: is there any way I can catch this exit coming from the Tcl interpreter? I would really rather not check every user command. I tried Tcl_CreateExitHandler, but it didn't work.
Thanks so much.
Get rid of the command:
rename exit ""
Or redefine it to let the user know it's disabled:
proc exit {args} { error "The exit command is not available in this context" }
Also worth considering is running the user's code in a safe interp instead of in the main shell. Doing so would allow you to control exactly what the user has access to.
You might also be able to create a child interp (non-safe) and just disable the exit command for that interp.
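A minimal sketch of that child-interp idea; childExitRequest is a hypothetical handler name, and the alias replaces the child's built-in exit:

set child [interp create]
proc childExitRequest {{code 0}} {
    puts "child asked to exit with code $code"
}
# Route the child's exit into the parent's handler instead:
interp alias $child exit {} childExitRequest

With this in place, interp eval $child {exit 3} prints the message instead of terminating the process.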
Lastly, you could just rename exit to something else, if you're only trying to avoid users typing it by mistake:
namespace eval ::hidden {}
rename exit ::hidden::exit
Rename the exit command:
rename exit __exit
proc exit {{code 0}} {
    puts -nonewline "Do you really want to exit? (y/n) "
    flush stdout
    gets stdin answer
    if {$answer eq "y"} {
        __exit $code
    }
}
This way, when the user types exit, he/she will execute your custom exit command, in which you can do anything you like.
Using Tcl_CreateExitHandler works fine. The problem was that I had added a printf to the handler implementation and its output didn't show up on the terminal, so I assumed the handler wasn't being called. However, by the time the handler executes there is no stdout any more; running strace on the application shows that it is in fact executed fine.
Another solution to this problem is to use atexit and process the exit event there.
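For reference, a minimal sketch of registering such a handler from the embedding C code; since stdout may already be torn down when it runs, it writes straight to file descriptor 2:

#include <tcl.h>
#include <unistd.h>

static void OnTclExit(ClientData clientData)
{
    /* stdout may already be gone here, so write directly to fd 2 */
    static const char msg[] = "Tcl exit handler ran\n";
    write(2, msg, sizeof msg - 1);
}

int main(int argc, char **argv)
{
    Tcl_FindExecutable(argv[0]);
    Tcl_Interp *interp = Tcl_CreateInterp();
    Tcl_CreateExitHandler(OnTclExit, NULL);
    Tcl_Eval(interp, "exit");   /* runs the handler on the way out */
    return 0;                   /* never reached; Tcl_Exit terminates */
}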