ignoring the error output of a script run from Tcl - tcl

I am calling an external script from Tcl. The script prints its result to stdout, so I use the command
set scriptRes [exec ${dir}/bin/script $obm_file]
$obm_file is an argument to the script: the name of its input file.
In some cases the input file is not perfect, so the script produces good output and then reports an error, which it prints to stderr. Is there a way to tell Tcl to take only the "good" output, i.e. the output that went to stdout, and disregard the error message?

The -ignorestderr option is what you need:
set scriptRes [exec -ignorestderr ${dir}/bin/script $obm_file]
Failing that (e.g., if your Tcl version is too old) use:
set scriptRes [exec ${dir}/bin/script $obm_file 2> /dev/null]
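Note that even with stderr out of the way, exec will still raise an error if the script itself exits with a non-zero status. If that can happen, a minimal sketch (assuming you still want whatever was printed before the failure) is to wrap the call in catch:
if {[catch {exec -ignorestderr ${dir}/bin/script $obm_file} scriptRes]} {
    # The script exited abnormally; $scriptRes holds the captured stdout with a
    # message such as "child process exited abnormally" appended, and
    # $::errorCode describes the exit status.
}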

Related

Separating stdout and stderr in Tcl drops all but last line of stderr

I'm not a Tcl programmer, but I need to modify a Tcl script that invokes an external command and tries to separate stdout and stderr. The following is a minimal example of how the script currently does this.
#!/usr/bin/tclsh8.4
set pipe [open "|cmd" r]
while {[gets $pipe line] >= 0} {puts $line}
catch "close $pipe" errorMsg
puts "$errorMsg"
Here, cmd is an external command, and for the sake of this example I will replace it with the following shell script. (I'm working on a Linux machine, but you can modify this to write to stdout and stderr however is appropriate for your system.)
#!/bin/sh -f
echo "A" > /dev/stdout
echo "B" > /dev/stdout
echo "C" > /dev/stderr
echo "D" > /dev/stderr
When I execute cmd, I get the following four lines as expected:
% ./cmd
A
B
C
D
However, when I execute my Tcl script, I get:
% ./test.tcl
A
B
D
This is an example of a more general phenomenon, which is that catch seems to swallow all but the last line of stderr.
To me, the "obvious" way to approach this is to try to mimic what is happening with stdout, which obviously works and prints all lines of the output. However, the current implementation is based on getting a Tcl channel by using open "|cmd", which requires running an external command. I can't figure out how to create a channel without opening an external command, and even if I could figure that out, there are subsequent issues with this approach. (How do I get the output of close into the channel? And if I need to open a new channel to get the output of each channel I am closing, then wouldn't I need an infinite number of channels?)
If anyone has any idea why errorMsg drops the initial lines or another approach that does not suffer from this problem, please let me know.
I know that this will come up, so I will say in advance that switching to Tcl 8.5 is probably not an option for me in the short term, since I do not control the environment in which this script is run.
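(Not an explanation for the dropped line, just a possible alternative approach, sketched with an illustrative temporary-file path: open accepts the same redirection syntax as exec, so the child's stderr can be sent to a file in the pipeline specification and read back after the pipe is closed.)
set errFile /tmp/cmd.stderr                 ;# illustrative path
set pipe [open "|cmd 2>$errFile" r]
while {[gets $pipe line] >= 0} {puts $line}
catch {close $pipe}                         ;# close may still error on a non-zero exit
set f [open $errFile r]
puts -nonewline [read $f]                   ;# all of stderr, in order
close $f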

How to download and then use the file in the same tcl script?

I'm new using Tcl and I have the following script:
proc prepare_xml {pdb_id} {
    set filename [exec wget ftp://ftp.ebi.ac.uk/pub/databases/msd/sifts/xml/$pdb_id.xml.gz]
    set filename_unzip [exec gunzip "$pdb_id.xml.gz"]
    set ready_xml [exec sed -i "/entry /c\<entry>" "$pdb_id.xml"]
    return $ready_xml
}
The expected output is the downloaded file, uncompressed and modified. However, when I execute it the first time, it only downloads the file and does not uncompress it. If I execute it a second time, I obtain the expected output plus a second copy of the original downloaded file.
Can anyone help me with this? I've tried the after and vwait commands but it doesn't work.
Thank you :)
It's hard to say for sure as you're not describing whether any errors are thrown (that'd be the only reason for the code to not run to completion), but I'd expect something like this to be the right approach:
proc prepare_xml {pdb_id} {
    # Double quotes on next line just because of Stack Overflow highlighter
    set url "ftp://ftp.ebi.ac.uk/pub/databases/msd/sifts/xml/$pdb_id.xml.gz"
    set file $pdb_id.xml
    append sedcode {/entry /} "c\\\n" {<entry>}
    exec wget -q -O - $url | gunzip -c | sed $sedcode > $file
    return $file
}
Firstly, I'm keeping complicated bits in (local) variables to stop the exec line from getting too long. Secondly, I've put all the subprocesses together in the one pipeline. Thirdly, I'm using -q and -O - with wget, and -c with gunzip; look up what they do if you don't understand them. Fourthly, I've put the scriptlet for sed in braces where possible to avoid trouble with backslashes, but I've used append and a non-backslashed section to build the pattern because the syntax of c in sed is downright weird (it needs a backslash-newline sequence immediately after it on at least some platforms…)
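For clarity (this is just the expansion of the above, not extra functionality), after the append the sedcode variable holds the two-line sed script
/entry /c\
<entry>
i.e. the c command replaces every line matching /entry / with the literal line <entry>.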
I'd actually use native Tcl code to extract and transform the data if I was doing it for me, but that's a rather larger change.
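A hypothetical call, with a made-up PDB id, just to show the intended usage:
set xmlFile [prepare_xml 1abc]
puts "prepared $xmlFile"    ;# prints: prepared 1abc.xml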

check file for corruption and fallback to golden image if necessary

How can I check, in the grub.cfg file, the sha1sum of a file and compare it with a stored value?
If it is equal, the image can be loaded; if not, GRUB should fall back to the golden image.
I tried the following:
myLinuxBin='(hd0,msdos2)/bzImage.bin'
myLinuxBinSha1Sum='d15e1a64c0f5dd24052f0cb38b88c9f5d4c30a6c'
if [ "$(sha1sum ${myLinuxBin})" -eq "${myLinuxBinSha1Sum} ${myLinuxBin}" ]; then
set default="myRunImage"
else
set default="myGoldenImage"
fi
But I get the following error messages:
error: syntax error.
error: Incorrect command.
error: syntax error.
Any idea where the error is or how I can handle file check?
Thanks
This might be better if it were moved to the Linux/Unix forum, since it's shell-style scripting and GRUB.
Your problem seems to be primarily one of scripting syntax.
It looks like "$(sha1sum ${myLinuxBin})" is where you want to execute the program that returns the SHA1 hash of whatever file you give it; I believe your syntax here is what is being rejected.
It may be easier to dump the resulting hash value into a variable, then do a simple if statement such as if [ "$hash_value" = "$myLinuxBinSha1Sum" ] (string comparison uses =; -eq is only for integers).
You would need the correct syntax for executing the sha1sum executable and capturing the output string into a variable named hash_value.
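For what it's worth, grub.cfg is interpreted by GRUB's own script engine, which does not support $(...) command substitution, so the hash cannot easily be captured into a variable there. Assuming the hashsum module is available, one possible sketch (verify the exact option names against your GRUB manual; the checksum-list file name is illustrative) is to have GRUB verify the file against a stored list and branch on the command's exit status:
# bzImage.sha1 contains the line: d15e1a64c0f5dd24052f0cb38b88c9f5d4c30a6c  bzImage.bin
if hashsum --hash sha1 --check (hd0,msdos2)/bzImage.sha1 --prefix (hd0,msdos2)/; then
    set default="myRunImage"
else
    set default="myGoldenImage"
fi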

How to keep commands quiet in TCL?

How do I execute the set command without printing its output on the screen? I want to read a file without displaying its contents.
set a [open "giri.txt" r]
set b [read $a]
What you're observing is just the standard behaviour of an interactive Tcl shell: each Tcl command returns a result value, and a return code. If the Tcl shell is interactive (that is, its input and output streams are connected to a terminal), after executing each command, the string representation of the result value the command returned is printed, and then the prompt is shown again. If the shell is not interactive, no results are printed and no prompt is shown.
(On a side note, such behaviour is ubiquitous with interpreters — various Unix shells, Python and Ruby interpreters do the same thing.)
If you want to inhibit such printouts in an interactive session (comes in handy from time to time), a simple hack to achieve that is to chain the command you want to "silence" with a "silent" command (producing a value whose string representation is an empty string), for instance:
set a [open "giri.txt" r]; list
Here, the list command with no arguments returns an empty list, whose string representation is an empty string. In an interactive shell, this chain of commands will output literally nothing.
It bears repeating that such a hack might only ever be needed in an interactive session — do not use it in scripts.
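Applied to the snippet from the question, the same trick looks like this (a sketch; nothing is echoed, even at an interactive prompt):
set a [open "giri.txt" r]; list
set b [read $a]; list
close $a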
In Mentor ModelSim Tcl it is possible to do:
quietly set answer 42
Also in Mentor Questa:
help quietly
The quietly command turns off transcript echoing for the specified command.
You can turn this off in an interactive tclsh
set tcl_interactive false
but that will also turn off the prompt.

In Tcl, what is the equivalent of "set -e" in bash?

Is there a convenient way to specify in a Tcl script to immediately exit in case any error happens? Anything similar to set -e in bash?
EDIT I'm using software that implements Tcl as its scripting language. If, for example, I run the package command parseSomeFile fname and the file fname doesn't exist, it reports the problem but the script execution continues. Is there a way to stop the script there?
It's usually not needed; a command fails by throwing an error which makes the script exit with an informative message if not caught (well, depending on the host program: that's tclsh's behavior). Still, if you need to really exit immediately, you can hurry things along by putting a trace on the global variable that collects error traces:
trace add variable ::errorInfo write {puts stderr $::errorInfo;exit 1;list}
(The list at the end just traps the trace arguments so that they get ignored.)
Doing this is not recommended. Existing Tcl code, including all packages you might be using, assumes that it can catch errors and do something to handle them.
In Tcl, if you run into an error, the script will exit immediately unless you catch it. That means you don't need to specify the like of set -e.
Update
Ideally, parseSomeFile should have returned an error, but it looks like it does not. If you have control over it, fix it to return an error:
proc parseSomeFile {filename} {
    if {![file exists $filename]} {
        return -code error "ERROR: $filename does not exist"
    }
    # Do the parsing
    return 1
}
# Demo 1: parse existing file
parseSomeFile foo
# Demo 2: parse non-existing file
parseSomeFile bar
The second option is to check for file existence before calling parseSomeFile.
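A sketch of that second option (the file name variable is illustrative):
if {[file exists $fname]} {
    parseSomeFile $fname
} else {
    puts stderr "ERROR: $fname does not exist"
    exit 1
}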