Complex stdout check in Ansible - output

I run a job on a remote server with Ansible. The job produces output on stdout, and sometimes errors show up in it. The error text is in the form of
#ERROR FMM0129E The following error was returned by the vSphere(TM) API: 'Cannot complete login due to an incorrect user name or password.'.
The thing is that some of these errors can safely be ignored, and only those that are not in my false-positive list should raise a failure.
My question is, can this be done in a pure Ansible way?
The only thing that comes to mind is a simple failed_when check, which in this case falls short. I am thinking that this kind of "complex" output checking should be done outside of Ansible, invoking a Python / shell / etc. script to help.

If you are remotely executing a shell command anyway, then there's no reason why you couldn't wrap it in a shell script that returns a non-zero status code for the things you care about, and then simply execute that via the script module.
example.sh
#!/bin/bash
# Pick a random integer between 1 and 10
randomInt=$(( 1 + RANDOM % 10 ))
echo "$randomInt"
# Fail roughly one run in ten
if [ "$randomInt" -eq 1 ]; then
    exit 1
else
    exit 0
fi
And then use it like this in your playbook:
- name: run example.sh
  script: example.sh
Ansible will automatically treat any non-zero return code as the task failing.
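Applied to the question, the wrapper script could capture the job's output and exit non-zero only when an error line is not in the false-positive list. A minimal sketch, assuming a hypothetical run_job.sh and a false_positives.txt file of grep patterns (neither is from the original post):

#!/bin/bash
# check_job.sh - run the job, echo its output, and fail only on
# error lines that are NOT matched by the false-positive patterns.
output=$(./run_job.sh 2>&1)          # hypothetical job command
echo "$output"

# Keep only error lines, drop the whitelisted ones, and fail if anything is left.
real_errors=$(echo "$output" | grep '#ERROR' | grep -v -f false_positives.txt)

if [ -n "$real_errors" ]; then
    echo "Unexpected errors found:" >&2
    echo "$real_errors" >&2
    exit 1
fi
exit 0

Executed via the script module as above, the task then fails exactly when a non-whitelisted error appears.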

Instead of failed_when you could use ignore_errors: true, which would let the failing task pass and forward its stdout to another task. But I would not recommend this, since in my opinion a task should never report a failed state intentionally. If you feel this is an option for you, though, there is even a way to reset the error counter so that the Ansible stats at the end are correct.
- some: task
  register: some_result
  ignore_errors: true

- name: Reset errors after intentional fail
  meta: clear_host_errors
  when: some_result | failed

- another: task
  check: "{{ some_result.stdout }}"
  when: some_result | failed
The last task then would check your stdout in a custom script or whatever you have and should report a failed state itself (return code != 0).
As far as I know the clear_host_errors feature is as yet undocumented, and the commit is about a month old, so I guess it will only be available in Ansible 2.0.1.
Another idea would be to wrap your task inside a script which checks the output, or to pipe the output to such a script, as sketched below. That obviously only works if you run a shell command and not with any other Ansible modules.
Other than those two options I don't think there is anything else available.
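For the piping variant, the shell task itself can feed the job's output through a checking script, so the task's return code is whatever the checker exits with (in a plain pipeline the shell's exit status is that of the last command). A sketch, with run_job.sh and check_output.sh as hypothetical names; check_output.sh would read stdin and exit non-zero only for errors outside the false-positive list:

- name: run job and let the output checker decide success or failure
  shell: ./run_job.sh 2>&1 | ./check_output.sh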

Related

Check file for corruption and fall back to golden image if necessary

How can I check, in the grub.cfg file, the sha1sum of a file and compare it with a stored value?
If they are equal the image can be loaded; if not, it should fall back to the golden image.
I tried the following:
myLinuxBin='(hd0,msdos2)/bzImage.bin'
myLinuxBinSha1Sum='d15e1a64c0f5dd24052f0cb38b88c9f5d4c30a6c'
if [ "$(sha1sum ${myLinuxBin})" -eq "${myLinuxBinSha1Sum} ${myLinuxBin}" ]; then
    set default="myRunImage"
else
    set default="myGoldenImage"
fi
But I get the error message
error: syntax error.
error: Incorrect command.
error: syntax error.
Any idea where the error is, or how I can handle the file check?
Thanks
This might be better if it is moved to the Linux/Unix forum, since it's Bash scripting and GRUB.
Your problem seems to be primarily Bash scripting syntax.
It looks like "$(sha1sum ${myLinuxBin})" is where you want to execute the program that returns the SHA1 hash of whatever you pass it. I believe your syntax here is wrong.
It may be easier to dump the resulting hash value into a variable, then do a simple Bash if statement such as if [ "$hash_value" = "$myLinuxBinSha1Sum" ] (a hash is a string, so = rather than -eq is the right test).
You would need the correct Bash syntax for executing the sha1sum executable and dumping the output string into a Bash variable named hash_value.
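A minimal Bash sketch of that approach (the path is illustrative, and the expected hash is the one from the question). Note this is plain Bash: as far as I know GRUB's own script language does not support $(...) command substitution, which is likely where the syntax errors come from, so inside grub.cfg you would have to use GRUB's built-in commands instead (its sha1sum/hashsum has a --check mode):

#!/bin/bash
myLinuxBin='/boot/bzImage.bin'
myLinuxBinSha1Sum='d15e1a64c0f5dd24052f0cb38b88c9f5d4c30a6c'

# sha1sum prints "<hash>  <filename>"; keep only the hash field.
hash_value=$(sha1sum "${myLinuxBin}" | awk '{print $1}')

if [ "$hash_value" = "$myLinuxBinSha1Sum" ]; then
    echo "checksum OK - boot myRunImage"
else
    echo "checksum mismatch - fall back to myGoldenImage"
fi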

Informatica call workflow twice with different param files - only last one taken for both

I am trying to call workflow 'child' twice in sequence from workflow 'mother', using the pmcmd command with -paramfile and two different parameter files.
So essentially, workflow 'mother' consists of two command tasks in sequence, each of them calling the child workflow with its own parameter file.
command task 1:
pmcmd startworkflow -sv $PMCMD_INTSERVICE -d $PMCMD_DOMAIN -uv PMCMD_USER -pv PMCMD_PSWD -f CDWH -paramfile $PMRootDir/BWParam/parfile1.parm -wait wf_generic
command task 2:
pmcmd startworkflow -sv $PMCMD_INTSERVICE -d $PMCMD_DOMAIN -uv PMCMD_USER -pv PMCMD_PSWD -f CDWH -paramfile $PMRootDir/BWParam/parfile2.parm -wait wf_generic
However, the behaviour that we are seeing is that both 'child' workflows are started with parfile2.parm (obtained from log info).
If I update the filename in the last pmcmd command, the parameter file is updated for both.
Is there any way to fix this?
Thanks.
PS: Informatica Workflow Manager 9.6.1 HF3.
A solution I found is to keep using just one parameter file, but to update the needed variable in a preceding command task:
sed -i 's/WF_INTERFACE=.*/WF_INTERFACE=NXN2WDWH_QSCRAD_/' $PMRootDir/BWParam/global_params.parm
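As a sketch, each command task in workflow 'mother' would then rewrite the shared parameter file and start the child workflow against it (the WF_INTERFACE values are illustrative; the pmcmd options are the ones from the post):

# command task 1
sed -i 's/WF_INTERFACE=.*/WF_INTERFACE=INTERFACE_ONE_/' $PMRootDir/BWParam/global_params.parm
pmcmd startworkflow -sv $PMCMD_INTSERVICE -d $PMCMD_DOMAIN -uv PMCMD_USER -pv PMCMD_PSWD -f CDWH -paramfile $PMRootDir/BWParam/global_params.parm -wait wf_generic

# command task 2
sed -i 's/WF_INTERFACE=.*/WF_INTERFACE=INTERFACE_TWO_/' $PMRootDir/BWParam/global_params.parm
pmcmd startworkflow -sv $PMCMD_INTSERVICE -d $PMCMD_DOMAIN -uv PMCMD_USER -pv PMCMD_PSWD -f CDWH -paramfile $PMRootDir/BWParam/global_params.parm -wait wf_generic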
Edit:
Frustratingly, it stopped working somehow: only the value set in the second task is used for both, and I have no idea why.

GNU make call function with multiple arguments and multiple commands

I am trying to write a GNU make call function (example below) which has multiple shell commands to execute, such that it can be called with different arguments.
shell_commands = $(shell echo $(1); ls -ltr $(2))
try:
	$(call shell_commands,$(FILE1),$(FILE2))
1) Is the above the correct way to write a call function with multiple commands, using a semi-colon to separate them? To make it readable, I write my targets as shown below. Is there a similar way to write a call function?
shell_commands:
	echo $(1)
	ls -ltr $(2)
2) I get this error from make when I execute make -B try. It looks like it is trying to execute /home/user/file1. But why?
make: execvp: /home/user/file1: Permission denied
make: *** [try] Error 127
3) Is it possible to pass a variable number of parameters to a call function? For example, pass in just the second parameter and not the first one.
$(call shell_commands,,$(FILE2))
I tried googling, searching on SO, and looking on gnu.org, but I did not find any solutions. I will appreciate any answers or pointers to resources that document the call function with multiple optional arguments and commands.
Question 1: No, this is not right. The shell make function should NEVER be used inside a recipe: the recipe is already running in the shell, so why would you run another shell? It's just confusing. Second, it's generally recommended to use && between multiple commands in a recipe, so that if the first command fails the entire command will immediately fail, rather than continuing on and perhaps succeeding. Of course, that is not always correct either, it depends on what you're trying to do.
Question 2: This happens because the shell make function is like backticks in the shell: it expands to the output printed by the shell command it runs. Your shell command that make runs is:
echo $(1); ls -ltr $(2)
(where, one assumes, $1 expands to /home/user/file1), which prints a string beginning with /home/user/file1. After the expansion, that string is substituted into the recipe, and make tries to run it as a command, giving the error you see above.
You want this, most likely:
shell_commands = echo $(1) && ls -ltr $(2)

try:
	$(call shell_commands,$(FILE1),$(FILE2))
Now the call expands to the actual text, not an invocation of make's shell function, and then that text is run as the recipe.
Question 3: Sure, just using empty parameters means that the variable $1 (in this case) expands to the empty string.
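If you also want the readable multi-line form asked about in question 1, one way (a sketch, not part of the original answer) is a canned recipe defined with define, which can still be expanded with call; make runs each line of the expansion as a separate recipe line:

# Canned recipe with two parameters
define shell_commands
echo $(1)
ls -ltr $(2)
endef

try:
	$(call shell_commands,$(FILE1),$(FILE2))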

Korn Shell 93: Function Sends Non-null Value, But Calling Function Gets Null Value

Readers:
I've spent a few days investigating the following incidents without successfully identifying the cause. I'm writing in regard to ksh scripts I wrote to the ksh88 standards, which have run for years on many HP-UX/PA-RISC and Solaris/SPARC platforms, and even a few Linux/x86_64 platforms ... until this week. Upon running the scripts on CentOS 6.4/x86_64 with Korn shell "Version AJM 93u+ 2012-08-01", non-null values being returned to the Caller by some functions are retrieved by the Caller as null values.
Specifically, in the edited excerpts following, the variable ToDo always contains a value in fSendReqToSvr prior to fSendReqToSvr returning. When fSendReqToSvr returns in fGetFileStatusFromSvr, ToDo is assigned a null value. The context of this script is as a child invoked by another ksh script run from cron. I've included the code reassigning stdout and stderr on the chance this is somehow significant.
What don’t I understand?
OS:
CentOS-6.4 (x86-64) Development Installation
Korn Shell:
Version: AJM 93u+ 2012-08-01
Package: Ksh.x86_64 20120801-10.el6
...
function fLogOpen
{
    ...
    exec 3>$1   #C# Assigned Fd 3 to a log file
    #C# stdout and stderr are redirected to log file as insurance that
    #C# no "errant" output from script (1700 lines) "escapes" from script.
    #C# stdout and stderr restored in fLogClose.
    exec 4>&1
    exec 1>&3
    exec 5>&2
    exec 2>&3
    ...
}
...
#C# Invokes curl on behalf of caller and evaluates
function fSendReqToSvr
{
    typeset Err=0 ... \
        ToDo=CONTINUE ... \
        CL="$2" ...
    ...
    curl $CL > $CurlOutFFS 2>&1 &
    gCurlPId=$!
    while (( iSecsLeft > 0 )) ; do
        ...
        #C# Sleep N secs, check status of curl with "kill -0 $gCurlPId"
        #C# and if curl exited, get return code from "wait $gCurlPId".
        ...
    done
    ...
    #C# Evaluate curl return code and contents of CurlOutFFS file to
    #C# determine what to set ToDo to.
    ...
    print -n -- "$ToDo"   #C# ToDo confirmed to always have a value here
    return $Err
}
...
function fGetFileStatusFromSvr
{
    typeset Err=0 ... \
        ToDo=CONTINUE ... \
        ...
    ...
    ToDo=$( fSendReqToSvr "$iSessMaxSecs" "$CurlCmdLine" )
    Err=$?
    #C# ToDo contains null here
    ...
    return $Err
}
One problem here is that we don't see the code responsible for the ToDo result.
If this worked properly with ksh88 before, you may have a problem if you don't have good tests for the individual functions, as ksh88 and ksh93 have many subtle and not so subtle differences.
Paradoxically, ksh93 is easier to drop in as a replacement for /bin/sh (The mythical Bourne shell :-) than for ksh88.
The reason for this is that ksh88 introduced extensions to the shell that were further enhanced and changed in ksh93.
One example that may touch on your question is arithmetic, which is limited to integer arithmetic in ksh88 and got extended to floating point in ksh93.
Any utility expecting integer values can be fed results from arithmetic expressions in ksh88.
It may choke on the floating point results returned in ksh93.
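A small illustration of that difference (not from the original post):

# Integer operands: same result in ksh88 and ksh93
echo $(( 10 / 4 ))     # prints 2

# Floating-point operands: accepted by ksh93, an arithmetic syntax error in ksh88
echo $(( 10.0 / 4 ))   # prints 2.5 in ksh93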
Please supply a proper code sample that shows how the ToDo value is determined.

In Tcl, what is the equivalent of "set -e" in bash?

Is there a convenient way to specify in a Tcl script to immediately exit in case any error happens? Anything similar to set -e in bash?
EDIT I'm using software that implements Tcl as its scripting language. If, for example, I run the package command parseSomeFile fname and the file fname doesn't exist, it reports the problem but the script execution continues. Is there a way to stop the script there?
It's usually not needed; a command fails by throwing an error which makes the script exit with an informative message if not caught (well, depending on the host program: that's tclsh's behavior). Still, if you need to really exit immediately, you can hurry things along by putting a trace on the global variable that collects error traces:
trace add variable ::errorInfo write {puts stderr $::errorInfo;exit 1;list}
(The list at the end just traps the trace arguments so that they get ignored.)
Doing this is not recommended. Existing Tcl code, including all packages you might be using, assumes that it can catch errors and do something to handle them.
In Tcl, if you run into an error, the script will exit immediately unless you catch it. That means you don't need to specify the likes of set -e.
Update
Ideally, parseSomeFile should have returned an error, but it looks like it does not. If you have control over it, fix it to return an error:
proc parseSomeFile {filename} {
    if {![file exists $filename]} {
        return -code error "ERROR: $filename does not exist"
    }
    # Do the parsing
    return 1
}

# Demo 1: parse existing file
parseSomeFile foo

# Demo 2: parse non-existing file
parseSomeFile bar
The second option is to check for file existence before calling parseSomeFile.
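A minimal sketch of that second option (the file name fname is just a placeholder):

set fname "data.txt"
if {![file exists $fname]} {
    puts stderr "ERROR: $fname does not exist"
    exit 1
}
parseSomeFile $fname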