Informatica: calling a workflow twice with different param files - only the last one is taken for both

I am trying to call workflow 'child' twice in sequence from workflow 'mother', using the batch command pmcmd with -paramfile and two different parameter files.
So essentially, workflow 'mother' consists of two command tasks in sequence, each of them calling a child workflow with its own parameter file.
command task 1:
pmcmd startworkflow -sv $PMCMD_INTSERVICE -d $PMCMD_DOMAIN -uv PMCMD_USER -pv PMCMD_PSWD -f CDWH -paramfile $PMRootDir/BWParam/parfile1.parm -wait wf_generic
command task 2:
pmcmd startworkflow -sv $PMCMD_INTSERVICE -d $PMCMD_DOMAIN -uv PMCMD_USER -pv PMCMD_PSWD -f CDWH -paramfile $PMRootDir/BWParam/parfile2.parm -wait wf_generic
However, the behaviour that we are seeing is that both 'child' workflows are started with parfile2.parm (obtained from log info).
If I update the filename in the last pmcmd command, the parameter file is updated for both.
Is there any way to fix this?
thanks
PS: Informatica Workflow Manager 9.6.1 HF3.

A workaround I found is to keep using the same param file, but to update the variable I need in a preceding command task:
sed -i 's/WF_INTERFACE=.*/WF_INTERFACE=NXN2WDWH_QSCRAD_/' $PMRootDir/BWParam/global_params.parm
Edit:
frustratingly, this has stopped working somehow - only the value set in the second task is used for both runs, and I have no idea why.
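For reference, one way to sidestep the shared file entirely - assuming each command task runs in a shell and can write under $PMRootDir/BWParam - is to give every invocation its own private copy of the parameter file. A rough sketch for the first command task (the copy name parfile_run1.parm is made up; paths, variable and workflow names are the ones used above):
PARFILE=$PMRootDir/BWParam/parfile_run1.parm
# take a private copy of the shared parameter file for this run
cp $PMRootDir/BWParam/global_params.parm $PARFILE
# set the interface variable for this run only
sed -i 's/WF_INTERFACE=.*/WF_INTERFACE=NXN2WDWH_QSCRAD_/' $PARFILE
# start the child workflow against the private copy and wait for it
pmcmd startworkflow -sv $PMCMD_INTSERVICE -d $PMCMD_DOMAIN -uv PMCMD_USER -pv PMCMD_PSWD -f CDWH -paramfile $PARFILE -wait wf_generic
The second command task would do the same with its own copy and its own WF_INTERFACE value, so neither run can clobber the file the other one reads.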

Output last command

I am using a function that I found in YADR which should insert the output of the last command.
# Use Ctrl-x,Ctrl-l to get the output of the last command
zmodload -i zsh/parameter
insert-last-command-output() {
    LBUFFER+="$(eval $history[$((HISTCMD-1))])"
}
zle -N insert-last-command-output
bindkey "^X^L" insert-last-command-output
For some reason, it does not seem to work when I press Ctrl-x Ctrl-l, but running
echo $(eval $history[$((HISTCMD-1))])
at the terminal does produce the output of the last command.
Running bindkey -M viins shows "^X^L" insert-last-command-output
as one of the entries. Therefore, the function is registered.
I don't really understand how the function works. I thought the variable LBUFFER holds the output of the last commands, but when I echo $LBUFFER it just returns the function code.
Can anyone help me get this working?
I finally found a solution.
I had been trying to use the shortcut inside tmux, which did not work; outside tmux, everything worked. It turns out that tmux would not accept the two-key shortcut, so I changed it to just Alt-l and everything works.
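(For what it's worth, LBUFFER is the part of the command line to the left of the cursor and is only meaningful while a ZLE widget is running; the widget simply re-executes the previous history entry with eval and appends that output at the cursor.) A rebinding along the lines described, assuming the terminal sends the usual Escape prefix for Alt, would look like:
# bind the widget to Alt-l instead of the two-key Ctrl-x Ctrl-l sequence
bindkey "^[l" insert-last-command-output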

I am using identical syntax in jq to change JSON values, yet one case works while the other turns bash interactive - how can I fix this?

I am trying to update a simple JSON file (it consists of one object with several key/value pairs) and I am using the same command, yet getting different results (sometimes the whole JSON is even wiped by the second command). The command I am trying is:
cat ~/Desktop/config.json | jq '.Option = "klay 10"' | tee ~/Desktop/config.json
This command perfectly replaces the value of the minerOptions key with "klay 10", my intended output.
Then I try to run the same process on the newly updated file (only the value of that one key has changed) and just get an interactive terminal with no result. ps unfortunately isn't helpful in showing what's going on. This is what I do after getting that first command to change the value of the key:
cat ~/Desktop/config.json | jq ‘.othOptions = "-epool etc-eu1.nanopool.org:14324 -ewal 0xc63c1e59c54ca935bd491ac68fe9a7f1139bdbc0 -mode 1"' | tee ~/Desktop/config.json
which I would have expected to replace the value of the othOptions key with the assigned result, just as the previous command did. I tried sending stdout directly to the file, but got no result there either. I even tried piping one more time, creating a temp file and then moving it over the original. All of these, unlike the first, identical command, just return > and absolutely no output; when I quit the process, the file holds the same value as before, not the new one.
What am I missing here that makes the same command fail with a different input? The key in the second command comes right after the first and has an identical structure; it's not creating an object or anything, just a key/value pair like the first. I thought it could be tee, but any other approach, such as redirecting stdout to the file, produces the same constant >, waiting for input.
I genuinely looked everywhere I could online for why this could be happening before resorting to SE; it's giving me such a headache over something I thought should be simple.
As @GordonDavisson pointed out, using tee to overwrite the input file is a well-known recipe for disaster (see e.g. the jq FAQ). If you absolutely positively want to overwrite the file unconditionally, then you might want to consider using sponge, as in
jq ... config.json | sponge config.json
or more safely:
cp -p config.json config.json.bak && jq ... config.json | sponge config.json
For further details about this and other options, search for 'sponge' in the FAQ. (Incidentally, the bare > prompt you are seeing comes from the second command itself: it uses a typographic quote ‘ before .othOptions instead of a plain single quote, so the shell is left waiting for a closing quote.)
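If sponge (from moreutils) is not installed, the same effect can be had with a temporary file and a rename. A minimal sketch using the paths from the question (the '...' stands for the real assignment):
jq '.othOptions = "..."' ~/Desktop/config.json > ~/Desktop/config.json.tmp && mv ~/Desktop/config.json.tmp ~/Desktop/config.json
Unlike the tee pipeline, the original file is only replaced after jq has finished reading it and exited successfully.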

Complex stdout check in Ansible

I run a job on a remote server with Ansible. The job writes output to stdout, and sometimes errors show up there. The error text is in the form of
#ERROR FMM0129E The following error was returned by the vSphere(TM) API: 'Cannot complete login due to an incorrect user name or password.'.
The thing is that some of these errors can safely be ignored, and only those that are not in my false-positive list should raise a failure.
My question is, can this be done in a pure Ansible way?
The only thing that comes to mind is a simple failed_when check which, in this case, falls short. I am thinking that this kind of "complex" output checking should be done outside Ansible, invoking a Python / shell / etc. script to help.
If you are remotely executing a shell command anyway, then there's no reason why you couldn't wrap it in a shell script that returns a non-zero status code for the things you care about, and then simply execute that via the script module.
example.sh
#!/bin/bash
randomInt=$[ 1 + $[ RANDOM % 10 ]]
echo $randomInt
if [ $randomInt == 1 ]; then
    exit 1
else
    exit 0
fi
And then use it like this in your playbook:
- name: run example.sh
  script: example.sh
Ansible will automatically treat any non-zero return code as the task failing.
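Applied to the output described in the question, such a wrapper could look roughly like the sketch below; the job command, log file and whitelist are placeholders, with whitelist.txt holding one grep pattern per line for errors that may be ignored (e.g. FMM0129E):
#!/bin/bash
# run_job.sh - hypothetical wrapper: run the real job, keep its output,
# and fail only on #ERROR lines that are not on the false-positive list
set -o pipefail
/path/to/real_job "$@" | tee job.log
job_rc=$?
# fail if any #ERROR line matches none of the whitelist patterns
if grep '#ERROR' job.log | grep -v -q -f whitelist.txt; then
    exit 1
fi
exit $job_rc
If you run it via the script module exactly as in the example above, Ansible will then fail the task only for the errors you actually care about.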
Instead of failed_when you could use ignore_errors: true, which would let you pass the failing task and forward its stdout to another task. But I would not recommend this, since in my opinion a task should never report a failed state on purpose. But if you feel this is an option for you, there is even a way to reset the error counter so the Ansible stats at the end are correct.
- some: task
  register: some_result
  ignore_errors: true

- name: Reset errors after intentional fail
  meta: clear_host_errors
  when: some_result | failed

- another: task
  check: "{{ some_result.stdout }}"
  when: some_result | failed
The last task then would check your stdout in a custom script or whatever you have and should report a failed state itself (return code != 0).
As far as I know the clear_host_errors feature is as yet undocumented and the commit is about a month old, so I guess it will only be available in Ansible 2.0.1.
Another idea would be to wrap your task inside a script which checks the output, or to pipe it to that script. That obviously only works if you run a shell command, not with any other Ansible modules.
Other than those two options I don't think there is anything else available.

GNU make call function with multiple arguments and multiple commands

I am trying to write a GNU make call function (example below) which has multiple shell commands to execute, such that it can be called with different arguments.
shell_commands = $(shell echo $(1); ls -ltr $(2))

try:
    $(call shell_commands,$(FILE1),$(FILE2))
1) Is the above the correct way to write a call function with multiple commands, i.e. separating them with a semi-colon? To make it readable, I write my targets as shown below. Is there a similar way to write a call function?
shell_commands:
    echo $(1)
    ls -ltr $(2)
2) I get this error from make when I execute make -B try. It looks like it is trying to execute /home/user/file1. But why?
make: execvp: /home/user/file1: Permission denied
make: *** [try] Error 127
3) Is it possible to pass variable number of parameters to a call function? Like pass in just the second parameter and not the first one.
$(call shell_commands,,$(FILE2))
I tried googling, searching on SO, and looking on gnu.org, but did not find a solution. I would appreciate any answers or pointers to resources that document the call function with multiple optional arguments and commands.
Question 1: No, this is not right. The shell make function should NEVER be used inside a recipe: the recipe is already running in the shell, so why would you run another shell? It's just confusing. Second, it's generally recommended to use && between multiple commands in a recipe, so that if the first command fails the entire command will immediately fail, rather than continuing on and perhaps succeeding. Of course, that is not always correct either, it depends on what you're trying to do.
Question 2: This happens because the shell make function is like backticks in the shell: it expands to the output printed by the shell command it runs. Your shell command that make runs is:
echo $(1); ls -ltr $(2)
(where, one assumes, $1 expands to /home/user/file1) which prints the string /home/user/file1. After the expansion, that string is substituted into the recipe and make tries to run that recipe, giving the error you see above.
You want this, most likely:
shell_commands = echo $(1) && ls -ltr $(2)

try:
    $(call shell_commands,$(FILE1),$(FILE2))
Now the call expands to the actual text, not an invocation of make's shell function, and then that text is run as the recipe.
Question 3: Sure, just using empty parameters means that the variable $1 (in this case) expands to the empty string.
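Putting it together, a hypothetical run of the fixed makefile (with made-up values for FILE1 and FILE2) expands the call into a single recipe line, which make echoes before executing it:
$ make try FILE1=/home/user/file1 FILE2=/home/user
echo /home/user/file1 && ls -ltr /home/user
/home/user/file1
(listing of /home/user follows)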

How can I wrap qrsh?

I'm trying to write a wrapper for qrsh, the Oracle Grid Engine equivalent to rsh, and am having trouble identifying the command given to it. Consider the following example:
qrsh -cwd -V -now n -b y -N cvs -verbose -q some.q -p -98 cvs -Q log -N -S -d2012-04-09 14:02:08 GMT<2012-04-11 21:53:41 GMT -b
The command in this case starts at cvs. My wrapper needs to be general purpose, so I can't look specifically for cvs. Any ideas on how to identify it? One thought is to look for executable commands starting from the end and working backwards, which would work in this case but isn't robust, since "cvs" could also appear in an option to cvs itself. The only robust option I can come up with is to fully implement the qrsh option parser, but I'm not thrilled about that since it would need to be updated with qrsh updates and is complicated.
One option is to set QRSH_WRAPPER to echo and run qrsh once. However, this then requires two jobs to be issued instead of one, adding latency and wasting a slot.
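For what it's worth, a rough sketch of the "scan backwards from the end" heuristic mentioned in the question is shown below; it is purely illustrative and inherits the weakness already described (an option value, or an argument of the command itself, that happens to name an executable will be misidentified):
#!/bin/bash
# Hypothetical qrsh wrapper fragment: walk the arguments from the end toward
# the front and treat the first one that resolves to an executable on PATH
# as the start of the wrapped command.
args=("$@")
cmd_start=-1
for (( i=${#args[@]}-1; i>=0; i-- )); do
    if command -v "${args[i]}" >/dev/null 2>&1; then
        cmd_start=$i
        break
    fi
done
if (( cmd_start >= 0 )); then
    echo "qrsh options: ${args[*]:0:cmd_start}"
    echo "command:      ${args[*]:cmd_start}"
else
    echo "could not identify the command" >&2
fi
For the example above this would report the command as starting at the second cvs, provided none of the later arguments (log, GMT, and so on) happen to name executables on PATH.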