How to override the "Binding Operation" property in a SOAPRequest node in IIB? - esb

I am consuming a web service with multiple operations in the IBM Integration Toolkit. How can I change the binding operation property dynamically at run time so that the same SOAPRequest node can be used for all operations?

Set the operation in the Local Environment:
SET OutputLocalEnvironment.Destination.SOAP.Request.Operation = 'myOperation';

Yes, setting the value at run time can be achieved in ESQL in a Compute node placed before the SOAPRequest node.
Just assign the operation you need to the field below:
SET OutputLocalEnvironment.Destination.SOAP.Request.Operation = 'requiredOperation';
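If the flow uses a JavaCompute node instead of ESQL, the same local-environment field can be populated in Java. The sketch below is untested; the class name, terminal name and "myOperation" are only placeholders:

import com.ibm.broker.javacompute.MbJavaComputeNode;
import com.ibm.broker.plugin.MbElement;
import com.ibm.broker.plugin.MbException;
import com.ibm.broker.plugin.MbMessage;
import com.ibm.broker.plugin.MbMessageAssembly;

public class SetSoapOperation extends MbJavaComputeNode {
    @Override
    public void evaluate(MbMessageAssembly inAssembly) throws MbException {
        // Take a modifiable copy of the local environment.
        MbMessage localEnv = new MbMessage(inAssembly.getLocalEnvironment());

        // Build Destination.SOAP.Request.Operation and set the operation name.
        MbElement request = localEnv.getRootElement()
                .createElementAsLastChild(MbElement.TYPE_NAME, "Destination", null)
                .createElementAsLastChild(MbElement.TYPE_NAME, "SOAP", null)
                .createElementAsLastChild(MbElement.TYPE_NAME, "Request", null);
        request.createElementAsLastChild(MbElement.TYPE_NAME_VALUE, "Operation", "myOperation");

        // Propagate with the updated local environment.
        MbMessageAssembly outAssembly = new MbMessageAssembly(
                inAssembly, localEnv, inAssembly.getExceptionList(), inAssembly.getMessage());
        getOutputTerminal("out").propagate(outAssembly);
    }
}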

Related

Apache Camel equivalent for Spring @Transactional(readOnly=true)

I am trying to use Java Streams with JPA in an Apache Camel route with MySQL. I have seen a few resources, and they mention that the following three properties are needed for streams to work properly with MySQL:
1. Forward-only resultset
2. Read-only statement
3. Fetch-size set to Integer.MIN_VALUE
I have tried setting the 2nd and 3rd properties in the JPA repository using query hints:
@QueryHints(value = {
    @QueryHint(name = HINT_FETCH_SIZE, value = "" + Integer.MIN_VALUE),
    @QueryHint(name = HINT_READONLY, value = "true")
})
Looks like the fetch size is working, but the 2nd property is not, as I am still facing the error below:
Caused by: java.sql.SQLException: Streaming result set com.mysql.jdbc.RowDataDynamic@54b12700 is still active. No statements may be issued when any streaming result sets are open and in use on a given connection. Ensure that you have called .close() on any active streaming result sets before attempting more queries.
I have made the route transactional using transacted(), which is equivalent to Spring @Transactional. But I am not sure what the equivalent of @Transactional(readOnly=true) is. Any leads?
from("direct:" + ROUTE_ID)
.routeId(ROUTE_ID)
.log("Start the processing")
.transacted()
.bean(Processor.class)
.to("log:ProcessName?showAll=true");
I have used this blog as a reference.
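For what it's worth, one direction that might be worth exploring (an unverified sketch, not a confirmed fix for the streaming error) is to back transacted() with a Spring transaction policy whose definition is read-only. The bean name readOnlyPolicy and the injected transactionManager below are assumptions:

import org.apache.camel.spring.spi.SpringTransactionPolicy;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.support.DefaultTransactionDefinition;
import org.springframework.transaction.support.TransactionTemplate;

@Configuration
public class ReadOnlyTxConfig {

    // Mirrors @Transactional(readOnly = true) as a Camel transaction policy.
    @Bean
    public SpringTransactionPolicy readOnlyPolicy(PlatformTransactionManager transactionManager) {
        DefaultTransactionDefinition definition = new DefaultTransactionDefinition();
        definition.setReadOnly(true);

        SpringTransactionPolicy policy = new SpringTransactionPolicy();
        policy.setTransactionManager(transactionManager);
        policy.setTransactionTemplate(new TransactionTemplate(transactionManager, definition));
        return policy;
    }
}

The route would then reference the policy by bean id, i.e. .transacted("readOnlyPolicy") instead of the plain .transacted().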

How can I find out if a SAP MII transaction provides an output property of a given name?

In a SAP MII transaction, I use a Dynamic Transaction Call to call a subtransaction. I would like to check if this transaction provides an output parameter of a given name. (Not if its value exists but if the property itself is available.)
Is there any way to do this apart from blindly linking to the expected property, defining ThrowOnLinkError = true and catching a possible exception?
Sure you can: use a Catch block, then add an Assignment block in which you assign a local variable (with ThrowOnLinkError = true), or log it by adding a Tracer block, depending on how you want to receive the error.

Atomicity of the Add operation in Couchbase (Java SDK)

Using Couchbase server 2.2 with Java SDK 1.4.4.
The documentation of MemcachedClient::add(String key, int exp, Object o) inherited by CouchbaseClient states: "Add an object to the cache (using the default transcoder) iff it does not exist already".
I haven't found any mention of the atomicity of this operation.
Will asynchronous calls keep the initial value of the added key, or is this a non-atomic wrapper for a get followed by a set?
Thanks.
add (like most Couchbase operations) is atomic: the cluster will atomically check whether the specified key exists, and only if it does not will it set the key to the given value.
If the key does exist you'll get an error back (EEXISTS or the Java-native equivalent).
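As an illustration of how this surfaces in the 1.4.x Java SDK (a minimal sketch; client is assumed to be an already-connected CouchbaseClient and the key/value are made up), the Boolean returned by the add future tells you whether the key was created or already existed:

import com.couchbase.client.CouchbaseClient;
import net.spy.memcached.internal.OperationFuture;

public class AddExample {
    static void addIfAbsent(CouchbaseClient client) throws Exception {
        OperationFuture<Boolean> result = client.add("user::1001", 0, "initial value");
        if (result.get()) {
            // Key did not exist: the value was stored atomically.
            System.out.println("stored");
        } else {
            // Key already existed: the stored value is left untouched.
            System.out.println("add rejected: " + result.getStatus().getMessage());
        }
    }
}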

Returning values from InputFormat via the Hadoop Configuration object

Consider a running Hadoop job, in which a custom InputFormat needs to communicate ("return", similarly to a callback) a few simple values to the driver class (i.e., to the class that has launched the job), from within its overridden getSplits() method, using the new mapreduce API (as opposed to mapred).
These values should ideally be returned in-memory (as opposed to saving them to HDFS or to the DistributedCache).
If these values were only numbers, one could be tempted to use Hadoop counters. However, in numerous tests counters do not seem to be available at the getSplits() phase and anyway they are restricted to numbers.
An alternative could be to use the Configuration object of the job, which, as the source code reveals, should be the same object in memory for both the getSplits() and the driver class.
In such a scenario, if the InputFormat wants to "return" a (say) positive long value to the driver class, the code would look something like:
// In the custom InputFormat.
public List<InputSplit> getSplits(JobContext job) throws IOException
{
...
long value = ... // A value >= 0
job.getConfiguration().setLong("value", value);
...
}
// In the Hadoop driver class.
Job job = ... // Get the job to be launched
...
job.submit(); // Start running the job
...
while (!job.isComplete())
{
...
if (job.getConfiguration().getLong("value", -1) >= 0)
{
...
}
else
{
continue; // Wait for the value to be set by getSplits()
}
...
}
The above works in tests, but is it a "safe" way of communicating values?
Or is there a better approach for such in-memory "callbacks"?
UPDATE
The "in-memory callback" technique may not work in all Hadoop distributions, so, as mentioned above, a safer way is, instead of saving the values to be passed back in the Configuration object, create a custom object, serialize it (e.g., as JSON), saved it (in HDFS or in the distributed cache) and have it read in the driver class. I have also tested this approach and it works as expected.
Using the configuration is a perfectly suitable solution (admittedly for a problem I'm not sure I understand), but once the job has actually been submitted to the job tracker, you will not be able to amend this value (client side or task side) and expect the change to be seen on the opposite side of the comms (setting configuration values in a map task, for example, will not be persisted to the other mappers, nor to the reducers, nor will it be visible to the job tracker).
So communicating information back from within getSplits to your client polling loop (to see when the job has actually finished defining the input splits) is fine in your example.
What's your greater aim or use case for using this?

How to resolve naming conflicts when running multiple instances of a program in VxWorks

I need to run multiple instances of a C program in VxWorks (VxWorks has a global namespace). The problem is that the C program defines global variables (which are intended for use by a specific instance of that program) which conflict in the global namespace. I would like to make minimal changes to the program in order to make this work. All ideas welcomed!
Regards
By the way ... This isn't a good time to mention that global variables are not best practice!
The easiest thing to do would be to use task Variables (see taskVarLib documentation).
When using task variables, the variable is specific to the task now in context. On a context switch, the current variable is stored and the variable for the new task is loaded.
The caveat is that a task variable can only be a 32-bit number.
Each global variable must also be added independently (via its own call to taskVarAdd?) and it also adds time to the context switch.
Also, you would NOT be able to share the global variable with other tasks.
You can't use task variables with ISRs.
Another Possibility:
If you are using Vxworks 6.x, you can make a Real Time Process application.
This follows a process model (similar to Unix/Windows) where each instance of your program has its own global memory space, independent of any other instance.
I had to solve this when integrating two third-party libraries from the same vendor. Both libraries used some of the same symbol names, but they were not compatible with each other. Because these were coming from a vendor, we couldn't afford to search & replace. And task variables were not applicable either since (a) the two libs might be called from the same task and (b) some of the dupe symbols were functions.
Assume we have app1 and app2, linked, respectively, to lib1 and lib2. Both libs define the same symbols so must be hidden from each other.
Fortunately (if you're using GNU tools) objcopy allows you to change the type of a variable after linking.
Here's a sketch of the solution; you'll have to modify it for your needs.
First, perform a partial link for app1 to bind it to lib1. Here, I'm assuming that you've already partially linked *.o in app1 into app1_tmp1.o.
$(LD_PARTIAL) $(LDFLAGS) -Wl,-i -o app1_tmp2.o app1_tmp1.o $(APP1_LIBS)
Then, hide all of the symbols from lib1 in the tmp2 object you just created to generate the "real" object for app1.
objcopymips `nmmips $(APP1_LIBS) | grep ' [DRT] ' | sed -e's/^[0-9A-Fa-f]* [DRT] /-L /'` app1_tmp2.o app1.o
Repeat this for app2. Now you have app1.o and app2.o ready to link into your final application without any conflicts.
The drawback of this solution is that you don't have access to any of these symbols from the host shell. To get around this, you can temporarily turn off the symbol hiding for one or the other of the libraries for debugging.
Another possible solution would be to put your application's global variables in a static structure. For example:
From:
int global1;
int global2;

int someApp()
{
    global2 = global1 + 3;
    ...
}
TO:
struct appGlobStruct {
    int global1;
    int global2;
} appGlob;   /* single global instance holding what used to be separate globals */

int someApp()
{
    appGlob.global2 = appGlob.global1 + 3;
}
This simply turns into a search & replace in your application code. No change to the structure of the code.