How to find the names of services used in a transformer - webMethods

I am trying to print the names of services invoked in a FlowService using a Java program. I am able to print the names of services using the code below:
import com.wm.app.b2b.server.InvokeState;
...
InvokeState invkState = InvokeState.getCurrentState();
// the line below prints the names of all services invoked in the flow service
System.out.println(invkState.getCallStack());
However, when I use transformers and try to invoke a custom service (one I created), the above code doesn't print the services invoked via the transformer. If I happen to use a pub service instead, it displays the service name, but not in the case of a custom service.
Any inputs would be highly appreciated.

After I tried mapping my transformer's output to the pipeline, my code started displaying the service names that were invoked using the transformer.
Figured out that transformers are only invoked when their outputs are mapped to the outgoing pipeline of the step.


How do I pass JSON as a parameter to AWS Lambda

I have a CloudFormation template that consists of a Lambda function that reads messages from an SQS queue.
The Lambda function will read each message from the queue and transform it using a JSON template (which I want to be injected externally).
I will deploy different stacks for different products, and for each product I will provide a different JSON template to be used for the transformation.
I have different options but couldn't decide which one is better:
1. I can write all JSON files under the project, pack them together, and pass the related JSON file name as a parameter to the Lambda.
2. I can store the JSON files on S3 and pass the S3 URL to the Lambda so I can read them at runtime.
3. I can store the JSON files in DynamoDB and read them from there, using the same approach as option 2.
The first one seems like a better approach as I don't need to read from an external source on every Lambda execution, but I will need to pack all templates together.
The last two are cleaner approaches but require an external call to read the JSON on every invocation.
Another approach could be (I'm not sure if it is possible) to inject a JSON file into the Lambda on deploy from an S3 bucket or something similar, and have the Lambda function read it like an environment variable.
As you can see from the CloudFormation documentation, Lambda environment variables can only be a Map of Strings, so the actual value you pass to the function as an environment variable must be a String. You could pass your JSON as a string, but the problem is that the maximum size for all environment variables combined is 4 KB.
If your templates are bigger and you don't want to call S3 or DynamoDB at runtime, you could use a workaround like writing a simple shell script that copies the correct template file into the Lambda folder before building and deploying the stack. This way the Lambda gets deployed in a package with the code and only the desired JSON template.
I decided to go with the S3 setup and also improved efficiency by storing the JSON in a global variable (after reading it the first time). So I read it once and use it for the lifetime of the Lambda container.
I'm not sure this is the best solution, but it works well enough for my scenario.
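A minimal sketch of that caching pattern in Java, assuming the AWS SDK for Java v1 and the aws-lambda-java-events library; the handler class and the two environment variable names are illustrative, not from the original setup:
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.SQSEvent;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class TransformHandler implements RequestHandler<SQSEvent, Void> {

    // Cached for the lifetime of the Lambda container; S3 is hit only on cold start.
    private static String jsonTemplate;
    private static final AmazonS3 S3 = AmazonS3ClientBuilder.defaultClient();

    @Override
    public Void handleRequest(SQSEvent event, Context context) {
        if (jsonTemplate == null) {
            jsonTemplate = S3.getObjectAsString(
                    System.getenv("TEMPLATE_BUCKET"),  // illustrative variable name
                    System.getenv("TEMPLATE_KEY"));    // illustrative variable name
        }
        // ... transform each SQS message using jsonTemplate ...
        return null;
    }
}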

Integration Server - save pipeline to database

I'm trying to create a service that writes the input and output pipeline (XML data) every time one of the installed services is invoked. How should my service know when another service is invoked? Is there any built-in tool to catch the in & out pipeline and save it to a file or any other source?
You can use one of webMethods' built-in services in the WmPublic package:
pub.flow:savePipelineToFile
Note that it's not recommended to use "savePipelineToFile" in production for obvious performance/resource reasons. And of course, to load the pipeline from the file, use:
pub.flow:restorePipelineFromFile
The usual workflow for debugging in webMethods is:
1. Add the "savePipelineToFile" step as the first instruction in your flow service.
2. Execute the flow service, or execute from some client that will trigger the flow service.
3. Disable the "savePipelineToFile" step in the flow service.
4. Add the "restorePipelineFromFile" step at the very top of your flow service.
5. Run webMethods Designer in Debug mode and step through. Once it goes over the "restorePipelineFromFile" step, you should see the entire content of the pipeline, including the inputs as passed by the client. If you continue stepping through your flow service step by step, you should see what the final output is.
The services "savePipelineToFile" and "restorePipelineFromFile" are very useful to debug your services. The "pipeline" file will be located at:
IntegrationServer\pipeline
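If you ever need to trigger the save from inside a Java service rather than as a flow step, a minimal sketch using Service.doInvoke could look like this (the file name value is illustrative):
import com.wm.app.b2b.server.Service;
import com.wm.data.IData;
import com.wm.data.IDataCursor;
import com.wm.data.IDataFactory;
import com.wm.data.IDataUtil;

// build the input pipeline for pub.flow:savePipelineToFile
IData input = IDataFactory.create();
IDataCursor cursor = input.getCursor();
IDataUtil.put(cursor, "fileName", "debugPipeline.xml"); // illustrative file name
cursor.destroy();

// invoke the built-in service; the file lands under IntegrationServer\pipeline
Service.doInvoke("pub.flow", "savePipelineToFile", input);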
If your requirements dictate that you need to produce XML flat files, then use the following:
To serialize documents to XML (WmPublic package):
pub.xml:documentToXMLString
To write an XML string to a file (WmPublic package):
pub.file:stringToFile
Hope it helps!
EDIT:
By default the SOAP headers are hidden. Most of the time you don't need the SOAP headers, but you can force webMethods to add them to the pipeline by doing the following:
1) Click on the web service descriptor.
2) In the Properties panel, set "Pipeline Headers Enabled" to true.
The SOAP headers won't appear in your pipeline at design time; they will only exist at runtime. If you want to see the content of the SOAP header, you will need to use the "savePipelineToFile" / "restorePipelineFromFile" technique that I described previously.
To make the SOAP header appear at design time, you will have to map implicitly to the SOAP header fields as if they were there.

WSO2 API Manager and BAM - How to control API invocation?

How can I retrieve the number of API invocations? I know the data has to be somewhere, because WSO2 BAM shows pie charts with similar data...
I would like to get that number in a mediation sequence; is that possible? Might this be achieved via a DB lookup?
The way API usage monitoring in WSO2 API Manager works is that there is an API handler (org.wso2.carbon.apimgt.usage.publisher.APIUsageHandler) that gets invoked for each request and response passing through the API gateway. In this handler, all pertinent information with regard to API usage is published to the WSO2 BAM server. The WSO2 BAM server persists this data in the Cassandra database that is shipped with it. Then there is a BAM Toolbox, packaged with the required analytic scripts written using Apache Hive, that can be installed on the BAM server. These scripts summarize the data periodically and persist the summarized data to an SQL database. So the graphs and charts shown in the API Publisher web application are created using the summarized data from the SQL database.
Now, if what you require is extractable from these summarized SQL tables, then I suppose the process is very straightforward. You could use the DBLookup mediator for this. But if some dimension of the data you need has been lost due to the summarizing, then you will have a little more work to do.
You have two options.
The easiest approach, which involves no custom Java coding at all, would be to write a custom Hive script that suits your requirement and summarizes the data to an SQL table. Then, like before, use a DBLookup mediator to read the data. You can look at the existing Hive scripts that are shipped with the product to get an idea of how they are written.
If you don't want BAM in the picture, you can still do it with minimal coding as follows. The implementation class which performs the publishing is org.wso2.carbon.apimgt.usage.publisher.APIMgtUsageDataBridgeDataPublisher. This class implements the interface org.wso2.carbon.apimgt.usage.publisher.APIMgtUsageDataPublisher. The interface has three instance methods, as follows.
public void init()
public void publishEvent(RequestPublisherDTO requestPublisherDTO)
public void publishEvent(ResponsePublisherDTO responsePublisherDTO)
The init() method runs just once during server startup. Here is where you can add all your logic which is needed to bootstrap the class.
The publishEvent(RequestPublisherDTO) is where you publish request events and publishEvent(ResponsePublisherDTO) is where you publish response events. The DTO objects are encapsulated representations of the request and response data respectively.
What you will have to do is write a new implementation for this interface and configure it as the value of the DataPublisherImpl property in api-manager.xml. To make things easier, you can simply extend the existing org.wso2.carbon.apimgt.usage.publisher.APIMgtUsageDataBridgeDataPublisher, write the logic needed to persist usage data to an SQL database within init(), publishEvent(RequestPublisherDTO) and publishEvent(ResponsePublisherDTO), and at the end of each method just call the respective super class method.
E.g. the overriding init() will call super.init(). This way you are only adding the code necessary for your requirement, and leaving the BAM stat collection to the super class.
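A minimal sketch of such a subclass, using the class and method names from this answer (the DTO import package and the persistence calls are assumptions; fill in your own JDBC logic):
import org.wso2.carbon.apimgt.usage.publisher.APIMgtUsageDataBridgeDataPublisher;
import org.wso2.carbon.apimgt.usage.publisher.dto.RequestPublisherDTO;  // assumed package
import org.wso2.carbon.apimgt.usage.publisher.dto.ResponsePublisherDTO; // assumed package

public class SqlUsageDataPublisher extends APIMgtUsageDataBridgeDataPublisher {

    @Override
    public void init() {
        // bootstrap your own resources here, e.g. a JDBC connection pool (placeholder)
        super.init(); // keep the default BAM publishing intact
    }

    @Override
    public void publishEvent(RequestPublisherDTO requestPublisherDTO) {
        // persist the request usage data to your SQL database here (placeholder)
        super.publishEvent(requestPublisherDTO);
    }

    @Override
    public void publishEvent(ResponsePublisherDTO responsePublisherDTO) {
        // persist the response usage data to your SQL database here (placeholder)
        super.publishEvent(responsePublisherDTO);
    }
}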

How can I get the JSON object which represents a Yahoo! pipe

It seems that Yahoo Pipes are represented using JSON. I want to download these JSON objects for research purposes. Usually a Yahoo pipe is rendered in a browser editor through a URL like this: http://pipes.yahoo.com/pipes/pipe.edit?_id=XgRo96h13BGtJWvS8SvLAg, but you can't get the corresponding JSON object from it. Does anyone know how to get the JSON objects representing Yahoo pipes and store them in any persistent form?
It is possible to get hold of a JSON description of a Yahoo Pipe using a URL of the form:
http://pipes.yahoo.com/pipes/pipe.info?_out=json&_id=PIPE_ID
The pipe2py Python library demonstrates how to grab the JSON description of a pipe and "compile" it to a Python equivalent that can be run on your own server.
The post Exporting Yahoo Pipe Definitions, Compiling Them to Python, and Running Them in Scraperwiki describes how you can use pipe2py in the Scraperwiki environment to compile and execute pipes on Scraperwiki using pipe definitions imported directly from Yahoo Pipes, or exported from Yahoo Pipes and then stored locally in a Scraperwiki database table.
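For example, a quick Java sketch that downloads and prints that JSON description (the pipe ID is the one from the question; error handling omitted):
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;

public class FetchPipeJson {
    public static void main(String[] args) throws Exception {
        String pipeId = "XgRo96h13BGtJWvS8SvLAg"; // pipe ID from the question
        URL url = new URL("http://pipes.yahoo.com/pipes/pipe.info?_out=json&_id=" + pipeId);
        StringBuilder json = new StringBuilder();
        try (BufferedReader in = new BufferedReader(new InputStreamReader(url.openStream(), "UTF-8"))) {
            String line;
            while ((line = in.readLine()) != null) {
                json.append(line).append('\n');
            }
        }
        System.out.println(json); // persist this string wherever you like
    }
}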
When I load that page in a browser I can see that it makes an ajax request for:
http://pipes.yahoo.com/pipes/ajax.pipe.load?id=XgRo96h13BGtJWvS8SvLAg&_out=json&modinfo=true&rnd=7560&.crumb=MjvGjpzhPLl
That's your object, but I'm not sure if I'm answering your question of how to "get it". If you need to get it through a program, you would need a script that logs into Pipes and extracts that URL.
A quick way, while not automated, is to use an HTTP analyzer. Here's a process for getting the object using HttpFox (I use v0.8.9) for Firefox. With the analyzer running, load the edit page for a pipe, like the one you linked:
http://pipes.yahoo.com/pipes/pipe.edit?_id=XgRo96h13BGtJWvS8SvLAg
Look at the request with a URL that starts with:
http://pipes.yahoo.com/pipes/ajax.pipe.load?id=....
Next, explore the content of the request (there's a 'Content' tab in HttpFox). That's the JSON object representing the pipe structure.
Use pipe.run?[your pipe id here]&_render=json as opposed to pipe.edit.
So in your case, to get the JSON it would be: http://pipes.yahoo.com/pipes/pipe.run?_id=XgRo96h13BGtJWvS8SvLAg&_render=json
I guess how you implement the client depends on what you like writing in and what other functionality you need.
You could also do it the other way around and use the web service module to post the data to a script that can extract the JSON and persist it to a database. You could check out json.org.

List of Slaves connected to master - Hudson

Is there a way to find it programmatically? I need this as part of an automated run, so it would be very helpful if there is an existing remote API call which can give this.
You don't need to parse the HTML - most of the Hudson pages can be turned into API calls by adding a URL suffix, e.g. make GET calls to:
http://hudson:8080/computer/api/json
Swap the json suffix for either xml or python if you prefer those formats over JSON.
If you use just the api suffix, you'll get a short generic help page on the API.
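For instance, a minimal Java client for that endpoint (hostname and port are taken from the URL above; adjust them for your installation, and the JSON parsing is left out):
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ListNodes {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://hudson:8080/computer/api/json"))
                .GET()
                .build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        // the response JSON contains a "computer" array with one entry per node
        System.out.println(response.body());
    }
}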
Groovy script to get all computers:
// run this in the Jenkins/Hudson script console
def jenkins = Jenkins.instance
def computers = jenkins.computers
computers.each {
    println "${it.displayName} ${it.hostName}"
}
Look at http://hudson:8080/computer/