How to extract workflow action name in oozie - hadoop2

I have many Java actions and I have to extract the action names.
Is there an EL function to extract the action name?
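Not a confirmed answer, but a hedged sketch of one possible approach from inside the Java action itself: Oozie points each Java action at its own configuration file via the oozie.action.conf.xml system property, and the oozie.action.id value in that configuration has the form workflowId@actionName, so the name can be split out. Everything beyond those two property names is illustrative:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;

public class ActionName {
    public static void main(String[] args) {
        // Oozie points a Java action at its configuration file via this system property
        Configuration conf = new Configuration(false);
        conf.addResource(new Path("file:///" + System.getProperty("oozie.action.conf.xml")));
        // An Oozie action id has the form <workflow-id>@<action-name>
        String actionId = conf.get("oozie.action.id");
        System.out.println(actionId.substring(actionId.indexOf('@') + 1));
    }
}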


How to GET only custom Field Names from Azure Form Recognizer API?

I trained a Custom Form Recognizer Model. It tests great. But I can't find the API endpoint to call that returns ONLY the key/value pairs for the form I sent the model to analyze.
Example:
I trained a custom model to find First name and Last name only
When I POST a PDF to the endpoint:
https://{my-project}.cognitiveservices.azure.com/formrecognizer/documentModels/{my-model}:analyze?api-version=2022-08-31
Then view JSON from the Operation-Location header:
https://form-rec-ocr-test.cognitiveservices.azure.com/formrecognizer/documentModels/{model-name}/analyzeResults/{GUID}?api-version=2022-08-31
I get all of the text on the submitted PDF instead of only the First name and Last name fields. I just want the key/value pairs so I can insert them into a table.
What is the correct API endpoint to call? The API docs here are focused on prebuilt models instead of custom models.
For some reason, this was not spelled out in any documentation I came across. Found the answer in this video at 36:30.
The data was in the original JSON object all along, just around line 3300, under the documents node.
It would simplify things if the Form Recognizer API could return ONLY the documents array via a simple query parameter.
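For illustration, a minimal Jackson sketch of pulling only the field name/content pairs out of the analyze result. The analyzeResult.documents[*].fields shape matches the 2022-08-31 response as I understand it; the file name is a placeholder for wherever you saved the JSON:

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Iterator;
import java.util.Map;

public class ExtractFields {
    public static void main(String[] args) throws Exception {
        // Parse the saved analyze-result JSON (file name is a placeholder)
        JsonNode root = new ObjectMapper().readTree(Files.readAllBytes(Paths.get("analyzeResult.json")));
        // Walk only analyzeResult.documents[*].fields, ignoring the full-text content
        for (JsonNode doc : root.path("analyzeResult").path("documents")) {
            Iterator<Map.Entry<String, JsonNode>> fields = doc.path("fields").fields();
            while (fields.hasNext()) {
                Map.Entry<String, JsonNode> field = fields.next();
                System.out.println(field.getKey() + " = " + field.getValue().path("content").asText());
            }
        }
    }
}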

How do I pass JSON as a parameter to AWS Lambda

I have a CloudFormation template that consists of a Lambda function that reads messages from the SQS Queue.
The Lambda function will read the message from the queue and transform it using a JSON template (which I want to be injected externally).
I will deploy different stacks for different products, and for each product I will provide a different JSON template to be used for the transformation.
I have different options but couldn't decide which one is better:
1. I can write all JSON files under the project, pack them together, and pass the related JSON name as a parameter to the Lambda.
2. I can store the JSON files on S3 and pass the S3 URL to the Lambda so I can read them at runtime.
3. I can store the JSON files in DynamoDB and read from there, using the same approach as in 2.
The first one seems like the better approach as I don't need to read from an external source on every Lambda execution, but I will need to pack all templates together.
The last two are cleaner but require an external call to read the JSON on every invocation.
Another approach could be (I'm not sure if it is possible) to inject a JSON file into the Lambda on deploy from an S3 bucket or something similar, so the Lambda function can read it like an environment variable.
As you can see from the CloudFormation documentation, Lambda environment variables can only be a map of strings, so the actual value you pass to the function as an environment variable must be a string. You could pass your JSON as a string, but the problem is that the max total size for all environment variables is 4 KB.
If your templates are bigger and you don't want to call S3 or DynamoDB at runtime, you could do a workaround like writing a simple shell script that copies the correct template file into the Lambda folder before building and deploying the stack. This way the Lambda gets deployed in a package with the code and only the desired JSON template.
I decided to go with the S3 setup and also improved efficiency by storing the JSON in a global variable after reading it the first time. So I read it once and use it for the lifetime of the Lambda container.
I'm not sure this is the best solution, but it works well enough for my scenario.
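A minimal sketch of that caching pattern, assuming the AWS SDK for Java v2; the TEMPLATE_BUCKET/TEMPLATE_KEY environment variable names are hypothetical:

import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.GetObjectRequest;

public class TemplateCache {
    // A static field survives across invocations on a warm Lambda container
    private static String templateJson;

    public static String template() {
        if (templateJson == null) { // hit S3 only on a cold start
            try (S3Client s3 = S3Client.create()) {
                templateJson = s3.getObjectAsBytes(GetObjectRequest.builder()
                        .bucket(System.getenv("TEMPLATE_BUCKET")) // hypothetical env var names
                        .key(System.getenv("TEMPLATE_KEY"))
                        .build()).asUtf8String();
            }
        }
        return templateJson;
    }
}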

Dynamic searchParameters in an HTTP request in JMeter

IndexFile.csv is like:
type,text,code,,,FileOne.csv
req,,,,,FileTwo.csv
and so on, which means a dynamic number of parameters for the HTTP request.
FileOne.csv is like:
44-3ef-k23,string,http://someThing:port/Something|something
string,"string,string",1234
So I need to handle the encoding, i.e. UTF-8, while reading the file.
Can someone help me with how to do this?
One thing is obvious: you should not be using the __CSVRead() function inside a JSR223 script.
According to JSR223 Sampler documentation:
JMeter processes function and variable references before passing the script field to the interpreter, so the references will only be resolved once. Variable and function references in script files will be passed verbatim to the interpreter, which is likely to cause a syntax error. In order to use runtime variables, please use the appropriate props methods, e.g.
props.get("START.HMS");
props.put("PROP1","1234");
So I would suggest using the File.readLines() function to read your CSV file(s) into memory; once done, you should be able to call the split() function to split each line by comma and do what you need with the results.
Check out the Working with Files chapter of The Groovy Templates Cheat Sheet for JMeter article for more information and examples.
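For illustration, a minimal Java sketch of the same idea (Files.readAllLines with an explicit charset is the plain-Java counterpart of Groovy's File.readLines; FileOne.csv is the file from the question):

import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;

public class CsvSketch {
    public static void main(String[] args) throws Exception {
        // Read the whole file with an explicit UTF-8 charset
        List<String> lines = Files.readAllLines(Paths.get("FileOne.csv"), StandardCharsets.UTF_8);
        for (String line : lines) {
            // Naive comma split: a quoted field like "string,string" will be broken apart,
            // so a real CSV parser is needed if quoted fields matter
            String[] cols = line.split(",");
            System.out.println(String.join(" | ", cols));
        }
    }
}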

How to find the names of services used in a transformer in webMethods

I am trying to print the names of the services invoked in a FlowService using a Java program. I am able to print the service names using the code below:
import com.wm.app.b2b.server.InvokeState;
...
...
...
InvokeState invkState = InvokeState.getCurrentState();
//below line prints all service names which are invoked in a flow service
System.out.println(invkState.getCallStack());
However, when I use transformers and try to invoke a custom service (which I created), the above code doesn't print the services that are invoked via the transformer. And if I happen to use a pub service, it displays the service name, but not in the case of my custom service.
Here is an image for better understanding.
Any inputs would be highly appreciated.
After I tried mapping my transformer's outputs to the pipeline, my code started displaying the service names invoked via the transformer.
Figured out that transformers are only invoked when their outputs are mapped to the outgoing pipeline of the step.
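For completeness, a hedged sketch of printing the call-stack entries one by one. This assumes getCallStack() returns a java.util.Stack you can iterate, which matches the usage above but may vary by webMethods version:

import com.wm.app.b2b.server.InvokeState;

public class CallStackDump {
    public static void dump() {
        // Assumption: each entry on the stack represents a service in the current invocation chain
        for (Object entry : InvokeState.getCurrentState().getCallStack()) {
            System.out.println(entry);
        }
    }
}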

Submitting a parameterized Hudson build via the REST API

Hudson supports submitting a build by doing an HTTP GET to an API. I need to pass some parameters to such a build. Just adding them as additional URL parameters doesn't work for me. Is this supposed to work? Is there some other mechanism?
Is it possible to pass parameters to a Hudson job that will be triggered remotely?
Check this question.
Instead of /build, use /buildWithParameters. I'm currently using it with a simple wget.
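For example (JOBNAME and PARAM are placeholders for your job and parameter names):
wget "http://hudson/job/JOBNAME/buildWithParameters?PARAM=value"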
Based on the HTML source on the web interface for starting a parameterized build, you need to do a POST to http://hudson/job/NAME/build with the parameters.
Update: It's a little more complicated. There's a hidden input with name "name" and value "MyParameter", then the input you actually fill in with name "value" and value "MyInput" (where MyParameter is your parameter name and MyInput is whatever you need to fill in). I haven't checked to see how this works with more than one parameter.
The POST works with just the json URL parameter, which contains a JSON list of the build parameters: json=%7B%22parameter%22%3A+%5B%7B%22name%22%3A+%22Input%22%2C+%22value%22%3A+%22data1%22%7D%2C+%7B%22name%22%3A+%22Input2%22%2C+%22value%22%3A+%22data2%22%7D%5D%2C+%7D
URL-decoded, that is: json={"parameter": [{"name": "Input", "value": "data1"}, {"name": "Input2", "value": "data2"}], }
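A minimal Java sketch of that POST (the job URL comes from the answers above; the parameter names are examples, and no authentication or CSRF handling is shown):

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class TriggerBuild {
    public static void main(String[] args) throws Exception {
        // JSON list of build parameters, matching the json= form parameter described above
        String json = "{\"parameter\": [{\"name\": \"Input\", \"value\": \"data1\"}]}";
        String body = "json=" + URLEncoder.encode(json, StandardCharsets.UTF_8.name());

        HttpURLConnection con = (HttpURLConnection) new URL("http://hudson/job/NAME/build").openConnection();
        con.setRequestMethod("POST");
        con.setDoOutput(true);
        con.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
        try (OutputStream os = con.getOutputStream()) {
            os.write(body.getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("HTTP " + con.getResponseCode()); // e.g. 302/201 on success, depending on version
    }
}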