I'm attempting to create a playbook that uses the uri module to create new watcher tasks in Elasticsearch. However, I seem to be getting something wrong with the conditional that checks the registered payload, as I keep running into an error: "error while evaluating conditional: payload.contents.find("true") != -1".
What is the correct way to evaluate the JSON returned by the uri module?
Related files on gist
(By the way, I'm running this playbook via Vagrant provisioning, so debugging by dumping the variable from the playbook is next to impossible, and the playbook dies before the register happens anyway.)
How about something like:
changed_when: (payload.content | from_json).created | bool
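In context, that would look something like the task below. This is a minimal sketch, assuming a reasonably recent Ansible where uri supports body_format and return_content; the URL, variable, and watch names are illustrative:

# Create the watcher task and decide "changed" from the JSON response.
# return_content must be enabled for payload.content to exist.
- name: Create watcher task
  uri:
    url: "http://localhost:9200/_watcher/watch/my_watch"
    method: PUT
    body: "{{ watcher_body }}"
    body_format: json
    status_code: [200, 201]
    return_content: true
  register: payload
  changed_when: (payload.content | from_json).created | bool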
It looks like, since I'm using the EPEL build, the issue was related to Ansible pull request #1011: with body_type set to json, the module was not treating the variable as a string value, because the body was being passed to uri as a dictionary instead.
The fix for this was to use lookup('template', '...') instead of a variable.
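In the task above, that change would look roughly like this (the template filename is illustrative; the lookup renders the template to a plain string before uri receives it):

# Before: the variable resolves to a dictionary, which this build mishandled
# body: "{{ watcher_body }}"
# After: the template lookup yields the rendered JSON as a string
body: "{{ lookup('template', 'watcher-task.json.j2') }}"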
I have a CloudFormation template that consists of a Lambda function that reads messages from an SQS queue.
The Lambda function will read the message from the queue and transform it using a JSON template (which I want to be injected externally).
I will deploy different stacks for different products, and for each product I will provide a different JSON template to be used for the transformation.
I have a few options but couldn't decide which one is better:
1. I can write all the JSON files under the project, pack them together, and pass the related JSON name as a parameter to the Lambda.
2. I can store the JSON files on S3 and pass the S3 URL to the Lambda, so I can read them at runtime.
3. I can store the JSON files in DynamoDB and read from there, using the same approach as option 2.
The first one seems like the better approach, as I don't need to read from an external file on every Lambda execution, but I will need to pack all the templates together.
The last two are a cleaner approach, but they require an external call to fetch the JSON on every invocation.
Another approach could be (I'm not sure if it is possible) to inject a JSON file into the Lambda on deploy, from an S3 bucket or something similar, and have the Lambda function read it like an environment variable.
As you can see from the CloudFormation documentation, Lambda environment variables can only be a Map of Strings, so the actual value you pass to the function as an environment variable must be a String. You could pass your JSON as a string, but the problem is that the maximum size for all environment variables combined is 4 KB.
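If your templates fit within that limit, reading one back inside the function is trivial. A minimal sketch for a Python Lambda, where the variable name TRANSFORM_TEMPLATE is just an example:

import json
import os

# Parsed once at module load; environment variables are plain strings,
# and all of them together must stay under 4 KB.
TEMPLATE = json.loads(os.environ["TRANSFORM_TEMPLATE"])

def handler(event, context):
    # ... transform the incoming SQS message using TEMPLATE ...
    return TEMPLATE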
If your templates are bigger and you don't want to call S3 or DynamoDB at runtime, you could use a workaround: write a simple shell script that copies the correct template file into the Lambda folder before building and deploying the stack. This way the Lambda gets deployed in a package with the code and only the desired JSON template.
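A minimal sketch of such a script, assuming the templates live in a templates/ directory and the product name is passed as the first argument (all paths, the bucket variable, and the deploy commands are illustrative; substitute your own tooling):

#!/bin/sh
set -e
# Copy the product-specific template next to the Lambda code,
# then build and deploy the stack as usual.
PRODUCT="$1"
cp "templates/${PRODUCT}.json" lambda/transform-template.json
aws cloudformation package --template-file template.yaml \
    --s3-bucket "$ARTIFACT_BUCKET" --output-template-file packaged.yaml
aws cloudformation deploy --template-file packaged.yaml \
    --stack-name "product-$PRODUCT" --capabilities CAPABILITY_IAM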
I decided to go with the S3 setup, and also improved efficiency by storing the JSON in a global variable (after reading it the first time), so I read it once and use it for the lifetime of the Lambda container.
I'm not sure this is the best solution, but it works well enough for my scenario.
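A minimal sketch of that caching pattern, assuming a Python Lambda with the bucket and key passed via environment variables (all names here are illustrative):

import json
import os

import boto3

# Module-level cache: survives for the lifetime of the Lambda container,
# so S3 is only called on the first (cold) invocation.
_template = None

def _load_template():
    global _template
    if _template is None:
        s3 = boto3.client("s3")
        obj = s3.get_object(Bucket=os.environ["TEMPLATE_BUCKET"],
                            Key=os.environ["TEMPLATE_KEY"])
        _template = json.loads(obj["Body"].read())
    return _template

def handler(event, context):
    template = _load_template()
    # ... transform the incoming SQS message using the template ...
    return template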
This command, executed in the Spring XD shell:
http get --target https://webserviceaddress
gives me a JSON document.
Does anybody have an idea how to create a stream with this as the source?
I see a way to do it with a custom module, but maybe I missed a simpler solution...
The http "command" is a convenience command in the XD shell, so that you don't have to use curl or any external command. It just makes a one-off http request to some endpoint (the default address being http://localhost:9000 which happens to be where the http source module would listen -- again, this is for convenience).
If you want to create a stream, then you need a module that is able to make http requests to a remote endpoint. The http-client processor module does just that. It needs to be triggered by some external source, e.g. the trigger module.
See http://docs.spring.io/spring-xd/docs/current-SNAPSHOT/reference/html/#http-client
To use a web service with the GET method as a source, I need trigger as the source and http-client as the following module. For example, the stream below will get the content from the web service every 60 seconds and write it to a file:
stream create --name stream_name --definition "trigger --fixedDelay=60 | http-client --url='''https://webservice.url''' --httpMethod=GET | file" --deploy
I've been using Newman, the new command line for Postman, and have been attempting to run Collection tests that work fine when run through the packaged app's Jetpacks add-on, but that do not run properly on the command line. The JSON Collection file I am passing does contain the proper header declarations, but I don't have any other clues at this point, so I suspect this may be an HTTP header issue. I am not sure exactly what is wrong, as I am rather new to using Postman.
The tests that I'm trying to run are on some calls to an ASP.NET Web API: very simple, one-line JavaScript server-response checks like the ones in this tutorial.
A sample line that I enter into the console:
$ newman -c collectionfile.json -e environmentfile.json -n 5
produces this result:
RequestError: [token] terminated. Error: undefined
Any suggestions/help would be appreciated.
I ran into this problem as well and spent quite a few hours trying to figure it out. Eventually I realized that an address such as "www.google.com" will work in the Chrome plugin UI, but not in Newman. For it to work in Newman, the address must be "https://www.google.com". Also, make sure that you have your data file (if you are using variables like {{url}}) set up correctly. You can use "-d dataFile.json" to define a data file. More information on that here.
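For reference, a data file for -d is just a JSON array of rows, each row supplying values for your {{variables}}. A minimal sketch, with the url key matching the variable name used above:

[
  { "url": "https://www.google.com" }
]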
We want to start a process in jBPM 6 using the REST API. We need to pass an object as a process variable.
We know how to do it with JAXB and the execute call, but we want to do it with JSON and /runtime/{deploymentId}/process/{processDefId}/start.
Is it possible? We have tried and had no success.
I am not sure whether my answer exactly addresses the question, but I'll place a couple of lines here for someone's future use.
If you want to set a process variable when starting a process using the RESTful API, you can do it like this.
If your variable name is myVar, just add the value as a URL parameter by prefixing the parameter name with "map_". That means the parameter name should now be map_myVar. For an example, see the request below.
http://<host>:<port>/jbpm-console/rest/runtime/{deploymentId}/process/{processDefId}/start?map_myVar=myValue
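From the command line, that could look like this. A sketch reusing the deployment and process IDs mentioned later in this thread, with placeholder credentials, since the jBPM console's REST API normally requires authentication:

# POST to the start endpoint; the variable travels in the query string
curl -X POST -u admin:admin \
  "http://localhost:8080/jbpm-console/rest/runtime/com.web:work:1.0/process/work.worload/start?map_myVar=myValue"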
You can confirm that the value is set by putting the code below in a script task.
Object var = kcontext.getVariable("myVar");
System.out.println("myVar : " + var);
See the "17.1.1.3. Map query parameters" section of the jBPM 6 documentation.
After talking to the dev responsible for the REST API, I was able to confirm how it works.
The endpoint
/runtime/{deploymentId}/process/{processDefId}/start
is a POST request in which all the contents of the payload are ignored. The variables are passed as key=value pairs in the query string.
With deployment ID: com.web:work:1.0
With processDefId: work.worload
Two variables: var1 and var2
For example:
/runtime/com.web:work:1.0/process/work.worload/start?var1=a&var2=b
I'm still trying to understand how to define objects with the remote API.
Meanwhile, I also confirmed that it is impossible to pass objects this way. The only way to pass objects is by using JAXB, via the "/execute" path.
Similar to how http://localhost/jenkins/job/job_name/25/api/json returns a JSON object with the details of build 25, is there a way to get a similar object when first initiating the job, i.e., before you know what the build number is?
I noticed that a curl POST request to the build URL returns HTML that includes a build number; however, I would prefer not to have to parse this, in favor of having a JSON object with the build number in it. Currently, I am using:
curl -v --data "param1=value&param2=value" \
  http://localhost/jenkins/job/job_name/buildWithParameters
which initiates the job fine and outputs a bunch of HTML. Is there a way to start this job and receive a JSON object with the build number?
If you query http://localhost/jenkins/job/job_name/api/json, you can fetch the nextBuildNumber field at any time; it tells you the number the next build will get.
When you trigger a build, you can rest assured the build will get exactly this number.
Be aware, though, that nextBuildNumber may not be the correct build number in all cases. If you have triggered two different builds of the same job, you don't know which one got triggered first; there is a race condition here. Checking the build queue may not give the correct build number either.
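For example, using the JSON API's tree parameter to fetch just that field (a sketch; adjust the host and job name to your setup):

# Returns a small JSON document containing only the nextBuildNumber field
curl -s "http://localhost/jenkins/job/job_name/api/json?tree=nextBuildNumber"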
As soon as the build has been triggered, you can get its URL back from:
http://localhost/jenkins/job/job_name/api/json?tree=lastBuild[url]
This will return the running build if there is one, or the latest completed build otherwise. You can then add "/api/json" to that URL to get your build's JSON object.
In my scenario I needed a JSONP data type for the request to go through. What I did was get the raw object for my particular job from Jenkins, so that I could then manipulate it as necessary.
Request:
$.ajax({
    url: "http://<jenkins server>/job/<job name>/api/json?jsonp=?",
    dataType: 'jsonp',
    success: success
});
Success call:
// Define this before the $.ajax call runs, since it is referenced there
var success = function(json) {
    console.log('Raw JSON object for this job:');
    console.log(json);
};
Then, get the info you need, such as:
console.log(json.lastCompletedBuild.number);
console.log(json.lastBuild.url);