List of slaves connected to the master in Hudson

Is there a way to find this programmatically? I need it as part of an automated run, so an existing remote API call that can provide it would be very helpful.

You don't need to parse the HTML; most Hudson pages can be turned into API calls by adding a URL suffix, e.g. make a GET call to:
http://hudson:8080/computer/api/json
Swap json for xml or python in the URL if you prefer those formats over JSON.
If you use just the api suffix without a format, you'll get a short generic help page about the API.
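For example, here is a minimal Groovy sketch that lists all nodes via that remote API; the host name is the placeholder from above, and anonymous read access is assumed (add credentials if your instance is secured):
import groovy.json.JsonSlurper

// fetch the computer list from the remote API and print each node's name and offline state
def api = new URL('http://hudson:8080/computer/api/json')
def data = new JsonSlurper().parse(api.newReader())
data.computer.each { node ->
    println "${node.displayName} (offline: ${node.offline})"
}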

Groovy script (run in the script console) to list all computers:
import jenkins.model.Jenkins

// iterate over every computer (the master plus all agents/slaves) known to this instance
def computers = Jenkins.instance.computers
computers.each {
    // hostName can be null for nodes that are offline
    println "${it.displayName} ${it.hostName}"
}

Look at http://hudson:8080/computer/

How to write response values to a file in Karate

In my Karate tests I need to write response IDs to .txt files (or any other file format, such as JSON). I was wondering whether Karate has any capability to do this; I haven't seen anything in the documentation. If not, is there a simple JavaScript function that can do it?
Try the karate.write(value, filename) API, although we don't encourage it. Also, the file will be written only to the current "build" directory, which is target for Maven projects and for the stand-alone JAR.
value can be any data type, and Karate will write the bytes (or plain text) out. There is no built-in support for any other format.
Here is an example.
EDIT: for others coming across this answer in the future, the right thing to do is:
Don't write files in the first place; you never need to do this. This question is typically asked by inexperienced folks who, for some reason, think that the only way to "save" a response before validation is to write it to a file. Please don't waste your time: just match against the response. You can save it (or parts of it) to variables while you make other HTTP requests. And do not write your tests so that scenarios (or features) depend on other scenarios; this is a very bad practice. Also note that by default, Karate will dump all HTTP requests and responses into the log file (typically target/karate.log) and into the HTML report.
See if karate.write() works for you, as per this answer.
Write a custom Java class (or a JS function that uses the JVM) to do what you want, using Java interop; see the sketch after this list.
Also note that you can use karate.toCsv() to convert JSON into CSV if needed.
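As a rough illustration of the Java-interop route, here is a minimal JVM helper, sketched in Groovy; the class name FileUtil, the package, and the output directory are all assumptions for illustration, not part of Karate itself:
// hypothetical helper; compile it onto the test classpath,
// then a Karate feature can call it via: def FileUtil = Java.type('demo.FileUtil')
package demo

class FileUtil {
    // write the given text under the build directory and return the absolute path written
    static String writeText(String fileName, String content) {
        def file = new File('target', fileName)
        file.parentFile?.mkdirs()
        file.text = content
        return file.absolutePath
    }
}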
My justification for writing to a file is a different one. I am using Karate explicitly to implement a mock. I want to expose an endpoint where the upstream system sends some basic data in a JSON payload via a POST/PUT method; Karate then constructs the subsequent payload file and stores it in a specific folder, and this newly created payload file is exposed through another GET call.

How to parse a JSON response in a build step in Jenkins

In order to industrialize the deployments of an IBM product, I'm going to use its REST APIs.
I plan to use Jenkins to orchestrate the calls to the REST APIs.
I'm still wondering if it's a good idea...?
If so, is there any way to simply parse the JSON responses so that I can add some conditions to the steps?
Thanks.
You didn't say what you're using the API for, but if you have the Groovy plugin, you could use JsonSlurper.
Something like:
import groovy.json.JsonSlurper

URL apiUrl = "https://some.website/api/someFunction".toURL()
// parse() returns a List or a Map, depending on the JSON root element
def json = new JsonSlurper().parse(apiUrl.newReader())
// do stuff with the json object
I'm not quite sure how you would take this and use it directly for conditional build steps during the execution of the job, though.
An alternative approach is to generate a set of jobs with the appropriate steps, based on the API response, using the Job DSL Plugin. This sort of thing can be used for tasks like reading a list of SCM branches and generating a job for each of them; a seed-script sketch follows below. That may or may not be what you're trying to do.
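For illustration, a minimal Job DSL seed script; the repository URL, branch list, and job names are all placeholders (in practice the branch list would come from your SCM or the API response):
// Job DSL seed script: generate one job per branch
def branches = ['master', 'develop']  // placeholder; derive from the API response in practice
branches.each { branchName ->
    job("myapp-${branchName}") {
        scm {
            git('https://example.com/myapp.git', branchName)
        }
        steps {
            shell('./build.sh')
        }
    }
}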
jq (https://stedolan.github.io/jq/) is a command-line JSON processor that works well from bash. I have used it in the past and it's beautiful.
You can download jq to your Jenkins server and then call it from the bash scripts in your build steps.
If you are using a Pipeline job, you will be happy with the Pipeline Utility Steps plugin; see the sketch below.
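For example, a minimal scripted-Pipeline sketch; the URL is a placeholder, the status and name fields are invented for illustration, and the httpRequest step assumes the HTTP Request plugin is also installed:
// scripted Pipeline, using the Pipeline Utility Steps and HTTP Request plugins
node {
    def response = httpRequest 'https://some.website/api/someFunction'
    def json = readJSON text: response.content  // parses into a Map or List
    if (json.status == 'READY') {
        echo "Deploying ${json.name}"
    } else {
        error "API not ready: ${json.status}"
    }
}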

How to transform JSON in Azure Logic Apps?

I'm trying to create an Azure Logic App that broadly does the following:
1. Use an HTTP call to a REST service; the REST service will return JSON with IDs. This is working fine, and the resulting JSON looks a bit like this: "workItems": [ { "id": 118 }, { "id": 119 }, ... ]
2. Extract all the IDs and put them into a comma-separated string, e.g. 118, 119, etc.
3. Use the comma-separated string as part of another HTTP REST call.
However, I'm struggling at step 2. I can't see where I can write some script or code (without building a custom Logic App component) to do this transformation.
At the moment I've tried using the BizTalk Apps to convert the JSON to XML, then use XPath, then hopefully get that back into a string at some point, but this whole process seems overly complicated.
I realise I could write a custom app, but if I did that then I might as well just do all the work in the custom app as well. It would be nice to use the native features of Azure if possible.
I'm afraid I might be missing something obvious. Suggestions would be appreciated.
Try the CsScripting API. It enables you to run some simple C# code and has the Newtonsoft libraries available. I usually write the code as a console app for testing first, before plugging it into the Logic App action.
EDIT: WebJobs webhooks are now deprecated. Use Azure Functions generic webhooks instead; they have direct integration support with Logic Apps.
One option is to use a WebJob webhook and do the transformation/filtering there. I have an example on GitHub of using this to filter posts to Slack. If you already have a Web/Mobile/API app up and running, it's easy to host a WebJob on it, so you don't necessarily need additional resources.
Your other option you highlighted yourself: deploy an API App which will do the transform for you. Either way, the transform logic itself is small; see the sketch after this answer.
If you want to go down the WebJob route and need any help, let me know and I'll be glad to assist.
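Whichever host does the work, the transformation itself is tiny. Here is a sketch of the logic, written in Groovy purely for illustration (in an API App or WebJob you would write the C# equivalent with Newtonsoft's JSON library); the field names match the JSON shown in the question:
import groovy.json.JsonSlurper

// pull the ids out of the workItems array and join them into a comma-separated string
def body = '{ "workItems": [ { "id": 118 }, { "id": 119 } ] }'
def json = new JsonSlurper().parseText(body)
def ids = json.workItems*.id.join(', ')
assert ids == '118, 119'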

WSO2 API Manager 1.8.0 - JSON parsing issue

I am new to WSO2 API Manager and am trying to set it up to expose my plain HTTP POST back-end calls as a REST API. I am sure this is not a new pattern. The requirement is to convert the incoming JSON data (which has array structures) into HTTP URL query-string parameters.
After some research through the documentation and other posts on this forum, I decided to go with the script mediator, which would parse the JSON data and convert it to a string that can be appended to the endpoint URL. Somehow I am not able to achieve this.
I have followed the post below, which seems very straightforward. Like the original poster, I am not able to use the getPayloadJSON() method. I even have trouble using the workaround suggested there, because JSON.parse() does not work.
Link
Also, this approach of editing the service bus configuration from the source view does not sound like the correct option. Is there a more elegant solution to achieve this? Thanks in advance for the help.
I was able to get both methods working by using an external script instead of inline JavaScript. Thanks.

How can I get the json object which represents a Yahoo! pipe

It seems that Yahoo pipes are represented using JSON. I want to download these JSON objects for research purposes. Usually a Yahoo pipe is rendered in a browser editor through a URL like this: http://pipes.yahoo.com/pipes/pipe.edit?_id=XgRo96h13BGtJWvS8SvLAg, but you can't get the corresponding JSON object for the pipe that way. Does anyone know how to get the JSON objects representing Yahoo pipes and store them in some persistent form?
It is possible to get hold of a JSON description of a Yahoo Pipe using a URL of the form:
http://pipes.yahoo.com/pipes/pipe.info?_out=json&_id=PIPE_ID
The pipe2py Python library demonstrates how to grab the JSON description of a pipe and "compile" it to a Python equivalent that can be run on your own server.
The post Exporting Yahoo Pipe Definitions, Compiling Them to Python, and Running Them in Scraperwiki describes how you can use pipe2py in the Scraperwiki environment to compile and execute pipes on Scraperwiki using pipe definitions imported directly from Yahoo Pipes, or exported from Yahoo Pipes and then stored locally in a Scraperwiki database table.
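For instance, a minimal Groovy sketch that fetches a pipe's JSON description from the pipe.info endpoint and stores it to disk, using the pipe ID from the question:
// download the JSON description of a Yahoo pipe and persist it to a local file
def pipeId = 'XgRo96h13BGtJWvS8SvLAg'
def url = "http://pipes.yahoo.com/pipes/pipe.info?_out=json&_id=${pipeId}".toURL()
new File("${pipeId}.json").text = url.text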
When I load that page in a browser, I can see that it makes an AJAX request for:
http://pipes.yahoo.com/pipes/ajax.pipe.load?id=XgRo96h13BGtJWvS8SvLAg&_out=json&modinfo=true&rnd=7560&.crumb=MjvGjpzhPLl
That's your object, but I'm not sure I'm answering your question of how to "get it". If you need to get it through a program, you would need a script that logs into Pipes and extracts that URL.
A quick way, while not automated, is to use an HTTP analyzer. Here's a process for getting the object using HttpFox (I use v0.8.9) for Firefox. With the analyzer running, load the edit page for a pipe, like the one you linked:
http://pipes.yahoo.com/pipes/pipe.edit?_id=XgRo96h13BGtJWvS8SvLAg
Look at the request with a URL that starts with:
http://pipes.yahoo.com/pipes/ajax.pipe.load?id=....
Next, explore the content of the request (there's a 'Content' tab in HttpFox). That's the JSON object representing the pipe structure.
Use pipe.run?_id=[your pipe id here]&_render=json as opposed to pipe.edit.
So in your case to get the json it would be - http://pipes.yahoo.com/pipes/pipe.run?_id=XgRo96h13BGtJWvS8SvLAg&_render=json
I guess how you implement the client depends on what you like writing in and what other functionality you need.
You could also do it the other way around and use the Web Service module to post the data to a script that extracts the JSON and persists it to a database. You could check out json.org.