To automate deployments of an IBM product, I'm going to use its REST API interfaces.
I plan to use Jenkins to orchestrate the calls to these REST APIs.
I'm still wondering whether that's a good idea...?
If so, is there a simple way to parse the JSON responses so that I can add conditional logic to the build steps?
Thanks.
You didn't say what you're using the API for, but if you have the Groovy plugin, you could use JsonSlurper.
Something like this:
import groovy.json.JsonSlurper

// hit the endpoint and parse the response body into Groovy maps/lists
URL apiUrl = "https://some.website/api/someFunction".toURL()
def json = new JsonSlurper().parse(apiUrl.newReader())
// do stuff with the parsed json object (a Map or List, depending on the response)
I'm not quite sure how you would take this and use it directly for conditional build steps during the execution of the job, though.
An alternative approach is to generate a set of jobs with the appropriate steps based on the API response, using the Job DSL Plugin. This sort of thing can be used for stuff like reading a list of SCM branches and generating jobs for each of them. That may or may not be what you're trying to do.
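For example, a Job DSL seed script could read the API response and generate one job per entry; this is only a rough sketch, with the API URL, repository and build command all made up:
import groovy.json.JsonSlurper

// fetch the branch list from the (made-up) API and create one job per branch
def branches = new JsonSlurper().parse('https://some.website/api/branches'.toURL().newReader())
branches.each { branch ->
    job("example-build-${branch}") {
        scm {
            git('https://example.com/example-repo.git', branch)
        }
        steps {
            shell('./build.sh')
        }
    }
}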
jq (https://stedolan.github.io/jq/) is a command-line JSON processor that works well from bash. I have used it in the past and it's beautiful.
You can download jq onto your Jenkins server and then call it from your build-step bash scripts.
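For instance, in a scripted Pipeline the same idea looks roughly like this (with a freestyle job the curl | jq pipe would go straight into an "Execute shell" build step); the endpoint URL and the .status field are made up:
// extract one field with jq and branch on it (assumes jq is on the agent's PATH)
def status = sh(script: "curl -s https://some.website/api/someFunction | jq -r '.status'", returnStdout: true).trim()
if (status == 'OK') {
    echo 'API reports OK'
}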
If you are using a Pipeline job, you will be happy with the Pipeline Utility Steps Plugin.
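Its readJSON step turns a response body into an object you can branch on; a minimal scripted-Pipeline sketch, with the endpoint URL and the status field made up:
node {
    def response = sh(script: 'curl -s https://some.website/api/someFunction', returnStdout: true).trim()
    // readJSON comes from the Pipeline Utility Steps plugin
    def json = readJSON text: response
    if (json.status == 'OK') {
        echo 'Deployment step succeeded, moving on'
    } else {
        error "Unexpected status: ${json.status}"
    }
}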
I'd like to write an object to GCP object storage while using the x-goog-if-generation-match feature. Using the @google-cloud/storage npm library, the file object does not seem to have an option for setting the required object generation.
What are the alternatives?
As you noticed, the @google-cloud/storage npm library doesn't support generation and metageneration preconditions.
As an alternative, you may use either the Storage XML API or the Storage JSON API, both of which support these preconditions. Depending on which one you choose, you'll set the preconditions either via HTTP headers or via query string parameters. You'll find the whole list of those here.
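To illustrate the request shape with the JSON API, the precondition goes in the ifGenerationMatch query parameter. A rough sketch as a raw HTTP call from Groovy, purely for illustration; the bucket, object name, generation and access token are placeholders:
def bucket = 'my-bucket'                 // placeholder
def objectName = 'my-object.json'        // placeholder
def generation = 1234567890L             // the generation you read beforehand
def accessToken = 'ya29.placeholder'     // an OAuth2 token with a storage scope

def url = new URL("https://storage.googleapis.com/upload/storage/v1/b/${bucket}/o" +
        "?uploadType=media&name=${objectName}&ifGenerationMatch=${generation}")
def conn = url.openConnection()
conn.requestMethod = 'POST'
conn.doOutput = true
conn.setRequestProperty('Authorization', "Bearer ${accessToken}")
conn.setRequestProperty('Content-Type', 'application/json')
conn.outputStream.withWriter('UTF-8') { it << '{"example": true}' }
// 412 Precondition Failed here means the object's generation no longer matches
println conn.responseCode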
Another alternative is to use some kind of optimistic locking:
get the generation id
write the object
get the generation id again
repeat until the generation after the write equals the generation before + 1
Is there a tool that allows me to create random JSON test data starting from a RAML file?
Example: starting from a RAML file describing an API, generate random static JSON responses to register as mappings in a WireMock mock server, so that I can run automated tests against the API.
I'm working with Java but tools/libraries in other languages would fit too.
Thanks
Maybe one of these suits your needs:
RAML Mock Server: Library for validating MockServer calls against a RAML API specification
RAML Tester: Test if a request/response matches a given RAML definition
soapui-raml-plugin
Allows you to import RAML files into SoapUI for testing your REST APIs
Allows you to generate a REST Mock Service for a RAML file being imported
Otherwise check: http://raml.org/projects/projects and filter by type 'test' and language.
Perhaps https://github.com/wavesoft/raml-lipsum might be useful.
The project page seems to indicate that it can do what you're asking. The other answers didn't seem to generate sample requests for a RAML service, which I think is what the question asks for (and which I was asked about at work today).
I've started using Apache NiFi and I'm still learning and experimenting with it. I really want to use NiFi to get JSON documents from APIs and put them in my Elasticsearch database. So far this works using the built-in GetTwitter and PutElasticsearch processors.
However, now I want to do this with APIs other than Twitter, and I'm somewhat stuck. First off, I don't even know which processor to use: I would think GetHttp or InvokeHttp (with GET as the HTTP verb), but it doesn't seem to work. If I use GetHttp, it seems I have to supply an SSL service with a keystore and truststore... why would I have to do that?
Apache NiFi is still quite new, so it's hard to find decent guides and information about these kinds of things. I have read and searched the documentation but haven't gotten any wiser.
An example JSON to pick up from an API is:
https://api.ssllabs.com/api/v2/getEndpointData?host=www.bnpparibasfortis.be&s=193.58.4.82
Thanks in advance to anyone who can offer some help or insight.
Which processor you use to get the JSON data depends entirely on the API you want to hit. The GetHttp or InvokeHttp processors should work for grabbing the data from a URL. If you'll notice, the SSL service is an optional property for both GetHttp and InvokeHttp, so you only need to use it when you want to communicate via HTTPS. Also, from the UI you can right-click on a processor and then click "Usage" to bring up the documentation for that processor.
At this link [1] you can find a NiFi template that uses GetHttp to get JSON data from randomuser.me and does various processing on it. It's primarily a template to showcase the different Avro processors, but the method of grabbing the JSON should be relevant.
[1] https://github.com/hortonworks-gallery/nifi-templates/blob/master/templates/Convert_To_Avro_From_CSV_and_JSON.xml
I am trying to achieve something like what is mentioned in this link, but I don't know where to write the parsing code. I tried writing it in a new method and adding that method to my adapter.xml, but nothing happens. I also don't know how to log in IBM Worklight; I used WL.logger(some) but it throws the error "Logger cannot be called on an object".
Any help would be appreciated. Thanks in advance!
In most cases you don't need to manually parse responses, because the Worklight adapter framework will do this for you. Anything you retrieve via the WL.Server.invokeHttp API will be parsed to JSON automatically unless you specify returnedContentType: "plain". In case you DO need to manually parse JSON, you can use the JSON.parse() and JSON.stringify() APIs.
Server-side logging is achieved via WL.Logger.debug/error/info etc. You can get more info about it here.
You don't have to parse JSON data yourself; there are JavaScript facilities to do that. Try JSON.parse(). You should learn how to write adapters and how to invoke them from clients. The Getting Started modules are a good place to start, specifically Module 4 in your case.
Is there a way to find the list of build machines programmatically? I need this as part of an automated run, so it would be very helpful if there is an existing remote API call that can provide it.
You don't need to parse the HTML; most of the Hudson pages can be turned into API calls by adding a URL suffix, e.g. make a GET call to:
http://hudson:8080/computer/api/json
Switch the json suffix for either xml or python if you prefer those formats over JSON.
If you use just the /api suffix, you'll get a short generic help page on the API.
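For example, a small Groovy script (assuming the server is reachable at http://hudson:8080 and allows anonymous read access) that lists the nodes from that endpoint:
import groovy.json.JsonSlurper

def api = new URL('http://hudson:8080/computer/api/json')
def data = new JsonSlurper().parse(api.newReader())
// the response contains a "computer" array with one entry per node
data.computer.each { node ->
    println "${node.displayName} offline=${node.offline}"
}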
Groovy script to get all computers:
import jenkins.model.Jenkins

def jenkins = Jenkins.instance
def computers = jenkins.computers
computers.each {
    // print each node's display name and host name
    println "${it.displayName} ${it.hostName}"
}
Look at http://hudson:8080/computer/