How to process a GitHub webhook payload in Jenkins?

I'm currently triggering my Jenkins builds through a GitHub webhook. How would I parse the JSON payload? If I try to parameterize my build and use the $payload variable, the GitHub webhook fails with the following error:
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1"/>
<title>Error 400 This page expects a form submission</title>
</head>
<body><h2>HTTP ERROR 400</h2>
<p>Problem accessing /job/Jumph-CycleTest/build. Reason:
<pre> This page expects a form submission</pre></p><hr /><i><small>Powered by Jetty://</small></i>
</body>
</html>
How can I get my GitHub webhook to work with a parameterized Jenkins build, and how could I then parse the webhook payload to use certain lines, such as the username of the committer, as conditionals in the build?

There are a few tricks to getting this to work, and I found the (now defunct) chloky.com blog posts helpful for most of it. Since it sounds like your webhook is already communicating with your Jenkins instance, I'll skip over those steps for now; if you want more detail, scroll past the end of my answer for the content I was able to salvage from chloky.com. I don't know the original author, and the information might be out of date, but I did find it helpful.
So to summarize, you can do the following to deal with the payload:
Set up a string parameter called "payload" in your Jenkins job. If you plan to run the build manually, it may be worth giving it a default JSON document at some point, but you don't need one right now. The parameter name appears to be case-sensitive (I'm using Linux, so that's no surprise...).
Set up the webhook in github to use the buildWithParameters endpoint instead of the build endpoint, i.e.
http://<<yourserver>>/job/<<yourjob>>/buildWithParameters?token=<<yourtoken>>
Configure your webhook to use application/x-www-form-urlencoded instead of application/json. The form-encoded approach packs the JSON data into a form variable called "payload", which is presumably how Jenkins can assign it to an environment variable. The application/json approach just POSTs raw JSON, which does not seem to be mappable to anything (I couldn't get it to work). You can see the difference by pointing your webhook at something like RequestBin and inspecting the results, or by simulating the two styles as shown below.
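As a rough illustration (the server, job name and token here are placeholders), the two delivery styles boil down to the following two requests; only the first one can populate a build parameter:
# Form-encoded delivery: the JSON travels in a form field named "payload",
# which Jenkins binds to the "payload" build parameter.
curl -X POST 'http://yourserver/job/yourjob/buildWithParameters?token=yourtoken' \
  --data-urlencode 'payload={"pusher":{"name":"someuser"}}'
# Raw JSON delivery: the body is the bare document, so there is no form
# field to bind - hence the "expects a form submission" error.
curl -X POST 'http://yourserver/job/yourjob/buildWithParameters?token=yourtoken' \
  -H 'Content-Type: application/json' \
  -d '{"pusher":{"name":"someuser"}}'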
At this point, you should get your $payload variable when you kick off the build. To parse the JSON, I highly recommend installing jq on your Jenkins server and trying out some of its parsing syntax. jq is especially nice because it's cross-platform.
From here, just parse what you need from the JSON into other environment variables. Combined with conditional build steps, this could give you a lot of flexibility.
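For example, a minimal sketch of a build step (assuming the push-event payload shape shown in the salvaged posts below; the username check is hypothetical):
#!/bin/bash
# $payload is the string parameter populated by the webhook.
# Pull the committer's username out of the first commit with jq.
committer=$(echo "$payload" | jq -r '.commits[0].committer.username')
echo "Commit pushed by: $committer"
# Conditional build step keyed on the committer (placeholder username).
if [ "$committer" = "someuser" ]; then
  echo "Running the extra checks reserved for someuser"
fi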
Hope this helps!
EDIT: here's what I could grab from the original blog posts at http://chloky.com/tag/jenkins/, which has been dead for a while.
Hopefully this content is also useful for someone.
Post #1 - July 2012
Github provides a nice way to fire off notifications to a CI system like jenkins whenever a commit is made against a repository. This is really useful for kicking off build jobs in jenkins to test the commits that were just made on the repo. You simply need to go to the administration section of the repository, click on service hooks on the left, click ‘webhook URLs’ at the top of the list, and then enter the URL of the webhook that jenkins is expecting (look at this jenkins plugin for setting up jenkins to receive these hooks from github).
Recently though, I was looking for a way to make a webhook fire when a pull request is made against a repo, rather than when a commit is made to the repo. This is so that we could have jenkins run a bunch of tests on the pull request, before deciding whether to merge the pull request in – useful for when you have a lot of developers working on their own forks and regularly submitting pull requests to the main repo.
It turns out that this is not as obvious as one would hope, and requires a bit of messing about with the github API.
By default, when you configure a github webhook, it is configured to only fire when a commit is made against a repo. There is no easy way to see, or change, this in the github web interface when you set up the webhook. In order to manipulate the webhook in any way, you need to use the API.
To make changes on a repo via the github API, we need to authorize ourselves. We’re going to use curl, so if we wanted to we could pass our username and password each time, like this:
# curl https://api.github.com/users/mancdaz --user 'mancdaz'
Enter host password for user 'mancdaz':
Or, and this is a much better option if you want to script any of this stuff, we can grab an oauth token and use it in subsequent requests to save having to keep entering our password. This is what we’re going to do in our example. First we need to create an oauth authorization and grab the token:
curl https://api.github.com/authorizations --user "mancdaz" \
--data '{"scopes":["repo"]}' -X POST
You will be returned something like the following:
{
"app":{
"name":"GitHub API",
"url":"http://developer.github.com/v3/oauth/#oauth-authorizations-api"
},
"token":"b2067d190ab94698a592878075d59bb13e4f5e96",
"scopes":[
"repo"
],
"created_at":"2012-07-12T12:55:26Z",
"updated_at":"2012-07-12T12:55:26Z",
"note_url":null,
"note":null,
"id":498182,
"url":"https://api.github.com/authorizations/498182"
}
Now we can use this token in subsequent requests for manipulating our github account via the API. So let’s query our repo and find the webhook we set up in the web interface earlier:
# curl https://api.github.com/repos/mancdaz/mygithubrepo/hooks?access_token=b2067d190ab94698a592878075d59bb13e4f5e96
[
{
"created_at": "2012-07-12T11:18:16Z",
"updated_at": "2012-07-12T11:18:16Z",
"events": [
"push"
],
"last_response": {
"status": "unused",
"message": null,
"code": null
},
"name": "web",
"config": {
"insecure_ssl": "1",
"content_type": "form",
"url": "http://jenkins-server.chloky.com/post-hook"
},
"id": 341673,
"active": true,
"url": "https://api.github.com/repos/mancdaz/mygithubrepo/hooks/341673"
}
]
Note the important bit from that json output:
"events": [
"push"
]
This basically says that this webhook will only trigger when a commit (push) is made to the repo. The github API documentation describes numerous different event types that can be added to this list – for our purposes we want to add pull_request, and this is how we do it (note that we get the id of the webhook from the json output above; if you have multiple hooks defined, your output will contain all of them, so be sure to get the right ID):
# curl https://api.github.com/repos/mancdaz/mygithubrepo/hooks/341673?access_token=b2067d190ab94698a592878075d59bb13e4f5e96 -X PATCH --data '{"events": ["push", "pull_request"]}'
{
"created_at": "2012-07-12T11:18:16Z",
"updated_at": "2012-07-12T16:03:21Z",
"last_response": {
"status": "unused",
"message": null,
"code": null
},
"events": [
"push",
"pull_request"
],
"name": "web",
"config": {
"insecure_ssl": "1",
"content_type": "form",
"url": "http://jenkins-server.chloky.com/post-hook"
},
"id": 341673,
"active": true,
"url": "https://api.github.com/repos/mancdaz/mygithubrepo/hooks/341673"
}
See!
"events": [
"push",
"pull_request"
],
This webhook will now trigger whenever either a commit OR a pull request is made against our repo. Exactly what you do in jenkins with this webhook is up to you. We use it to kick off a bunch of integration tests in jenkins to test the proposed patch, and then actually merge and close the pull request automatically (again using the API). Pretty sweet.
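The merge-and-close step itself isn't shown in these posts, but as a rough sketch it can be driven with the same token through GitHub's pull request merge endpoint (the pull request number, 42, is a placeholder):
# Merge pull request 42 on the repo once the integration tests pass.
curl "https://api.github.com/repos/mancdaz/mygithubrepo/pulls/42/merge?access_token=b2067d190ab94698a592878075d59bb13e4f5e96" \
  -X PUT --data '{"commit_message": "merged automatically by jenkins"}'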
Post #2 - September 2012
In an earlier post, I talked about configuring the github webhook to fire on a pull request, rather than just a commit. As mentioned, there are many events that happen on a github repo, and as per the github documentation, a lot of these can be used to trigger the webhook.
Regardless of what event you decide to trigger on, when the webhook fires from github, it essentially makes a POST to the URL configured in the webhook, including a json payload in the body. The json payload contains various details about the event that caused the webhook to fire. An example payload that fired on a simple commit can be seen here:
payload
{
"after":"c04a2b2af96a5331bbee0f11fe12965902f5f571",
"before":"78d414a69db29cdd790659924eb9b27baac67f60",
"commits":[
{
"added":[
"afile"
],
"author":{
"email":"myemailaddress#mydomain.com",
"name":"Darren Birkett",
"username":"mancdaz"
},
"committer":{
"email":"myemailaddress#mydomain.com",
"name":"Darren Birkett",
"username":"mancdaz"
},
"distinct":true,
"id":"c04a2b2af96a5331bbee0f11fe12965902f5f571",
"message":"adding afile",
"modified":[
],
"removed":[
],
"timestamp":"2012-09-03T02:35:59-07:00",
"url":"https://github.com/mancdaz/mygithubrepo/commit/c04a2b2af96a5331bbee0f11fe12965902f5f571"
}
],
"compare":"https://github.com/mancdaz/mygithubrepo/compare/78d414a69db2...c04a2b2af96a",
"created":false,
"deleted":false,
"forced":false,
"head_commit":{
"added":[
"afile"
],
"author":{
"email":"myemailaddress#mydomain.com",
"name":"Darren Birkett",
"username":"mancdaz"
},
"committer":{
"email":"myemailaddress#mydomain.com",
"name":"Darren Birkett",
"username":"mancdaz"
},
"distinct":true,
"id":"c04a2b2af96a5331bbee0f11fe12965902f5f571",
"message":"adding afile",
"modified":[
],
"removed":[
],
"timestamp":"2012-09-03T02:35:59-07:00",
"url":"https://github.com/mancdaz/mygithubrepo/commit/c04a2b2af96a5331bbee0f11fe12965902f5f571"
},
"pusher":{
"email":"myemailaddress#mydomain.com",
"name":"mancdaz"
},
"ref":"refs/heads/master",
"repository":{
"created_at":"2012-07-12T04:17:51-07:00",
"description":"",
"fork":false,
"forks":1,
"has_downloads":true,
"has_issues":true,
"has_wiki":true,
"name":"mygithubrepo",
"open_issues":0,
"owner":{
"email":"myemailaddress#mydomain.com",
"name":"mancdaz"
},
"private":false,
"pushed_at":"2012-09-03T02:36:06-07:00",
"size":124,
"stargazers":1,
"url":"https://github.com/mancdaz/mygithubrepo",
"watchers":1
}
}
This entire payload gets passed in the POST request as a single parameter, with the imaginative title payload. It contains a ton of information about the event that just happened, all or any of which can be used by jenkins when we build jobs after the trigger. In order to use this payload in Jenkins, we have a couple of options. I discuss one below.
Getting the $payload
In jenkins, when creating a new build job, we have the option of specifying the names of parameters that we expect to pass to the job in the POST that triggers the build. In this case, we would pass a single parameter payload, as seen here:
[Screenshot: passing parameters to a jenkins build job]
Further down in the job configuration, we can specify that we would like to be able to trigger the build remotely (i.e. that we want to allow github to trigger the build by posting to our URL with the payload). Then, when we set up the webhook in our github repo (as described in the first post), we give it the URL that jenkins tells us to use:
http://jenkins-server.chloky.com:8080/job/mytestbuild/buildWithParameters?token=asecuretoken
Now, when I built my new job in jenkins, for the purposes of this test I simply told it to echo out the contents of the 'payload' parameter (which is available in parameterized builds as a shell variable of the same name), using a simple script:
#!/bin/bash
echo "the build worked! The payload is $payload"
Now to test the whole thing we simply have to make a commit to our repo, and then pop over to jenkins to look at the job that was triggered:
mancdaz@chloky$ (git::master) touch myfile
mancdaz@chloky$ (git::master) git add myfile
mancdaz@chloky$ (git::master) git commit -m 'added my file'
[master 4810490] added my file
0 files changed, 0 insertions(+), 0 deletions(-)
create mode 100644 myfile
mancdaz@chloky$ (git::master) git push
Counting objects: 3, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (2/2), done.
Writing objects: 100% (2/2), 232 bytes, done.
Total 2 (delta 1), reused 0 (delta 0)
To git@github.com:mancdaz/mygithubrepo.git
c7ecafa..4810490 master -> master
And over in our jenkins server, we can look at the console output of the job that was triggered, and lo and behold there is our ‘payload’ contained in the $payload variable and available for us to consume:
So great, all the info about our github event is here and fully available in our jenkins job! True enough, it's in a big json blob, but with a bit of crafty bash you should be good to go.
Of course, this example used a simple commit to demonstrate the principles of getting at the payload inside jenkins. As we discussed in the earlier post, a commit is one of many events on a repo that can trigger a webhook. What you do inside jenkins once a build is triggered is up to you, but the real fun comes when you start interacting with github to take further actions on the repo (posting comments, merging pull requests, rejecting commits etc.) based on the results of the build jobs that the initial event kicked off.
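As a sketch (not from the original posts), reporting the build result back onto the tested commit via GitHub's statuses API might look like this; the token and commit SHA placeholders would come from your own setup:
# Mark the tested commit as good so the pull request shows a green status.
curl "https://api.github.com/repos/mancdaz/mygithubrepo/statuses/$GIT_COMMIT?access_token=$TOKEN" \
  -X POST --data '{"state": "success", "description": "jenkins tests passed"}'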
Look out for a subsequent post where I tie it all together and show you how to process, run tests for, and finally merge a pull request if successful – all automatically inside jenkins. Automation is fun!

There is a Generic Webhook Trigger plugin that can contribute values from the post content to the build.
If the post content is:
{
"app":{
"name":"GitHub API",
"url":"http://developer.github.com/v3/oauth/#oauth-authorizations-api"
}
}
You can configure it to resolve variables from the post content, giving each variable a name and a JSONPath expression (for example app_name with $.app.name):
And when triggering with some post content:
curl -v -H "Content-Type: application/json" -X POST -d '{ "app":{ "name":"GitHub API", "url":"http://developer.github.com/v3/oauth/" }}' http://localhost:8080/jenkins/generic-webhook-trigger/invoke?token=sometoken
It will resolve the variables and make them available in the build job:
{
"status":"ok",
"data":{
"triggerResults":{
"free":{
"id":2,
"regexpFilterExpression":"",
"regexpFilterText":"",
"resolvedVariables":{
"app_name":"GitHub API",
"everything_app_url":"http://developer.github.com/v3/oauth/",
"everything":"{\"app\":{\"name\":\"GitHub API\",\"url\":\"http://developer.github.com/v3/oauth/\"}}",
"everything_app_name":"GitHub API"
},
"searchName":"",
"searchUrl":"",
"triggered":true,
"url":"queue/item/2/"
}
}
}
}
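Each resolved variable is then available in the build like any other parameter; for example, in a shell build step:
# Hypothetical build step consuming the variables resolved above.
echo "Triggered for app: $app_name"
echo "Full post content: $everything"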

Related

Subscribe to blocks in Solana (JSON RPC API)

I have been reading old blocks from the Solana JSON RPC API (using Python), but now I am trying to subscribe to block production on the Solana network (to get live updates).
I tried to pull updates through the RPC API using
{"jsonrpc": "2.0", "id": "1", "method": "blockSubscribe", "params": ["all"]}
This doesn't work; the response is: 'code': -32601, 'message': 'Method not found'
Looking at the docs.solana.com info, it states that:
This subscription is unstable and only available if the validator was
started with the --rpc-pubsub-enable-block-subscription flag. The
format of this subscription may change in the future
I assume this means I need to run solana-test-validator --rpc-pubsub-enable-block-subscription, but this just returns:
error: Found argument '--rpc-pubsub-enable-block-subscription' which wasn't expected, or isn't valid in this context
Did you mean --rpc-port?
USAGE:
solana-test-validator --ledger <DIR> --rpc-port <PORT>
I can't seem to find any more information on how to subscribe to blocks using the RPC.
Any ideas or help with what I'm doing wrong?
Thanks in advance
You are correct that the validator has to be run with --rpc-pubsub-enable-block-subscription. For mainnet-beta usage, it is recommended to either find a private RPC with this enabled or run your own. Please note, though, that the method is currently marked as unstable.
It looks like rpc-pubsub-enable-block-subscription is not available on the test validator. You can find the full command list here.
This subscription is unstable and only available if the validator was started with the --rpc-pubsub-enable-block-subscription flag. The format of this subscription may change in the future
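If you do get access to a node started with that flag, note that blockSubscribe is a pubsub method, so it goes over the websocket port rather than the plain HTTP RPC port. A sketch with wscat (the host is a placeholder; 8900 is the default pubsub port):
# Connect to the pubsub websocket, then type the subscription request.
wscat -c ws://my-rpc-node:8900
> {"jsonrpc": "2.0", "id": "1", "method": "blockSubscribe", "params": ["all"]}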

What is a useful Azure IoT Hub JSON message structure for consumption in Time Series Insights

The title sounds quite comprehensive, but my baseline question is quite simple, I guess.
Context
In Azure, I have an IoT hub, which I am sending messages to. I use a modified version of one of the samples from the Azure IoT SDK for Python.
Sending works fine. However, instead of a string, I send a JSON structure.
When I watch the events flowing into the IoT hub, using the Cloud shell, it looks like this:
PS /home/marcel> az iot hub monitor-events --hub-name weathertestiothub
This extension 'azure-cli-iot-ext' is deprecated and scheduled for removal. Please remove and add 'azure-iot' instead.
Starting event monitor, use ctrl-c to stop...
{
"event": {
"origin": "raspberrypi-zero-wh",
"payload": "{ \"timestamp\": \"1608643863720\", \"locationDescription\": \"Attic\", \"temperature\": \"21.941\", \"relhumidity\": \"71.602\" }"
}
}
Issue
The data seems fine, except that the payload looks strange here. BUT, the payload is literally what I send from the device, using the SDK sample.
Is this the correct way to do it? In the end, I have a very hard time actually getting the data into the Time Series Insights model, so I guess my structure is to blame.
Question
What is a recommended JSON data structure to send to the IoT hub for later use?
You should add the following 2 lines to your message in your python SDK sample:
msg.content_encoding = "utf-8"
msg.content_type = "application/json"
This should resolve your formatting concern.
We've also updated our samples to reflect this: https://github.com/Azure/azure-iot-sdk-python/blob/master/azure-iot-device/samples/sync-samples/send_message.py
I ended up using the tip by @elhorton, but it was not the key change. Nonetheless, the formatting in the Azure Shell monitor now looks much better:
"event": {
"origin": "raspberrypi-zero-wh",
"payload": {
"temperature": 21.543947753906245,
"humidity": 69.22964477539062,
"locationDescription": "Attic"
}
}
The key was:
include the message source time in ISO format
from datetime import datetime
timestampIso = datetime.now().isoformat()
message.custom_properties["iothub-creation-time-utc"] = timestampIso
Using the locationDescription as the Time Series ID Property; see https://learn.microsoft.com/en-us/azure/time-series-insights/how-to-select-tsid. (Maybe I could also have taken the iothub-connection-device-id, but I did not test that alone specifically.)
I guess using "iothub-connection-device-id" would make "raspberrypi-zero-wh" the name of the time series instance. I agree with your choice of "locationDescription" as the TSID; Attic then becomes the time series instance name, and temperature and humidity will be your variables.

Unable to receive Forge webhooks, or unable to get them to fire

I'm setting up an automated system to convert and visualise 3D models through the Forge APIs. The actual conversion and visualisation is pretty straightforward, but keeping track of the process is not as simple.
Autodesk recommends using webhooks, but documentation of this is quite sparse.
My main problem is that I'm unable to debug the webhooks: I get no indication as to whether a hook has fired or not.
I've read all of the similar questions here on stack overflow, in the FAQ and in the documentation (among others: Why is webhook workflow not taken into consideration when creating modelderivative job?).
I'm processing a conversion for a model with 'modelId' and want to listen for the 'extraction.updated' event.
I'm registering a hook with a POST like this:
{
"callbackUrl":"https://my-service.com/callbacks/modelId",
"scope":{
"workflow":"modelId"
}
}
My job is registered like this:
{
"input":{
"urn":"{theUrnForTheModel}"
},
"output":{
"formats":[
{
"type":"svf",
"views":[
"3d",
"2d"
]
}
]
},
"misc":{
"workflow":"modelId"
}
}
From what I can see, the hooks never fire. I don't get any errors or indications that something failed on my server.
Am I required to post hookAttribute when creating the hook? This is documented as not mandatory. Am I required to have a fixed endpoint on my end, or is it OK to include the specific model id in the URL?
A few points to check:
What's the response on POST hook? Should return 201
Which verb does your /callbacks/modelId accepts? Should accept POST
Have you tried extraction.finished event?
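On the first point, a quick way to check is to register the hook with curl and inspect the status code. A minimal sketch (the token and callback URL are placeholders, and this assumes the Model Derivative webhook system):
# Register for extraction.finished; 201 Created means it was registered.
curl -i -X POST \
  'https://developer.api.autodesk.com/webhooks/v1/systems/derivative/events/extraction.finished/hooks' \
  -H "Authorization: Bearer $FORGE_TOKEN" \
  -H 'Content-Type: application/json' \
  -d '{"callbackUrl": "https://my-service.com/callbacks/modelId", "scope": {"workflow": "modelId"}}'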

HTTP ReST: update large collections: better approach than JSON PATCH?

I am designing a web service to regularly receive updates to lists. At this point, a list can still be modeled as a single entity (/lists/myList) or an actual collection with many resources (/lists/myList/entries/<ID>). The lists are large (millions of entries) and the updates are small (often less than 10 changes).
The client will get web service URLs and lists to distribute, e.g.:
http://hostA/service/lists: list1, list2
http://hostB/service/lists: list2, list3
http://hostC/service/lists: list1, list3
It will then push lists and updates as configured. It is likely but undetermined whether there is some database behind the web service URLs.
I have been researching, and it seems an HTTP PATCH using the JSON Patch format is the best approach.
Context and examples:
Each list has an identifying name, a priority and millions of entries. Each entry has an ID (determined by the client) and several optional attributes. Example to create a list "requiredItems" with priority 1 and two list entries:
PUT /lists/requiredItems
Content-Type: application/json
{
"priority": 1,
"entries": {
"1": {
"color": "red",
"validUntil": "2016-06-29T08:45:00Z"
},
"2": {
"country": "US"
}
}
}
For updates, the client would first need to know what the list looks like now on the server. For this I would add a property "revision" to the list entity.
Then, I would query this attribute:
GET /lists/requiredItems?property=revision
Then the client would see what needs to change between the revision on the server and the latest revision known by the client and compose a JSON patch. Example:
PATCH /list/requiredItems
Content-Type: application/json-patch+json
[
{ "op": "test", "path": "/revision", "value": 3 },
{ "op": "add", "path": "/entries/3", "value": { "color": "blue" } },
{ "op": "remove", "path": "/entries/1" },
{ "op": "remove", "path": "/entries/2/country" },
{ "op": "add", "path": "/entries/2/color", "value": "green" },
{ "op": "replace", "path": "/revision", "value": 10 }
]
Questions:
This approach has the drawback of slightly less client support due to the not-often-used HTTP verb PATCH. Is there a more compatible approach without sacrificing HTTP compatibility (idempotency et cetera)?
Modelling the individual list entries as separate resources and using PUT and DELETE (perhaps with ETag and/or If-Match) seems an option (PUT /lists/requiredItems/entries/3, DELETE /lists/requiredItems/entries/1, PUT /lists/requiredItems/revision), but how would I make sure all those operations are applied when the network drops in the middle of an update chain? Is an HTTP PATCH allowed to work on multiple resources?
Is there a better way to 'version' the lists, perhaps implicitly also improving how they are updated? Note that the client determines the revision number.
Is it correct to query the revision number with GET /lists/requiredItems?property=revision? Should it be a separate resource like /lists/requiredItems/revision? If it should be a separate resource, how would I update it atomically (i.e. the list and revision are both updated or both not updated)?
Would it work in JSON patch to first test the revision value to be 3 and then update it to 10 in the same patch?
This approach has the drawback of slightly less client support due to the not-often-used HTTP verb PATCH.
As far as I can tell, PATCH is really only appropriate if your server is acting like a dumb document store, where the action is literally "please update your copy of the document according to the following description".
So if your resource really just is a JSON document that describes a list with millions of entries, then JSON-Patch is a great answer.
But if you are expecting that the patch will, as a side effect, update an entity in your domain, then I'm suspicious.
Is a HTTP PATCH allowed to work on multiple resources?
RFC 5789
The PATCH method affects the resource identified by the Request-URI, and it also MAY have side effects on other resources
I'm not keen on querying the revision number; it doesn't seem to have any clear advantage over an ETag/If-Match approach. Some obvious disadvantages: the caches between you and the client don't know that the list and the version number are related; a cache will happily tell a client that version 12 of the list is version 7, or vice versa.
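A sketch of that flow with placeholder values: the client learns the current ETag from a GET (or HEAD), then makes the patch conditional on it:
# Read the list; the response headers carry its current ETag.
curl -i http://example.com/lists/requiredItems
# ETag: "abc123"

# Apply the patch only if the list is unchanged; a stale ETag gets a
# 412 Precondition Failed, and the client re-syncs before retrying.
curl -X PATCH http://example.com/lists/requiredItems \
  -H 'If-Match: "abc123"' \
  -H 'Content-Type: application/json-patch+json' \
  --data '[{"op": "remove", "path": "/entries/1"}]'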
Answering my own question. My first bullet point may be opinion-based and, as has been pointed out, I've asked many questions in one post. Nevertheless, here's a summary of what was answered by others (VoiceOfUnreason) and my own additional research:
ETags are HTTP's resource 'hashes'. They can be combined with If-Match headers to have a versioning system. However, ETag-headers are normally not used to declare the ETag of a resource that is being created (PUT) or updated (POST/PATCH). The server storing the resource usually determines the ETag. I've not found anything explicitly forbidding this, but many implementations may assume that the server determines the ETag and get confused when it is provided with PUT or PATCH.
A separate revision resource is a valid alternative to ETags for versioning. This resource must be updated at the same time as the resource it is the revision of.
It is not semantically enforceable on a HTTP level to have commit/rollback transactions, unless by modelling the transaction itself as a ReST resource, which would make things much more complicated.
However, some properties of PATCH allow it to be used for this:
An HTTP PATCH must be atomic and can operate on multiple resources. RFC 5789:
The server MUST apply the entire set of changes atomically and never provide (e.g., in response to a GET during this operation) a partially modified representation. If the entire patch document cannot be successfully applied, then the server MUST NOT apply any of the changes.
The PATCH method affects the resource identified by the Request-URI, and it also MAY have side effects on other resources; i.e., new resources may be created, or existing ones modified, by the application of a PATCH. PATCH is neither safe nor idempotent
JSON Patch can consist of multiple operations on multiple resources, and either all must be applied or none, making it an implicit transaction. RFC 6902: "Operations are applied sequentially in the order they appear in the array."
Thus, the revision can be modeled as a separate resource and still be updated at the same time. Querying the current revision is a simple GET. Committing a transaction is a single PATCH request containing first a test of the revision, then the operations on the resource(s) and finally the operation to update the revision resource.
The server can still choose to publish the revision as ETag of the main resource.

printing list of failed tests in jenkins in post build script

I have a jenkins job that executes some junit tests, and a post-build step that calls a bash script if tests fail (it actually checks whether the log contains "Tests in error:"). The script then sends some text (making use of some of jenkins' environment variables) to a skype bot.
Now I want to have the bot say which tests actually failed.
I've been looking through the plugin and google but so far I've come up with no way to tell which tests fail.
Is there a way to pass the failed tests to the script, or to set environment variables after/during the build?
We have a jenkins job that executes unit tests, and has a post build step to "Publish JUnit test report".
That gives us a page that shows test results, including failed tests. One option for your script is to fetch that page and screen scrape the failed test names out of it (they're buried in a "showFailuresLink" function in our installation).
The page is also accessible via the REST API. I played with it a little and found a request like this:
http://host:7098/jenkins/job/ExecuteUnitTests/166/testReport/api/json?pretty=true&tree=suites[cases[className,name,age,status]]
gives a result like this:
{
"suites" : [
{
"cases" : [
{
"age" : 0,
"className" : "com.e.authentication.SimpleAuthenticationInfoTest",
"name" : "testGetCredentialKeys",
"status" : "PASSED"
},
{
"age" : 0,
"className" : "com.e.authentication.SimpleAuthenticationInfoTest",
"name" : "testGetExpirationInterval",
"status" : "PASSED"
},
...
Simple enough to parse out the entries where age is > 0 or status != PASSED. There may be a way to refine the request even further to get just what you want, but I'm out of time to play with it.
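For example, a sketch with jq against the same request (host and job name are from the example above):
# Print every non-passing case as ClassName.testName.
curl -s -g 'http://host:7098/jenkins/job/ExecuteUnitTests/166/testReport/api/json?tree=suites[cases[className,name,age,status]]' \
  | jq -r '.suites[].cases[] | select(.status != "PASSED") | .className + "." + .name'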