Jenkins parameterized job that only queues one build - hudson

Imagine a Jenkins job A which takes 1 minute to run, and job B which takes 5 minutes.
If we configure job A to trigger job B, while job B is running job A may run 5 times before B completes. However, Jenkins doesn't add 5 builds to job B's queue, which is great because otherwise speedy job A would be creating an ever-growing backlog of builds for poor slow job B.
However, now we want to have job A trigger B as a parameterized job, using the parameterized trigger plugin. Parameterized jobs do queue up a backlog, which means job A is happily creating a huge pile of builds for job B, which can't possibly keep up.
It does make sense to add a new parameterized build to the queue each time it's triggered, since the parameters may be different. Jenkins should not always assume that a new parameterized build renders previously queued ones unnecessary.
However, in our case we actually would like this. Job A builds and packages our application, then Job B deploys it to a production-like environment and runs a heavier set of integration tests. We also have a build C which deploys to another environment and does even more testing, so this is an escalating pattern for us.
We would like the queue for our parameterized job B to only keep the last build added to it; each new build would replace any job currently in the queue.
Is there any nice way to achieve this?

Add a "System Groovy Script" pre-build step to job B that checks for (newer) queued jobs of the same name, and bails out if found:
def name = build.properties.environment.JOB_NAME
def queue = jenkins.model.Jenkins.getInstance().getQueue().getItems()
if (queue.any { it.task.getName() == name }) {
    println "Newer " + name + " job(s) in queue, aborting"
    build.doStop()
} else {
    println "No newer " + name + " job(s) in queue, proceeding"
}

You could get rid of the Parameterized Trigger Plugin and instead use the traditional triggering. As you said, this would prevent job B's queue from piling up.
How to pass the parameters from A to B, then? Have job A print the parameters in its console output. In job B, to get these build parameters, examine the console output of the latest A build (with a Python script, perhaps?).
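The console-scraping idea could be sketched in Python like this. The `PARAM_` prefix is an invented convention (not a Jenkins feature): job A would print lines in that format, and job B would fetch A's console text over HTTP and parse them. A minimal sketch:

```python
import re

def parse_params(console_text):
    """Extract parameters that job A printed as PARAM_NAME=value lines.

    The PARAM_ prefix is an assumed convention; adjust the pattern to
    whatever format job A actually emits.
    """
    params = {}
    for line in console_text.splitlines():
        m = re.match(r'^PARAM_(\w+)=(.*)$', line)
        if m:
            params[m.group(1)] = m.group(2)
    return params

# In job B you would first fetch the latest console output, e.g. with
#   urllib.request.urlopen("http://jenkins/job/A/lastSuccessfulBuild/consoleText")
# (URL shape depends on your Jenkins setup). Demonstrated on a sample here:
sample = "Started by user foo\nPARAM_VERSION=1.2.3\nPARAM_TARGET=staging\nFinished: SUCCESS"
print(parse_params(sample))  # {'VERSION': '1.2.3', 'TARGET': 'staging'}
```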

Ron's solution worked for me. If you don't like having a bunch of cancelled builds in the build history, you can add the following system Groovy script to job A before it triggers job B:
import hudson.model.*

def q = jenkins.model.Jenkins.getInstance().getQueue()
def items = q.getItems()
for (i = 0; i < items.length; i++) {
    if (items[i].task.getName() == "JobB") {
        items[i].doCancelQueue()
    }
}

Here's one workaround:
Create a job A2B between jobs A and B
Add a build step in job A2B that determines whether B is running. To achieve that, check:
Determine if given job is currently running using Hudson/Jenkins API
Python API's is_queued_or_running()
Finally, trigger job B from A2B only if there are no B builds queued or running (carrying the parameters through)
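The A2B gate could be sketched in Python with the `jenkinsapi` client library, which provides `Job.is_queued_or_running()`; the server URL, job name, and parameter names below are all placeholders, not part of the original setup:

```python
def should_trigger(b_is_queued_or_running):
    # The A2B gate: only fire job B when it has no queued or in-flight build.
    return not b_is_queued_or_running

# In the real A2B build step (needs a live Jenkins; everything below is
# illustrative):
#
#   from jenkinsapi.jenkins import Jenkins
#   job_b = Jenkins("http://jenkins.example.com")["B"]
#   if should_trigger(job_b.is_queued_or_running()):
#       job_b.invoke(build_params={"VERSION": "1.2.3"})  # carry A's params through
#   else:
#       print("B already queued or running, skipping trigger")
```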

In case you're using Git, this is now supported by the "Combine Queued git hashes" option under Triggering / Parameters / Pass-through.
The first Git plugin version that should actually work with this is 1.1.27 (see JENKINS-15160).

Here's a more flexible option if you only care about a few parameters matching. This is especially helpful when a job is triggered externally (e.g. from GitHub or Stash) and some parameters don't need a separate build.
If the checked parameters match in both value and existence in both the current build and a queued build, the current build will be aborted and the description will show that a future build contains the same checked parameters (along with what they were).
It could be modified to cancel all other queued jobs except the last one if you don't want to have build history showing the aborted jobs.
checkedParams = [
    "PARAM1",
    "PARAM2",
    "PARAM3",
    "PARAM4",
]

def buildParams = null
def name = build.project.name
def queuedItems = jenkins.model.Jenkins.getInstance().getQueue().getItems()
yieldToQueuedItem = false

for (hudson.model.Queue.Item item : queuedItems.findAll { it.task.getName() == name }) {
    if (buildParams == null) {
        buildParams = [:]
        paramAction = build.getAction(hudson.model.ParametersAction.class)
        if (paramAction) {
            buildParams = paramAction.getParameters().collectEntries {
                [(it.getName()): it.getValue()]
            }
        }
    }
    itemParams = [:]
    paramAction = item.getAction(hudson.model.ParametersAction.class)
    if (paramAction) {
        itemParams = paramAction.getParameters().collectEntries {
            [(it.getName()): it.getValue()]
        }
    }
    equalParams = true
    for (String compareParam : checkedParams) {
        itemHasKey = itemParams.containsKey(compareParam)
        buildHasKey = buildParams.containsKey(compareParam)
        if (itemHasKey != buildHasKey || (itemHasKey && itemParams[compareParam] != buildParams[compareParam])) {
            equalParams = false
            break
        }
    }
    if (equalParams) {
        yieldToQueuedItem = true
        break
    }
}

if (yieldToQueuedItem) {
    out.println "Newer " + name + " job(s) in queue with matching checked parameters, aborting"
    build.description = "Yielded to future build with:"
    checkedParams.each {
        build.description += "<br>" + it + " = " + build.buildVariables[it]
    }
    build.doStop()
    return
} else {
    out.println "No newer " + name + " job(s) in queue with matching checked parameters, proceeding"
}

The following is based on Ron's solution, but with some fixes to make it work on my Jenkins 2, including removing a java.io.NotSerializableException and handling the fact that the format of getName() sometimes differs from that of JOB_NAME.
// Exception to distinguish abort due to newer jobs in queue
class NewerJobsException extends hudson.AbortException {
    public NewerJobsException(String message) { super(message); }
}

// Find the Jenkins job name from the URL (which is the most consistently
// named field in the task object).
// Known forms:
//   job/NAME/
//   job/NAME/98/
@NonCPS
def name_from_url(url)
{
    url = url.substring(url.indexOf("/") + 1);
    url = url.substring(0, url.indexOf("/"));
    return url
}

// Depending on installed plugins, multiple jobs may be queued. If that is the
// case, skip this one.
// http://stackoverflow.com/questions/26845003/how-to-execute-only-the-most-recent-queued-job-in-jenkins
// http://stackoverflow.com/questions/8974170/jenkins-parameterized-job-that-only-queues-one-build
@NonCPS
def check_queue()
{
    def name = env.JOB_NAME
    def queue = jenkins.model.Jenkins.getInstance().getQueue().getItems()
    if (queue.any { name_from_url(it.task.getUrl()) == name }) {
        print "Newer ${name} job(s) in queue, aborting"
        throw new NewerJobsException("Newer ${name} job(s) in queue, aborting")
    } else {
        print "No newer ${name} job(s) in queue, proceeding"
    }
}

Related

Undefine all shared variables on the current node

I have defined multiple shared tables on the current node. Is there any way or built-in function to undefine them all at once?
Try the following user-defined functions to see if they meet your requirements:
def existsShareVariable(varName){
    return objs(true).name.find(varName) >= 0
}

def ClearAllSharedTables(){
    sharedTables = exec name from objs(true) where form="TABLE", shared=true
    for(sharedTable in sharedTables){
        print("Undef Shared Table: " + sharedTable)
        try{
            undef(sharedTable, SHARED)
        }
        catch(ex){
            print(ex)
        }
    }
    print("All shared tables have been cleared!")
}

how to keep all created IDs in Postman Environment

I am trying to automate API requests using Postman. So first, in the POST request, I wrote a test to store the created ID in the environment, which passes correctly:
var jsondata = JSON.parse(responseBody);
tests["Status code is 201"] = responseCode.code === 201;
postman.setEnvironmentVariable("BrandID", jsondata.brand_id);
Then in the DELETE request I use the environment variable in my URL, like /{{BrandID}}, but it deletes only the last record. So my guess is that the environment keeps only the last ID? What must I do to keep all IDs?
Each time you call your POST request, you overwrite your environment variable, so you can only delete the last one.
In order to process multiple IDs, you should build up an array by appending the new ID on each call.
You may proceed as follows in your POST request:
var my_array = postman.getEnvironmentVariable("BrandID");
if (my_array === undefined) // first time
{
    postman.setEnvironmentVariable("BrandID", jsondata.brand_id); // create the env var with the first brand id
}
else
{
    postman.setEnvironmentVariable("BrandID", my_array + "," + jsondata.brand_id); // append the next brand id
}
You should end up with an environment variable like BrandID = "brand_id1,brand_id2,..."
Then when you delete it, you delete the complete array (but that depends on your delete API)
I guess there may be cleaner ways to do so, but I'm not an expert in Postman nor Javascript, though that should work (at least for the environment variable creation).
Alexandre

Gradle task prototyping

I'm new to Gradle and Groovy and I'm trying to define a task that executes a SQL script in MySQL. Here's what I have so far:
task executeSomeSQL(type: Exec) {
    def pwd = getMySQLPwd()
    workingDir './'
    commandLine 'mysql', '-uroot', "--password=$pwd", 'dbname'
    standardInput file('database/script.sql').newInputStream()
}
Now this works as expected, however, I'd like to be able to define many such tasks that only differ in the input script that they take. In my mind, I need a way to prototype the SQL execution task with common properties (getting the password, setting the working directory and setting the command) and then define each task with its own filename. In a sort of pseudocode:
// define a function or closure? this doesn't work because the
// three task specific properties aren't defined
def sqlExecutorDef(filename){
    def pwd = getMySQLPwd()
    workingDir './'
    commandLine 'mysql', '-uroot', "--password=$pwd", 'dbname'
    standardInput file(filename).newInputStream()
}
// this is truly pseudocode: somehow the task should be created
// using a function that defines it
task executeSomeSQL(type: Exec) = sqlExecutorDef('database/script.sql')
In this way, I could define many tasks, one per SQL script that needs to be executed, with a one-liner.
EDIT: this is probably trivial for somebody with more Groovy experience. I apologize!
Though this may not be standard Gradle, dynamic tasks might help out here. The example below uses a list both as the task names and (with some massaging) the SQL file names. (It simply prints to the console, but executing the SQL should be straightforward given your original work.)
def username = "admin"
def password = "swordfish"
def taskNames = ["abc_sql", "def_sql", "ijk_sql"]
taskNames.each { taskName ->
    def sqlFile = taskName.replaceAll("_", ".")
    task "${taskName}" (type: Exec) {
        workingDir "."
        commandLine "echo", "run SQL script '${sqlFile}' as ${username} / ${password}"
    }
}
gradle tasks gives:
[snip]
Other tasks
-----------
abc_sql
def_sql
ijk_sql
example run of 'abc_sql':
bash-3.2$ gradle abc_sql
:abc_sql
run SQL script 'abc.sql' as admin / swordfish

EM-HTTP-REQUEST and SINATRA - combining/merging multiple api request into one result?

This is my first time dealing with Sinatra and parallel em-http-request calls, and I don't know how to merge the results into one, or when to call EventMachine.stop. Consider this:
get '/data/:query' do
  content_type :json
  EventMachine.run do
    http1 = EventMachine::HttpRequest.new('v1/').get
    http2 = EventMachine::HttpRequest.new('v2/').get
    http1.errback { p 'Uh oh nooooooo'; EventMachine.stop }
    http1.callback {
      # do some operation on http1.response
      Crack::XML.parse(http1.response).to_json
      EventMachine.stop
    }
    http2.callback {
      # do some operation on http2.response
      Crack::XML.parse(http2.response).to_json
      EventMachine.stop
    }
  end
  # somehow merge
  return merged_result
end
The above example has a race condition: you stop the event loop as soon as one of the requests has finished. To address this, you can use the built-in "Multi" interface:
EventMachine.run do
  multi = EventMachine::MultiRequest.new
  multi.add :google, EventMachine::HttpRequest.new('http://www.google.com/').get
  multi.add :yahoo, EventMachine::HttpRequest.new('http://www.yahoo.com/').get
  multi.callback do
    puts multi.responses[:callback]
    puts multi.responses[:errback]
    EventMachine.stop
  end
end
See the em-http wiki page for more: https://github.com/igrigorik/em-http-request/wiki/Parallel-Requests#synchronizing-with-multi-interface

Grails: can I make a validator apply to create only (not update/edit)

I have a domain class, one of whose fields must hold a date no earlier than the day the instance is created.
class myClass {
    Date startDate
    String iAmGonnaChangeThisInSeveralDays

    static constraints = {
        iAmGonnaChangeThisInSeveralDays(nullable: true)
        startDate(validator: {
            def now = new Date()
            def roundedDay = DateUtils.round(now, Calendar.DATE)
            def checkAgainst
            if (roundedDay > now) {
                Calendar cal = Calendar.getInstance()
                cal.setTime(roundedDay)
                cal.add(Calendar.DAY_OF_YEAR, -1) // <--
                checkAgainst = cal.getTime()
            }
            else checkAgainst = roundedDay
            return (it >= checkAgainst)
        })
    }
}
So several days later, when I change only the string and call save, the save fails because the validator rechecks the date, which is now in the past. Can I set the validator to fire only on create, or is there some way to detect whether we are creating or editing/updating?
@Rob H, I am not entirely sure how to use your answer. I have the following code causing this error:
myInstance.iAmGonnaChangeThisInSeveralDays = "nachos"
myInstance.save()
if (myInstance.hasErrors()) {
    println "This keeps happening because of the stupid date problem"
}
You can check if the id is set as an indicator of whether it's a new non-persistent instance or an existing persistent instance:
startDate(validator: { date, obj ->
    if (obj.id) {
        // don't check existing instances
        return
    }
    def now = new Date()
    ...
})
One option might be to specify which properties you want to be validated. From the documentation:
The validate method accepts an optional List argument which may contain the names of the properties that should be validated. When a List is passed to the validate method, only the properties defined in the List will be validated.
Example:
// when saving for the first time:
myInstance.startDate = new Date()
if (myInstance.validate() && myInstance.save()) { ... }

// when updating later
myInstance.iAmGonnaChangeThisInSeveralDays = 'New Value'
myInstance.validate(['iAmGonnaChangeThisInSeveralDays'])
if (myInstance.hasErrors() || !myInstance.save(validate: false)) {
    // handle errors
} else {
    // handle success
}
This feels a bit hacky, since you're bypassing some built-in Grails goodness. You'll want to be cautious that you aren't bypassing any necessary validation on the domain that would normally happen if you were to just call save(). I'd be interested in seeing others' solutions if there are more elegant ones.
Note: I really don't recommend using save(validate: false) if you can avoid it. It's bound to cause some unforeseen negative consequence down the road unless you're very careful about how you use it. If you can find an alternative, by all means use it instead.