Build job only if the last build of another job was successful - Hudson

I have a job A that runs every hour. Job B runs after each commit to GitHub (an integration test job). Before running job A, how can I check whether the last build of job B was successful, and skip the build of A if the last build of B was unstable?
Thanks.

As far as I know, this is not possible with Hudson out of the box. Without any specifics about your job dependencies, it is also hard to design the right workaround.
Different Options:
If your job A runs fast, let it run anyway.
Since job A runs every hour, could you get away with running job B every hour instead? In that case, a successful job B build can trigger job A.
Have an external shell script that triggers job A every hour. Before triggering, check the status of the last build of job B via the remote API (http://<server>/job/<jobname>/api/xml?xpath=/mavenModuleSetBuild/result/text%28%29). For how to trigger a build, have a look at the "Trigger builds remotely" option in your job configuration.
This list is probably not exhaustive.
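The third option can be sketched in Python. This is a minimal sketch, not a definitive implementation: the server URL, job names, and trigger token below are hypothetical placeholders, and the real values come from your own Hudson setup.

```python
import urllib.request
import xml.etree.ElementTree as ET

# Hypothetical values -- replace with your own Hudson server and jobs.
HUDSON_URL = "http://hudson.example.com"
UPSTREAM_JOB = "job-B"
TRIGGER_JOB = "job-A"
TRIGGER_TOKEN = "secret-token"  # from the "Trigger builds remotely" option in job A

def should_trigger(result: str) -> bool:
    """Only trigger job A when job B's last build finished with SUCCESS
    (UNSTABLE and FAILURE both count as not successful)."""
    return result.strip().upper() == "SUCCESS"

def last_build_result(server: str, job: str) -> str:
    """Fetch the <result> element of the job's last build from the remote API."""
    url = f"{server}/job/{job}/lastBuild/api/xml?xpath=/*/result"
    with urllib.request.urlopen(url) as resp:
        return ET.fromstring(resp.read()).text or ""

def main() -> None:
    result = last_build_result(HUDSON_URL, UPSTREAM_JOB)
    if should_trigger(result):
        # "Trigger builds remotely" endpoint; requires the token configured in job A.
        urllib.request.urlopen(
            f"{HUDSON_URL}/job/{TRIGGER_JOB}/build?token={TRIGGER_TOKEN}")
    else:
        print(f"Skipping {TRIGGER_JOB}: last {UPSTREAM_JOB} build was {result!r}")

# Run main() from your hourly cron entry instead of triggering job A directly.
```

The decision logic is kept in `should_trigger` so it can be checked without a live server; everything network-related stays in the two fetch/trigger calls.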

Related

Re-run failed Github Actions Job after updating code related to that job

I have a workflow that runs about 4 jobs. Job #4 tends to have lots of bugs upon updates. For rapid testing's sake, I would like to be able to update the code related to job #4, then re-run only job #4. I know GHA allows you to do this like so...
...but this only re-runs the job on the same commit (i.e. my new code doesn't get used in the re-run of failed job #4).
A similar question can be found here, but this is for open PRs. I would like to do this on a branch without having a PR just yet.
How can I update code, push it to a branch, then re-run my failed job using the updated code?
As you already mentioned, the re-run feature only allows re-running the SAME commit. There is no way around that in a single repository.
Using the push trigger should be just as fast as using re-run. If you need to trigger on a specific event, there is not really another option available.
You could take a look at nektos/act, which enables you to run GitHub Actions on your local machine within a docker container. This would be way faster than actually running it in the GitHub repository.

Is there any way to link two Cron Jobs in OpenShift

Hi everyone!
Is there any way to link two Cron Jobs in OpenShift? I have two Cron Jobs. The first one makes a .tar archive from some data, and the second one should operate on this archive. Can I add a condition to the second Cron Job so that it runs only after the first one has finished? The first Cron Job can run anywhere from several seconds to several hours, so it is not practical to guess a time interval that guarantees it has completed.
I'd be thankful for any ideas.
Share the archive file via a persistent volume, and have the second job check whether the archive exists before proceeding; it can either go ahead or abort and wait for its next scheduled run. Move the archive to another location once processing has finished.
Alternatively, use the Kubernetes API from the first job to schedule the second job once the first one completes.
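The persistent-volume approach can be sketched as a small polling helper that the second job runs before touching the archive. This is a sketch under assumptions: the `/shared` paths and the `.done` marker convention are hypothetical, not an OpenShift API; the marker file is written by the first job as its last step so a half-written .tar is never consumed.

```python
import time
from pathlib import Path

# Hypothetical paths on the shared persistent volume -- adjust to your PVC mount.
ARCHIVE = Path("/shared/data.tar")
DONE_MARKER = Path("/shared/data.tar.done")  # written by the first job when it finishes

def archive_ready(archive: Path, marker: Path) -> bool:
    """The archive is only safe to consume once the marker also exists."""
    return archive.exists() and marker.exists()

def wait_for_archive(archive: Path, marker: Path,
                     timeout_s: float = 600.0, poll_s: float = 5.0) -> bool:
    """Poll until the first job signals completion, or give up after timeout_s."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if archive_ready(archive, marker):
            return True
        time.sleep(poll_s)
    return False

# The second CronJob would call wait_for_archive(ARCHIVE, DONE_MARKER)
# and exit without doing anything if it returns False.
```

Having the second job exit cleanly on timeout keeps its CronJob schedule simple: each run either finds a finished archive or defers to the next run.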

Stop building upstream Hudson job if downstream Hudson jobs are running

I have setup hudson jobs in the following way:
Job A triggers Job B, C, D
Job A pushes a new build every 6 hours and then triggers jobs B, C, D, which run test scripts on that build.
But sometimes jobs B, C, D take longer, occasionally more than 6 hours. In that case, if job A pushes a new build, the test results get mixed up between two builds.
So I wanted to know if there is a way in Hudson to check whether the downstream jobs are running and, if yes, block the upstream job until the downstream jobs complete.
Your job A is probably configured to trigger jobs B, C, D in the 'Post-build Actions'. That means jobs B, C, D only start after job A completes.
You may instead trigger B, C, and D as an additional build step (using the 'Trigger/call builds on other projects' build step). You'll then have an option to wait until the downstream jobs complete (just check 'Block until the triggered projects finish their builds').
If you'd like to trigger B, C, and D simultaneously, specify all of them on the same line (the 'Projects to build' parameter of the 'Trigger/call builds on other projects' step). Otherwise, add one such build step per job.

Trigger periodic job in Hudson only if last execution of another job is successful

I have a job NIGHTLY that runs once each night on a periodic timer.
Now I want to change it so that the NIGHTLY job only runs if the last execution of another Hudson job, FOO, was successful.
Note:
Job FOO is run many times each day and is triggered by SCM.
NIGHTLY should only be run one time per night and at a specific time.
Currently I have another job NIGHTLY_TRIGGER that runs a bash script that access the remote API of job FOO to check if job FOO is successful and if so triggers the NIGHTLY job.
Is there a nicer/cleaner way to do this? (preferably using some Hudson plugin)
You could check out the Hudson Join Plugin which is made for this kind of scenario (wait for the conclusion of a job before executing another one).
The end result wouldn't be much different from what you are already doing, but this would be neatly parameterized:
So you would still have to check the status of FOO job, but at least you would check it right after FOO job completion.
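The status check against FOO's remote API that both approaches rely on can be sketched like this. The server URL is a placeholder, and the JSON shape follows Hudson's `/api/json` format; the decision is factored into a pure function so it can be reasoned about without a live server.

```python
import json
import urllib.request

# Placeholder values -- point these at your own Hudson instance.
HUDSON = "http://hudson.example.com"
JOB = "FOO"

def build_succeeded(build_info: dict) -> bool:
    """Decide from a parsed lastBuild JSON payload whether the build succeeded.
    'result' is None while a build is still running; UNSTABLE and FAILURE
    both count as not successful."""
    return build_info.get("result") == "SUCCESS"

def last_build_succeeded(server: str, job: str) -> bool:
    """Fetch /job/<name>/lastBuild/api/json and check its result field."""
    url = f"{server}/job/{job}/lastBuild/api/json"
    with urllib.request.urlopen(url) as resp:
        return build_succeeded(json.load(resp))

# NIGHTLY_TRIGGER would call last_build_succeeded(HUDSON, JOB)
# and only start the NIGHTLY job when it returns True.
```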

Hudson pipelines

Can anyone help with this problem?
I have a test job, a downstream job and a join job.
I only want the join job to run if the downstream job succeeds.
If the test job fails and the downstream job succeeds I still want to run the join job.
Anyone know of a plugin that can help here?
The Join plugin is not good enough: I can configure it to run the join job when test AND downstream succeed, or to run the join job regardless of either job's success/failure, but not to run the join job ONLY if downstream succeeds.
Why do I want to do it this way? I want to pipeline jobs together but only if a common "downstream" job succeeds. If it fails then I want the pipeline to "break".
Adding more info to original question:
So I have a set of tests (Test.1, Test.2, Test.3). I can run them individually from Hudson; they run, produce a result, and finish. I also want to be able to run them as part of a pipeline: Test.1 runs, finishes, and then runs Test.2, and so on. So I have two separate ways to run Test.1: individually, or as part of a pipeline. To help out here I have made Test.1, Test.2, etc. parameterized (true/false), with the parameter false by default. When I run Test.1 by default (false), the test runs and finishes. When I run Test.1 with the parameter true, I would like it to run Test.2. This second bit I can't seem to do.
Many thanks
John
Similar to Gareth_bowles, I would just chain all jobs (no join) and use the Hudson Parameterized Trigger plugin to start a dependent job even if the current job failed. The only disadvantage is that you don't have jobs running in parallel.
On second thought, you can use the Hudson Parameterized Trigger plugin to run a temporary job after the test job, regardless of the success of the test job. The temporary job will always succeed (because it does nothing other than trigger the join job). This way your test job (from the view of the join job) will always succeed, and only the downstream job determines whether the join job runs.
Edit
After understanding what you really want to do, namely run Test.N independently or as part of a chain, I would go with my first suggestion. That means Test.N always triggers the Downstream.N job, regardless of whether Test.N succeeded or failed. You need the Hudson Parameterized Trigger plugin, and you configure two triggers: the first triggers the dependent job when the test job is successful or unstable, and the second also triggers the dependent job, but only when the test job fails. Don't forget to pass your parameters on, and you are done. Not very complicated.
Can't you just skip the Join plugin and make the join job depend solely on the downstream job? That would satisfy your requirement of running the join job only when the downstream job succeeds, as long as you make sure the "Trigger even if the build is unstable" box is unchecked on the dependency definition in the downstream build.