I have 1 upstream job and 2 parallel downstream jobs; when the upstream job succeeds, the 2 downstream jobs are triggered.
Currently, I send a mail notification for each job separately, and now the recipients are complaining about too many mails.
I need to find a way to gather the build results of those 3 jobs and send a single mail notification.
Use the Parameterized Trigger plugin as a build step (not as a post-build action). I believe it can wait for downstream projects to finish, examine their status, and set the current project's status accordingly.
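If you go this route, the final step only has to collect the three build results and send one mail. Below is a minimal sketch of that collection step in Python, assuming the standard Jenkins/Hudson JSON remote API (GET <server>/job/<name>/lastBuild/api/json) and a local SMTP relay; the server URL, job names, and addresses are placeholders:

```python
# Minimal sketch: read the last build result of each job over the standard
# Jenkins/Hudson JSON remote API, then send one consolidated mail.
import json
import smtplib
import urllib.request
from email.message import EmailMessage

JENKINS = "http://jenkins.example.com"          # placeholder server URL
JOBS = ["upstream", "downstream-a", "downstream-b"]  # placeholder job names

def last_build_result(job):
    """Return the result string (SUCCESS, FAILURE, ...) of a job's last build."""
    with urllib.request.urlopen(f"{JENKINS}/job/{job}/lastBuild/api/json") as resp:
        return json.load(resp)["result"]

results = {job: last_build_result(job) for job in JOBS}

msg = EmailMessage()
msg["Subject"] = "Build results: " + ", ".join(f"{j}={r}" for j, r in results.items())
msg["From"] = "jenkins@example.com"
msg["To"] = "team@example.com"
msg.set_content("\n".join(f"{j}: {r}" for j, r in results.items()))

with smtplib.SMTP("localhost") as smtp:  # one mail covering all three jobs
    smtp.send_message(msg)
```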
I want to understand when the pipeline job runs so I can better understand the pipeline build process. Does it check for code changes on the master branch of the Code Repository?
Building a job on the pipeline builds the artifact that was delivered to the instances, not what has been merged onto master.
It should be the same, but there is a checking process after the merge onto master and before the delivery of the artifact, like you would have on a regular Git/Jenkins/Artifactory setup.
So there is a delay.
Moreover, if these checks don't pass, your change, even though merged onto master, will never appear on the pipeline.
To add a bit more precision to what @Kevin Zhang wrote:
There's also the possibility to trigger a job using an API call, even though it's not the most common approach.
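For illustration, a hedged sketch of such an API trigger; the endpoint path and token handling here are hypothetical placeholders, since the exact build API depends on your platform:

```python
# Hypothetical sketch of triggering a build over REST; the endpoint path
# and bearer-token auth are assumptions, not a documented API.
import urllib.request

req = urllib.request.Request(
    "https://platform.example.com/api/builds",  # hypothetical endpoint
    data=b'{"job": "my-job"}',
    headers={
        "Authorization": "Bearer YOUR_TOKEN",   # placeholder token
        "Content-Type": "application/json",
    },
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    print(resp.status)
```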
You can also combine different events to express things like:
- Before work hours: build only if the schedule of the morning update has succeeded.
- During work hours: build every hour if an input has new data, and either a schedule has run successfully or another dataset has been updated.
- After hours: build whenever an input has new data.
It can also help you create loops. For example, if a huge amount of data comes in through input B and it impacts your sync toward the ontology, or a time series, ..., you could create a job that takes a limited number of rows from input B and logs their IDs in a table so they are not taken again. You process those rows, rerun your job when output C is updated, and when there are no rows left you update output D.
You can also add a schedule to the job that produces input B from input A, stating that it should rerun only when output D is updated.
This would enable you to process a number of files from a source, process the data from those files chunk by chunk, and then take another batch of files and iterate.
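To make the loop concrete, here is a small self-contained Python model of that chunking logic; the in-memory lists stand in for input B, the ID log table, and the outputs, which in practice would be datasets on your platform:

```python
# Self-contained model of the loop: take a limited batch from input B,
# log processed IDs so they are not retaken, update output C each round,
# and update output D once nothing is left. Lists and prints stand in
# for real datasets and dataset updates.
input_b = list(range(25))   # stand-in for the rows arriving in input B
processed_ids = set()       # stand-in for the ID log table
BATCH_SIZE = 10

def run_once():
    batch = [r for r in input_b if r not in processed_ids][:BATCH_SIZE]
    if not batch:
        print("no rows left -> updating output D")
        return False
    print(f"processing {len(batch)} rows -> updating output C")
    processed_ids.update(batch)  # log the IDs of the rows we just took
    return True

# In Foundry the rerun would be driven by the schedule on output C;
# here a simple loop plays that role.
while run_once():
    pass
```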
By naming your schedules functionally you can have a more controlled build of your pipeline and a finer grain of data governance, and you can also add audit or log tables based on these schedules, which will make debugging and auditing much easier.
You would have a trace of when and where a specific source update has reached.
Of course, you need such precision only if your pipeline is complex: many different sources, updated at different times and updating multiple parts of your pipeline.
For instance, if you're unifying client data that was previously separated into many silos, or if the client is a multinational group of many different local or global entities, like big car manufacturers.
It depends on what type of trigger you’ve set up.
If your schedule is a single cron schedule (i.e. time-based only), the build will not look at the master branch repo; it will just build according to the cron schedule.
If your schedule contains an event trigger (e.g. one of the 4 event types: Job Spec Put, Transaction Committed, Job Succeeded, and Schedule Ran Successfully), then it'll trigger based on the event, and only the Job Spec Put event type triggers on master-branch code changes.
I have a set of independent SSIS packages, say A, B, and C. I'm running them manually and in parallel using the DTEXEC command. The finishing times of these jobs are unpredictable, i.e. there is no certainty that a particular job finishes last every time.
Now I want to send a notification mail when all the packages have completed. How can I accomplish this without modifying the packages? Also, I may not be able to use Task Scheduler or SQL Agent.
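One option that leaves the packages untouched is a small wrapper script that starts the DTEXEC runs itself, waits for all of them, and then sends a single mail. A minimal Python sketch, assuming dtexec is on the PATH and a local SMTP relay is available (package paths and addresses are placeholders):

```python
# Launch the packages in parallel, wait for all of them, then send one mail.
import subprocess
import smtplib
from email.message import EmailMessage

packages = [r"C:\ssis\A.dtsx", r"C:\ssis\B.dtsx", r"C:\ssis\C.dtsx"]  # placeholders

# DTEXEC /F runs a package from a file; start all three without waiting.
procs = {pkg: subprocess.Popen(["dtexec", "/F", pkg]) for pkg in packages}

# Wait for every package; DTEXEC exits with 0 on success.
results = {pkg: proc.wait() for pkg, proc in procs.items()}

msg = EmailMessage()
msg["Subject"] = "SSIS packages finished"
msg["From"] = "etl@example.com"
msg["To"] = "team@example.com"
msg.set_content("\n".join(
    f"{pkg}: {'OK' if code == 0 else 'exit code ' + str(code)}"
    for pkg, code in results.items()
))
with smtplib.SMTP("localhost") as smtp:  # assumes a local SMTP relay
    smtp.send_message(msg)
```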
I have a job NIGHTLY that runs once each night on a periodic timer.
Now I want to change this so that the NIGHTLY job only runs if the last execution of another job, FOO, was successful.
Note:
Job FOO is run many times each day and is triggered by SCM.
NIGHTLY should only be run one time per night and at a specific time.
Currently I have another job, NIGHTLY_TRIGGER, that runs a bash script which accesses the remote API of job FOO to check whether job FOO was successful and, if so, triggers the NIGHTLY job.
Is there a nicer/cleaner way to do this (preferably using some Hudson plugins)?
You could check out the Hudson Join Plugin, which is made for this kind of scenario (waiting for the conclusion of a job before executing another one).
The end result wouldn't be much different from what you are already doing, but this would be neatly parameterized.
So you would still have to check the status of the FOO job, but at least you would check it right after the FOO job completes.
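For reference, the status check itself is small. A minimal Python sketch using the standard Hudson/Jenkins remote API (the server URL is a placeholder, and the token-based trigger assumes the job's "Trigger builds remotely" option is enabled):

```python
# Minimal sketch of the FOO check: read the last build result over the
# standard remote API and trigger NIGHTLY only on success.
import json
import urllib.request

HUDSON = "http://hudson.example.com"  # placeholder server URL

with urllib.request.urlopen(f"{HUDSON}/job/FOO/lastBuild/api/json") as resp:
    result = json.load(resp)["result"]

if result == "SUCCESS":
    # Assumes NIGHTLY has "Trigger builds remotely" enabled with this token.
    urllib.request.urlopen(f"{HUDSON}/job/NIGHTLY/build?token=NIGHTLY_TOKEN")
```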
Can anyone help with this problem?
I have a test job, a downstream job and a join job.
I only want the join job to run if the downstream job succeeds.
If the test job fails and the downstream job succeeds I still want to run the join job.
Anyone know of a plugin that can help here?
The Join plugin is not good enough: I can configure it to run the join job when test AND downstream succeed, or to run the join job regardless of either job's success/failure, but not to run the join job ONLY if the downstream job succeeds.
Why do I want to do it this way? I want to pipeline jobs together but only if a common "downstream" job succeeds. If it fails then I want the pipeline to "break".
Adding more info to the original question:
So I have a set of tests (Test.1, Test.2, Test.3). I can run them individually from Hudson; they run, produce a result, and finish. I also want to be able to run them as part of a pipeline: Test.1 runs, finishes, and then runs Test.2, etc. So I have two separate ways to run Test.1: individually or as part of a pipeline. To help out here I have made Test.1, Test.2, etc. parameterized (true/false). By default the param is false. So when I run Test.1 by default (false), the test runs and finishes. When I run Test.1 with param true, I would like it to run Test.2. This second bit I can't seem to do.
Many thanks
John
Similar to Gareth_bowles, I would just chain all jobs (no join) and use the Hudson Parameterized Trigger plugin to start a dependent job even if the current job failed. The only disadvantage is that you don't have jobs running in parallel.
On second thought, you can use the Hudson Parameterized Trigger plugin to run a temporary job after the test job, regardless of the test job's success. The temporary job will always succeed (because it does nothing other than trigger the join job). This way your test job (from the view of the join job) will always succeed, and only the downstream job determines whether the join job runs.
Edit
After understanding what you really want to do, namely run Test.N independently or as part of a chain, I would go with my first suggestion. That means Test.N always triggers the Downstream.N job, regardless of whether Test.N succeeded or failed. You need the Hudson Parameterized Trigger plugin and two triggers: the first triggers the dependent job when the test job is successful or unstable, and the second also triggers the dependent job, but only when the test job fails. Don't forget to pass your parameters on, and you are done. Not very complicated.
Can't you just skip the join plugin and make the join job solely dependent on the downstream job? That would satisfy your requirement of making the join job run only when the downstream job succeeds, as long as you make sure the "Trigger even if the build is unstable" box is unchecked on the dependency definition in the downstream build.
I have a job A that is run every hour. Job B is run after each commit to GitHub (integration tests job). How can I know, before running job A, whether the last build of job B was successful, and discard the build of A if the last build of B was unstable?
Thanks.
As far as I know, this is not possible with Hudson out of the box. Without any specifics about your job dependencies, it is also not easy to design the right workaround.
Different Options:
If your job A runs fast, let it run anyway.
Since job A runs every hour, could you get away with running job B every hour instead? In that case, when job B is successful it will trigger job A.
Have an external shell script that triggers job A every hour. Before triggering, check the status of the last build of job B (http://<server>/job/<jobname>/lastBuild/api/xml?xpath=/mavenModuleSetBuild/result/text%28%29); see the sketch after this list. For info on how to trigger a build, have a look at the "Trigger builds remotely" option in your job.
This list is probably not exhaustive.
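A minimal Python sketch of option 3, using the XML API mentioned above; the server URL, job names, and token are placeholders, and the xpath uses a wildcard root since the element name varies by job type:

```python
# Sketch of option 3: check job B's last build via the XML API, then trigger
# job A remotely only if B succeeded.
import urllib.request

HUDSON = "http://hudson.example.com"  # placeholder server URL
url = f"{HUDSON}/job/B/lastBuild/api/xml?xpath=/*/result/text%28%29"

with urllib.request.urlopen(url) as resp:
    result = resp.read().decode().strip()

if result == "SUCCESS":  # skip A's build when B was UNSTABLE or FAILURE
    # Requires the "Trigger builds remotely" option (with token) on job A.
    urllib.request.urlopen(f"{HUDSON}/job/A/build?token=A_TOKEN")
```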