Using the .rec file to run a graph from a specific phase - Ab Initio

In Ab Initio, a job has failed in a specific phase, for example phase 4. We fixed the issue and rolled back the .rec file. If we want to run the graph from phase 4 only, how do we do it?

You can restart a job, and it will continue from the last checkpoint, which in a batch graph is a checkpointed phase break. If the graph failed in phase 4, restarting will continue from the beginning of phase 4, assuming you set checkpointed phases.
Your question is confusing: you said the job failed in phase 4, which means it wouldn't have continued to a later phase. If you mean it didn't fail in phase 4 and it continued to a later phase, but you discovered an error that may have produced incorrect results (such as the wrong data in some input to that phase), unfortunately you're out of luck -- the job will have to be re-run from the beginning.
There's no way to roll back to a previous phase, because that work is already done and committed. The system cleaned up the files etc. that allow restart of the phase when it completed, before setting up and starting the next phase.

Related

Prevent scheduled workflow run if no new commits were added since previous run

GitHub actions workflows can be triggered: (1) each push or (2) on a schedule (as well as in a number of different ways). I am looking for a combination of these two: run on a schedule, but skip the run if no new commits have been pushed since the last run. Ideally, this should work even if the last run was manually triggered.
Is this possible, and if yes, how?
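One common pattern is to persist the SHA that the last run built (for example in a cache, artifact, or repository variable) and compare it against the current HEAD at the start of each scheduled run; because the SHA is recorded by every run, including manually triggered ones, the check works regardless of how the previous run started. A minimal sketch of the decision logic in Python (the function name and the storage mechanism are hypothetical, not part of GitHub Actions itself):

```python
from typing import Optional

def should_run(last_built_sha: Optional[str], head_sha: str) -> bool:
    """Decide whether a scheduled workflow run should proceed.

    last_built_sha: SHA recorded by the previous run (None if no run has
                    happened yet), whether that run was scheduled or manual.
    head_sha:       current HEAD of the branch the schedule targets.
    """
    # First run ever: nothing recorded, so build.
    if last_built_sha is None:
        return True
    # Skip only when HEAD is exactly the commit we already built.
    return last_built_sha != head_sha
```

In a workflow, a check like this would typically live in a small first job whose output gates the remaining jobs via an `if:` condition.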

Checking for Out of Sync Folders

Every day when I start Visual Studio, my RTC (Rational Team Concert) begins a lengthy process of 'Checking for Out Of Sync Folders'.
I am not sure what is causing this. Can someone help me remedy it?
As mentioned here
this error message should be improved. It should say something like:
"Files in your sandbox are out of sync with your repository workspace".
The official documentation includes:
A sandbox and repository workspace can become out-of-sync when a network failure or manual cancelation interrupts an update operation.
When this happens, the next attempt to perform an operation (such as check in, accept, suspend, or resume) that updates both a sandbox and repository workspace generates a warning that the sandbox and repository workspace are out-of-sync.
(Image from "Loading Content from a Jazz Source Control Repository in Rational Team Concert 2.0, 3.0 and 3.0.1")
If any of the out-of-sync projects contain unresolved local changes in the sandbox, by default, the reload operation preserves these changes.
As I advised here, if you don't have any pending changes, you can
first try a "refresh sandbox". Then close everything, re-open Visual Studio, and check that the "checking" step no longer appears.
If the first workaround is not enough, "reload projects out of sync" (then close and restart to see if the issue persists).

Enforcing one build for one commit in Jenkins/Hudson

We use Jenkins for incremental builds of our project on each commit to the SCM. We would like a separate build for every single commit. However, the naive approach (set up the SCM and use post-commit hooks to trigger a build) exhibits a problem in the following scenario:
Build is triggered.
While build takes place (it can take up to several minutes) two separate commits to the SCM are made by two developers.
One new build is triggered. It receives changes from both of the commits made during the previous build.
This "race condition" complicates finding which one of the commits has broken the build/introduced warnings.
The currently employed solution is checking for changes in one job ("scheduler job") and triggering another job to do the actual checkout and build.
Are there any proper solutions to this problem?
Not yet, there's a Feature Request covering this kind of build, but it's still open: Issue 673
Maybe it misses the point, but we have a pretty nice build process running here.
We use git as our source control system
We use gerrit as our review tool
We use the gerrit trigger to run builds on our jenkins server
We check for changes on the develop branch to run jenkins when a changeset is merged
In short the ideal developer day is like this
Developer 1 starts a new branch for his changes, based on our main develop branch
Developer 1 commits as often as he likes
When developer 1 thinks he has finished his job, he combines his changes into one change and pushes it to gerrit
A new gerrit change is created and jenkins tries to build exactly this change
When there are no errors during the build, a review is made on this change
When the review is submitted, the changeset is merged into the develop branch of the main repository (no change is merged into the develop branch without review)
Jenkins builds the merged version to be sure that there are no merge errors
Now developer 2 joins the party and tries to do some work
The process is exactly the same: both start working in their branches. Developer 1 is faster and his changes are merged into the develop branch. Now, before developer 2 can publish his changes, he has to rebase his changes on top of the changes made by developer 1.
So we are sure, that the build process is triggered for every change made to our codebase.
We use this for our C# development - on windows not on linux
I don't believe what you'd like to do is possible. The "quiet period" mentioned by Daniel Kutik is actually used to tell Hudson/Jenkins how much time to wait, in order to allow other commits to the same project to be picked up. Meaning -- if you set this value to 60 seconds and you've made a commit, it will wait for a minute before starting a new build, allowing time for other commits to be picked up as well (during that one minute).
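The coalescing behaviour the quiet period produces can be sketched as a small simulation: commits that arrive close together (within the quiet window of one another) end up in the same build, while a longer gap starts a new one. This is an illustrative model of the batching effect, not actual Jenkins code:

```python
def batch_commits(commit_times: list, quiet_period: float) -> list:
    """Group commit timestamps (seconds) into builds under a quiet period.

    After a commit arrives, the pending build waits `quiet_period` seconds;
    a further commit inside that window joins the same build and restarts
    the wait. Illustrative approximation of Jenkins' quiet-period batching.
    """
    builds = []
    current = []
    for t in sorted(commit_times):
        if current and t - current[-1] > quiet_period:
            builds.append(current)  # gap exceeded: previous build starts
            current = [t]
        else:
            current.append(t)       # within the window: coalesce
    if current:
        builds.append(current)
    return builds
```

With a 30-second quiet period, commits at t=0 and t=10 coalesce into one build, while a commit at t=100 gets its own.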
If you use the rule "NO COMMIT on a broken build" and take it to its logical conclusion, you actually end up with "no commit on a broken build or a build in progress", in which case the problem you describe goes away.
Let me explain. Suppose you have two developers working on the same project and both of them try to commit (or push if you're using a DVCS). One of them is going to succeed and the other will fail and need to update before the commit.
The developer who had to do the update knows from the commit history that the other commit was recent, and thus that a build is in progress (even if it hasn't been checked out yet). They don't know whether that build is broken yet or not, so the only safe option is to wait and see.
The only thing that would stop you from using the above approach is if the build takes too long, in which case you might find that your developers never get a chance to commit (it's always building). This is then a driver to split your build into a pipeline of multiple steps, so that the post-commit job takes no more than 5 minutes, ideally 1 minute.
I think what might help is to set the Quiet Period (Jenkins > Manage Jenkins > Configure System) to 0 and the SCM polling interval to a very short time. But even during that short interval there could be two commits. As of now, Jenkins does not have a feature to split a build into single builds for multiple SVN commits.
Here is a tutorial about that topic: Quiet Period Feature.
As pointed out by someone in Issue 673 you could try starting a parametrized build with the parameter being the actual git commit you want to build. This in combination with a VCS commit hook.
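The parametrized-build idea above can be sketched as follows: the commit hook enumerates each new commit individually and queues one build per SHA, so two commits are never coalesced into a single build. The `trigger` callable stands in for a real call to Jenkins (in practice a POST to the job's buildWithParameters endpoint); its wiring here is hypothetical:

```python
from typing import Callable, Iterable

def queue_per_commit_builds(new_commits: Iterable[str],
                            trigger: Callable[[str], None]) -> int:
    """Queue exactly one parameterized build per pushed commit.

    new_commits: SHAs in push order (e.g. from `git rev-list old..new`).
    trigger:     callable that starts a Jenkins build with the commit as
                 a parameter (hypothetical stand-in for the remote API).
    """
    count = 0
    for sha in new_commits:
        trigger(sha)  # one build per commit: no coalescing
        count += 1
    return count
```

Because each build checks out exactly the SHA it was given, a broken build always points at a single commit.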

Hudson build order not honoring dependencies on simultaneous checkin

So this is a similar question:
Triggering upstream project builds before downstream project
But I don't want the all-or-nothing behavior that guy is asking for; I just want Hudson to build the projects in the right order so we don't get false-alarm failed builds.
We have two projects, one depending on the other. If we do a simultaneous checkin to both projects (where the dependent project will fail without the dependency being built first), Hudson seems to pick one at random, so sometimes we get a failed build, then the other project builds successfully, then the retry on the other project succeeds.
Hudson is smart enough to figure out from the maven pom's what is upstream and downstream, and even knows to build the downstream stuff when the upstream changes, but it doesn't know to build the upstream stuff before the downstream stuff if they've both changed.
Is there a configuration setting I'm missing? "Build after other projects are built" appears to just be a manual version of what it will already do for upstream projects.
Under Advanced Project Options you have the quiet period. Set the quiet period for your first build to 5 seconds and for the second to 2 minutes. This should do the trick. You can also try 5 and 10 seconds; I just chose 5 and 120 since Hudson will check for changes no more often than every minute. I don't know how the SVN check is implemented, so 2 minutes will ensure that your first project is checked at least once before the second build starts. (Assumption: both jobs check every minute for SVN changes.)
You also need to make sure that both jobs are not running at the same time, so I would use Block build when upstream project is building (also under advanced options) to ensure that they don't build simultaneously. You can also try only this option first; maybe it is already good enough.
If both projects belong to the same maven parent project, then you need only one hudson job for this maven parent project. -- And you don't need any up- or downstream dependencies.
I am facing the same issue. Unfortunately, it seems to be a known bug that the Block build when upstream project is building option does not work when the Hudson server is configured with multiple executors/nodes.
http://issues.hudson-ci.org/browse/HUDSON-5125
A workaround could be using the Naginator Plugin which can reschedule a build after a build failure.

hudson detecting failure when builds are succeeding

Our Hudson build is succeeding, but Hudson is somehow reporting a failure.
What are the criteria that Hudson uses for determining failure and success?
BTW, Our build updates a .xml file with the results of the test. I've checked, and it appears that hudson is correctly updating this file (The modification time matches)
Thanks
Click on the link for the build that failed (#123 for instance), and then go to Console Output link on the left. That log will tell you what step of the build failed.
Note that just because the build of the software succeeded, doesn't mean the entire build process succeeded. You might have a final step that, for instance, deletes some intermediate, unnecessary files. If one of those files was in use and couldn't be deleted (causing the batch file to return an error), then the step failed, and as a result the entire build is marked as a failure.
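The point above, that a failing final step fails the whole build even though compilation succeeded, can be illustrated with a small sketch (the step names are made up):

```python
from typing import Callable, Sequence, Tuple

def run_build(steps: Sequence[Tuple[str, Callable[[], int]]]) -> str:
    """Run build steps in order; any nonzero exit code fails the build.

    This mirrors how Hudson/Jenkins marks a build: the overall result is
    FAILURE if any step fails, even a late cleanup step that runs after
    a successful compile.
    """
    for name, step in steps:
        if step() != 0:
            return "FAILURE in step: " + name
    return "SUCCESS"

# A compile that succeeds followed by a cleanup that can't delete a
# locked file (nonzero exit code) still fails the build overall.
result = run_build([
    ("compile", lambda: 0),
    ("delete temp files", lambda: 1),
])
```

Here `result` is "FAILURE in step: delete temp files", which is exactly the situation the console output would reveal.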