Is there a method of checking for cyclic dependencies between jobs in Hudson? It is simple to spot A->B->A, but a loop like A->B->C->D->A is almost impossible to find manually. Is there a plugin to do this? I think it is key functionality, because such a loop between build triggers can slowly kill a server.
Check out the Downstream build view plugin. It's not a cycle detector, but it might help.
I understand the concern, but do you really have a situation where the builds do not fall into a hierarchy, i.e. one in which it would make no sense for build D to trigger build A?
Hudson has cycle detection in the regular downstream trigger setup. If you are using mechanisms outside of the downstream trigger, it's not clear how to detect the cycle.
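If you do end up wiring triggers through other mechanisms, a generic fallback is to extract a job-to-downstream map yourself (e.g. from the job config files; that extraction step is assumed here) and run a depth-first search over it. A minimal Groovy sketch:

// Illustrative trigger graph: job name -> jobs it triggers
def triggers = [A: ['B'], B: ['C'], C: ['D'], D: ['A']]

def findCycle(Map graph) {
    def visiting = [] as Set, done = [] as Set, path = []
    def dfs
    dfs = { node ->
        if (visiting.contains(node)) {
            // back-edge found: return the cycle portion of the current path
            return path.subList(path.indexOf(node), path.size()) + [node]
        }
        if (done.contains(node)) return null
        visiting << node
        path << node
        for (next in (graph[node] ?: [])) {
            def cycle = dfs(next)
            if (cycle != null) return cycle
        }
        visiting.remove(node)
        done << node
        path.remove(path.size() - 1)
        return null
    }
    for (job in graph.keySet()) {
        def cycle = dfs(job)
        if (cycle != null) return cycle
    }
    return null
}

println findCycle(triggers)   // -> [A, B, C, D, A]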
I encountered this technical issue when developing what-if scenario applications using Palantir Workshop. Basically, our users generally make changes in Workshop. When all the changes are complete, users can trigger a build in Code Repositories from Workshop to execute a Python transform function. Multiple datasets are taken as inputs to the transform functions. What would be the recommended technique for this integration?
Thank you for your attention!
I tried to use the build schedules manager together with "apply scenario". Although it can work, the biggest cons are that:
the build pipeline is manual and ad hoc instead of automated.
"apply scenario" in Workshop works against our expectations, because the data were expected not to override the ontology objects. However, for the trigger to happen, I have to apply the changes made in Workshop back to the ontology to chain the build trigger.
I have an Ant project with over 100 modules. I cycle through all modules to compile, package, and publish them in one build run. However, when one ivy:publish fails (due to a random connection issue), the entire build exits.
I would like the build process to continue compiling/publishing the remaining modules even if one module fails to publish for whatever reason.
Is there a setting in ivy:publish to prevent exiting upon error, or some other way to achieve this?
thanks
Since you appear to be using Ant to call multiple sub-builds, I would submit this is a control-loop problem rather than something specific to Ivy. In other words, you are best advised to ensure each module's build is as stand-alone as you can make it, and then in your loop each module's build should succeed or fail on its own.
You have not indicated what your main build file looks like. I would highly recommend using the subant task, as this has a "failonerror" flag that will give you your desired behaviour (the build will continue if a module fails):
<subant failonerror="false">
<fileset dir="." includes="**/build.xml" excludes="build.xml"/>
<target name="clean"/>
<target name="build"/>
</subant>
This should be enough to solve your problem. Any build that fails can be manually re-run. In practice this might be difficult, since one module failing might cause a subsequent build to fail due to missing dependencies. You need to judge the risks of this for yourself.
You can extend your solution later by using an embedded script to run the module builds. If you have lots and lots of errors you might want to add some bespoke error-handling logic, as in the sketch below.
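For example, a minimal standalone Groovy driver using AntBuilder (the one-build.xml-per-module-directory layout and the "build" target name are assumptions; adapt them to your tree):

// Drive each module's build.xml and keep going on failure.
def ant = new AntBuilder()
def failed = []
new File('.').eachDir { dir ->
    if (new File(dir, 'build.xml').exists()) {
        try {
            // <ant> core task: run the module's named target
            ant.ant(dir: dir.path, antfile: 'build.xml', target: 'build')
        } catch (Exception e) {
            failed << dir.name    // record the failure and move on
        }
    }
}
println failed ? "Failed modules: ${failed}" : 'All modules built.'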
I have a Groovy script which takes about 5 hours to complete (it restarts, i.e. deletes old and starts new, many workflows), and unfortunately there are some workflows which can't be processed and throw an "internal server error" which ends the Groovy call.
All I can do now is take a look at the logs, restart the Groovy script, and exclude the problematic workflow ID.
It would be a great performance boost if I could catch this "internal server error" in the HAC and continue with the next workflow instead of aborting the script.
I already tried to put it in a try/catch, but this doesn't work.
Is there any chance to "ignore" the "internal server error" entries of my list to process?
Thanks for any help!
Run the Groovy script natively, not through the HAC. The Groovy/Beanshell consoles are handy for quick prototypes, but running a 5-hour process through a browser interface seems kludgy at best. You have at least a couple of options:
Dynamic Beans
Did you know that Spring beans can be implemented in a number of languages using dynamic language beans?
Define interfaces for your processes and wire them up to Groovy implementations using the Spring configuration. Since the scripts are interpreted at runtime, you can swap out code without needing to recompile the entire platform.
Now you have the full power of Java, Spring, Groovy, and hybris. Properly sequester each process so that exceptions don't bubble up and crash the entire thing.
This option would be the cleanest way to go, since you'd be integrating the code directly into the project's codebase. And you can keep all your existing [ Groovy | JRuby | Beanshell | ... ] code.
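As a minimal sketch of that wiring (the interface, bean name, and script location are all illustrative; in real wiring the interface would be a Java interface on the platform classpath):

// Assumed service contract:
interface RestartService {
    void restartAll(List<String> workflowIds)
}

// Refreshable Groovy implementation. Wired in Spring XML roughly like:
//   <lang:groovy id="workflowRestarter"
//                script-source="classpath:scripts/WorkflowRestarter.groovy"
//                refresh-check-delay="5000"/>
class WorkflowRestarter implements RestartService {
    void restartAll(List<String> workflowIds) {
        workflowIds.each { id ->
            try {
                restartWorkflow(id)                     // placeholder for the real restart logic
            } catch (Exception e) {
                println "Skipping ${id}: ${e.message}"  // sequester failures, keep going
            }
        }
    }
    private void restartWorkflow(String id) { }
}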
Roll your own
Another thing you might try is examining hybris' Groovy API. I was able to leverage hybris' Beanshell interpreter classes to create my own test harness. It is a simple standalone Eclipse project that allows me to write and run Beanshell within Eclipse, with output to the console. I use it on a daily basis for quick scripting tasks like batch updates, FlexibleSearch queries, etc. I'd imagine you could do the same thing with Groovy. Search the hybris API for the HAC code that interprets the Groovy requests from the browser.
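As a rough sketch of that idea using plain GroovyShell (the service lookup in the comment uses hybris' Registry, but treat the whole harness as an assumption to verify against your platform version):

// Standalone harness: evaluate a script file with a binding you control.
def binding = new Binding()
// In a real harness you would bind hybris services here, e.g.:
//   binding.setVariable('modelService',
//       de.hybris.platform.core.Registry.getApplicationContext().getBean('modelService'))
def shell = new GroovyShell(binding)
shell.evaluate(new File('scripts/batchUpdate.groovy'))   // illustrative script path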
The sky's the limit, but first get out of the browser console for heavy scripting tasks.
My short answer would be: Don't use scripts for time-consuming processes.
Although you mentioned that it is not possible to define standard scripts because the business is working in parallel, I cannot recommend maintaining a live system in this manner.
Integrate that logic into a custom CronJob and add all configurable/dynamic things as properties of said job (see the sketch below).
The benefits of this approach would be:
you have a proper logging mechanism (Sysout in the HAC Groovy console is painful)
you can trace your execution (time consumed, started, stopped, etc.)
it can be triggered automatically (CronJob trigger) or by another instructed user (e.g. Operations)
you get a more stable workflow as a whole, with no need to keep track of magic scripts (how do you version them? in the resource folder?)
The downside of this is indeed that you need a redeploy.
From my experience, dynamically changed code (Dynamic Beans as an example) works on projects with comparably low complexity, but tends to get messy pretty quickly.
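A hedged sketch of such a job, written here in Groovy for brevity (the class and helper names are illustrative, and you would still register the bean and create the ServicelayerJob/CronJob items in the usual hybris way):

import de.hybris.platform.cronjob.enums.CronJobResult
import de.hybris.platform.cronjob.enums.CronJobStatus
import de.hybris.platform.cronjob.model.CronJobModel
import de.hybris.platform.servicelayer.cronjob.AbstractJobPerformable
import de.hybris.platform.servicelayer.cronjob.PerformResult

class WorkflowRestartJob extends AbstractJobPerformable<CronJobModel> {
    @Override
    PerformResult perform(CronJobModel cronJob) {
        def failed = []
        resolveWorkflowIds(cronJob).each { id ->
            try {
                restartWorkflow(id)          // the logic currently living in the HAC script
            } catch (Exception e) {
                failed << id                 // log and continue with the next workflow
            }
        }
        return new PerformResult(
            failed ? CronJobResult.FAILURE : CronJobResult.SUCCESS,
            CronJobStatus.FINISHED)
    }

    // placeholders: read the ids from job properties, restart a single workflow
    private List<String> resolveWorkflowIds(CronJobModel cronJob) { [] }
    private void restartWorkflow(String id) { }
}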
We want to use Hudson for our CI, but our project is made of code coming from different repositories. For example:
- org.sourceforce... should be checked out from http:/sv/n/rep1.
- org.python.... should be checked out from http:/sv/n/rep2.
- com.company.product should be checked out from http:/sv/n/rep3.
Right now we use an Ant script with a get.all target that checks out/updates the code from the different repositories.
So I can create a job that lets Hudson call our get.all target to fetch all the source code, then call a second target to build it all. But in that case, how do we monitor changes in the 3 repositories?
I'm thinking that I could just not assign any repository in the job configuration and schedule the job to fetch/build at a regular time interval, but I feel that I'd miss the idea of CI if builds can't be triggered by commits/repository changes.
What would be the best way to do this? Is there a way to configure project dependencies in Hudson?
I haven't poked at the innards of our Hudson installation too much, but there is a button under Source Code Management that says "Add more locations..." (if that isn't the default out-of-the-box configuration, let me know and I will dig deeper).
Most of our Hudson builds require at least a dozen different SVN repos to be checked out, and Hudson monitors them all automatically. We then have the build steps invoke Ant in the correct order to build the dependencies.
I assume you're using subversion. If not, then please ignore.
Subversion, at least the newer version of it, supports a concept called 'Externals.'
An external is an API, alternate project, dependency, or whatnot that does not reside in YOUR project repository.
See: http://svnbook.red-bean.com/en/1.1/ch07s04.html
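For example, an externals definition is just a versioned property on a directory. Using the dir-then-URL format described in that book (set via svn propedit svn:externals . on the project root; the mappings below simply reuse the placeholder URLs from the question), the property value would look like:

org.sourceforce  http:/sv/n/rep1
org.python  http:/sv/n/rep2
com.company.product  http:/sv/n/rep3

On the next svn checkout or svn update of the main project, Subversion pulls those directories from the other repositories too, so a single Hudson checkout sees all of them.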
I am new at this and I was wondering how I can set things up so that the artifacts are saved only if less than 90% of the tests have passed.
Any idea how I can do this?
thanks
This is not currently possible with Hudson. What is the motivation to avoid archiving artifacts on every build?
How about a rather simple workaround: you create a post-build step (or additional build step) that calls your tests from the command line. Be sure to capture all errors so Hudson doesn't count it as a failure. Then you evaluate your condition and set the error level accordingly. In addition, you need to save the reports (probably outside Hudson) before you set the error level, so they are available even, or only, when the build fails.
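A minimal sketch of such a step in Groovy, assuming JUnit-style XML reports in a build/test-reports directory (both the report location and the 90% threshold are placeholders):

// Parse JUnit-style reports, compute the pass rate, and set the error level.
def total = 0, failures = 0
new File('build/test-reports').eachFileMatch(~/TEST-.*\.xml/) { f ->
    def suite = new XmlSlurper().parse(f)
    total    += suite.@tests.toInteger()
    failures += suite.@failures.toInteger() + suite.@errors.toInteger()
}
def passRate = total ? (total - failures) / total : 0
println "Tests: ${total}, failed: ${failures}"
// the exit code drives what Hudson (and your archiving condition) sees
System.exit(passRate < 0.9 ? 1 : 0)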
My assumption here is that it is OK not to run the tests when building the app fails. However, you can separate the building and testing into two jobs. See here.