A common beanstalkd workflow is to have many workers listening for jobs on a queue/tube, reserving (locking) a job while they process it, and deleting that job so that no other worker can re-process it. If the job fails (e.g. the resources needed to complete processing are unavailable), the job can be released back onto the queue for another worker to pick up.
Is this approach possible with ZeroMQ? For example, using the pub/sub model, can multiple subscribers receive the same job and process it at the same time? Would push/pull or req/rep provide a similar setup?
I'm certain ZeroMQ can provide this for you. However, keep in mind that ZeroMQ is not really a queue; it's an advanced networking library. Naturally, with the provided primitives, you can do what you describe.
Your specific case seems like it could be implemented as a pub/sub system, if you don't mind having the same work done many times over. I recommend reading the ZeroMQ guide, especially chapter 5.
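To make that concrete, here is a minimal pub/sub sketch using pyzmq; the endpoint and message format are placeholders I made up. Every connected subscriber receives every published job, so the work is duplicated across workers.

```python
# Minimal pub/sub sketch with pyzmq; endpoint and payload are placeholders.
import zmq

def publisher():
    ctx = zmq.Context()
    pub = ctx.socket(zmq.PUB)
    pub.bind("tcp://*:5556")
    pub.send_string("job-42: resize-image")      # every subscriber will see this

def worker():
    ctx = zmq.Context()
    sub = ctx.socket(zmq.SUB)
    sub.connect("tcp://localhost:5556")
    sub.setsockopt_string(zmq.SUBSCRIBE, "")     # subscribe to everything
    while True:
        job = sub.recv_string()                  # the same job arrives at every worker
        print("processing", job)
```

If you want beanstalkd-style distribution instead, where each job goes to exactly one worker, PUSH/PULL sockets round-robin messages across connected pullers. Note that ZeroMQ still won't give you the reserve/release/delete semantics by itself; you'd have to build that acknowledgement layer on top.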
Although I'm certain you can do what you describe with ZeroMQ, I would first search for a queue which does this already.
In application life-cycle management, it's common to have several environments. For example:
DEV -> Staging -> Production
Normally, you would develop in the DEV environment and promote your changes to Staging and then Production.
But it's also possible to directly modify the Production (PRD) environment (to quickly fix a bug, for instance).
What do you call this procedure (modifying your code in an environment other than the DEV environment)?
I thought it was called "hotfix" but I see no related search results in Google.
In my opinion, the relevant entity here is not the Environment you refer to, but the Branch within your SCM.
With this in mind, you are absolutely right: in my experience it was always a Hotfix branch. On planet TFS, where I currently reside, this is described in various branching guidelines, including this one, which is considered to be among the best (if not THE best). I had similar experiences on a UNIX/ClearCase planet, again with Hotfix branches - there they were named "MaintenanceRelease" branches. Those contained one or more hotfixes; occasionally a highly anticipated feature could be merged into them as well.

I wouldn't ever expect to see a "Hotfix" environment in any company. Hotfixes address any possible crisis a customer has experienced, and that is by definition pretty vague, so having such an environment is probably a utopia. On one occasion, there was a "BLS" lab ("Back Level Support") which was used by support people to reproduce customer scenarios. Hotfixes provided by Development were deployed in this lab before release. That is to some extent a "Hotfix" environment - still, beware that this installation cost millions.
The Hudson CI server displays stability "weather", which is cool. And it allows one project's build to kick off based on the successful build of another. However, how can you additionally make that secondary project depend on the stability of multiple builds of the first project?
Specifically, project "stable_deploy" should only kick off to promote a version to "stable" if project "integrate" with version 8.3.4.1233 has built and tested successfully at least 8 times in a row. Until then, it's still in integration mode.
IMPORTANT: A significant caveat is that a single set of Hudson projects gets used as a "pipeline" to process each new version through to release. So a project may have built successfully 8 times in a row, but the latest version 8.3.4.1233 may account for only the 2 most recent builds; the builds prior to that may be of an earlier version.
We're open to completely reorganizing this, but the pipeline idea seemed to greatly reduce the amount of manual project creation and deletion. Is there a better way to track a version through a release "pipeline"? In particular, we will have multiple versions in this pipeline simultaneously in the future, due to fixes or patches to older versions. We don't see how to do that yet, except to create new pipeline projects for each version, which is a real hassle.
Here's some background details:
The TickZoom application has some very complete unit tests, some of which simulate real-time trading environments. Add to that, TickZoom makes elaborate use of parallelization to leverage multi-core computers. Needless to say, during development of a new version there can be stability issues during integration testing, which get uncovered by running the build and automated tests repeatedly. A version which builds and tests cleanly 8 times in a row without change, and has undergone some real-world testing by users, can be deemed "stable" and promoted to the stable branch.
Our Hudson projects look like this:
test - Only for testing a build; zero user visibility.
integrate_deploy - Promotes a test project build to the integrate branch and makes it available to the public for UA testing.
integrate - Repeatedly builds the integrate branch to determine whether it's stable enough to promote to the stable branch. This runs the builds and tests hourly throughout every night.
stable_deploy - Promotes an integrate project build to the stable branch and makes it public for users who want the latest and greatest.
stable - Builds the stable branch once every night. After 2 weeks of successful builds (14 builds) it can go to "release candidate".
And so on... it continues with "release candidate" and then "release".
I can see the point of demonstrating stability by having multiple successive builds succeed without error, but I'd suggest a slightly different approach to keep things simple. Rather than trying to aggregate the results of multiple builds to determine whether to promote the latest build to the stable branch, run your tests 8 times against the same build; you can either do this by adding a repeat-count parameter to the tests, or just repeat the test steps multiple times in the Hudson job setup.
If the build passes cleanly every time, you could use that as a gateway to send the build to your users for "real world" testing before you promote it to the stable branch.
This has a couple of advantages: it makes the Hudson setup simpler, as you requested, and it gives you added confidence in the stability of the build because you're running the tests multiple times against the same code base, rather than against a different code base each time.
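For illustration, a tiny wrapper along these lines could drive the repeated runs; the test command is a placeholder and the count of 8 is just the threshold from the question, so adapt both to your actual test runner:

```python
# Hypothetical wrapper: run the same test suite N times against one build
# and fail the Hudson job as soon as any run fails.
import subprocess
import sys

RUNS = 8                                              # "stable" threshold from the question
TEST_CMD = ["nunit-console", "TickZoom.Tests.dll"]    # placeholder test command

for attempt in range(1, RUNS + 1):
    print(f"test run {attempt}/{RUNS}")
    result = subprocess.run(TEST_CMD)
    if result.returncode != 0:
        sys.exit(result.returncode)                   # non-zero exit marks the build as failed

print("all runs passed; safe to hand the build to users for real-world testing")
```

Hudson treats any non-zero exit code from a build step as a failed build, so the promotion step simply never fires unless all eight runs pass.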
The answer is to create a separate pipeline of jobs for each new minor version of the software.
So they'll look like this:
integrate_0.8.3
stable_0.8.3
candidate_0.8.3
release_0.8.3
We will use the Hudson API to generate the jobs for each new version with a script.
The promotion can't be totally automated, because factors other than stable builds, such as user-reported errors, can delay a version from moving through the pipeline.
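For what it's worth, a rough sketch of that generation script might look like this. The server URL, the per-stage template job names, and the omitted authentication are all assumptions; the only Hudson/Jenkins remote API calls used are GET job/<name>/config.xml and POST createItem.

```python
# Hypothetical job generator: clone per-stage template jobs for a new version
# via the Hudson remote API (authentication omitted for brevity).
import urllib.request

HUDSON = "http://hudson.example.com"                 # assumed server URL
VERSION = "0.8.3"

for stage in ("integrate", "stable", "candidate", "release"):
    template = f"{stage}_template"                   # assumed template job names
    # fetch the template job's configuration
    with urllib.request.urlopen(f"{HUDSON}/job/{template}/config.xml") as resp:
        config = resp.read()

    # create the per-version job, e.g. integrate_0.8.3
    req = urllib.request.Request(
        f"{HUDSON}/createItem?name={stage}_{VERSION}",
        data=config,
        headers={"Content-Type": "application/xml"},
    )
    urllib.request.urlopen(req)
```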
sincerely,
Wayne
I guess you have to either implement some solution outside of Hudson that produces trigger files to be used in Hudson, or extend the promotion plugin with your company-specific rules.
Why should I do Nightly Builds?
You should do nightly builds to ensure that your codebase stays healthy.
A side effect of doing nightly builds is that it forces the team to create and maintain a fully automated build script. This helps to ensure that your build process is documented and repeatable.
Automated builds are good at finding the following problems:
Somebody checked in something that breaks stuff.
Somebody forgot to check in a necessary file or change.
Your build scripts no longer work.
Your build machine is broken.
Doing this nightly ensures that you catch such problems within 24 hours of when they occur. That is preferable to finding all the problems 24 hours before you are supposed to deliver the software.
You should also, of course, have automated unit tests that are run for each nightly build.
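In practice, the nightly build is just a scheduled script that does a clean checkout, builds from scratch, and runs the tests. Here is a minimal sketch assuming a Subversion/Ant project; the URLs, paths, and targets are placeholders for whatever your project actually uses.

```python
# Hypothetical nightly build driver, meant to be run from cron or a CI scheduler.
# Each step exercises one of the failure modes listed above.
import subprocess
import sys

STEPS = [
    ["svn", "checkout", "http://svn.example.com/trunk", "work"],  # catches missing files
    ["ant", "-f", "work/build.xml", "clean", "compile"],          # catches broken build scripts / bad checkins
    ["ant", "-f", "work/build.xml", "test"],                      # catches failing unit tests
]

for step in STEPS:
    if subprocess.run(step).returncode != 0:
        print("NIGHTLY BUILD FAILED:", " ".join(step))
        sys.exit(1)            # a non-zero exit lets the scheduler flag the failure and notify the team

print("nightly build OK")
```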
I've personally found continuous integration to be better than nightly builds:
http://en.wikipedia.org/wiki/Continuous_integration
I even use it on one-man projects; it's amazing how fast you can expose issues and take care of them right there.
I've been doing build engineering (among other things) for 16 years. I am a strong believer in build-early, build-often, continuous integration. So the first thing I do with a project is establish how it will be built (Java: Ant or Maven; .NET: NAnt or MSBuild) and how it will be managed (Subversion or some other version control). Then I'll add Continuous Integration (CruiseControl or CruiseControl.NET) depending upon the platform, then let the other developers loose.
As the project grows, and the need for reports and documentation grows, eventually the builds will take longer to run. At that point I'll split the builds into continuous builds (run on checkin) that only compile and run unit tests, and daily builds that build everything, run all the reports, and build any generated documentation. I may also add a delivery build that tags the repository and does any additional packaging for a customer delivery. I'll use fine-grained build targets to manage the details, so that any developer can build any part of the system -- the Continuous Integration server uses the exact same build steps as any developer. Most importantly, we never deliver a build for testing or a customer that wasn't built using the build server.
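As a sketch of the fine-grained-targets idea (written in Python purely for illustration; in practice these would be Ant/Maven or NAnt/MSBuild targets, and the commands shown are placeholders), the continuous and daily builds simply call different selections of the same small steps:

```python
# Illustration only: fine-grained build steps shared by developers, the CI
# build, and the daily build; only the selection of steps differs.
import subprocess

def run(*cmd):
    subprocess.run(cmd, check=True)          # abort the build on the first failure

def compile_all(): run("ant", "compile")     # placeholder commands
def unit_tests():  run("ant", "test")
def reports():     run("ant", "reports")
def docs():        run("ant", "javadoc")
def package():     run("ant", "dist")

def continuous_build():
    # run on every checkin: fast feedback only
    compile_all()
    unit_tests()

def daily_build():
    # run nightly: everything, including the slow steps
    continuous_build()
    reports()
    docs()
    package()
```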
That's what I do -- here's why I do it (and why you should too):
Suppose you have a typical application, with multiple projects and several developers. While the developers may start with a common, consistent development environment (same OS, same patches, same tools, same compilers), over the course of time their environments will diverge. Some developers will religiously apply all security patches and upgrades, others won't. Some developers will add new (maybe better) tools, others won't. Some will remember to update their complete workspace before building; others will only update the part of the project they're developing. Some developers will add source code and data files to the project, but forget to add them to source control. Others will write unit tests that depend upon specific quirks of their environment. As a consequence, you'll quickly see the ever-popular "Well, it builds/works on my machine" excuses.
By having a separate, stable, consistent, known-good server for building your application, you'll easily discover these sorts of problems, and by running builds from every commit, you'll be able to pinpoint when a problem crept into the system. Even more importantly, because you use a separate server for building and packaging your application, it will always package everything the same way, every time. There is nothing worse than having a developer ship a custom build to a customer, have it work, and then have no idea how to reproduce the customizations.
When I saw this question, I first searched for Joel Spolsky's answer. I was a bit disappointed, so I decided to add it here.
Hope everyone is aware of the Joel Test on Careers.
From his blog on The Joel Test: 12 Steps to Better Code
3. Do you make daily builds?
When you're using source control, sometimes one programmer
accidentally checks in something that breaks the build. For example,
they've added a new source file, and everything compiles fine on their
machine, but they forgot to add the source file to the code
repository. So they lock their machine and go home, oblivious and
happy. But nobody else can work, so they have to go home too, unhappy.
Breaking the build is so bad (and so common) that it helps to make
daily builds, to insure that no breakage goes unnoticed. On large
teams, one good way to insure that breakages are fixed right away is
to do the daily build every afternoon at, say, lunchtime. Everyone
does as many checkins as possible before lunch. When they come back,
the build is done. If it worked, great! Everybody checks out the
latest version of the source and goes on working. If the build failed,
you fix it, but everybody can keep on working with the pre-build,
unbroken version of the source.
On the Excel team we had a rule that whoever broke the build, as their
"punishment", was responsible for babysitting the builds until someone
else broke it. This was a good incentive not to break the build, and a
good way to rotate everyone through the build process so that everyone
learned how it worked.
Though I haven't had an opportunity to do daily builds, I'm a great fan of them.
Still not convinced? Check out the brief here in Daily Builds Are Your Friend!!
You don't, actually; what you should want is Continuous Integration and automatic testing (which is a step beyond nightly builds).
If you are in any doubt you should read this article by Martin Fowler about Continuous Integration.
To summarize, you want to build and test as early and often as possible to spot errors immediately so they can be fixed while what you were trying to achieve when you caused them is still fresh in your mind.
I'd actually recommend to do builds every time you check in. In other words, I'd recommend setting up a Continuous Integration system.
The advantages of such a system and other details can be found in Fowler's article and on the Wikipedia entry among other places.
In my personal experience, it's a matter of quality control: every time code (or tests, which can be seen as a form of requirements) is modified, bugs might creep in. To ensure quality you should make a fresh build of the product as it would be shipped and run all the tests available. The more often this is done, the less likely bugs will be allowed to form a colony. Therefore, daily (nightly) or continuous cycles are preferred.
In addition, whether you restrict access to your project to developers or to a larger group of users, a nightly build enables everyone to be on the 'latest version', minimizing the pain of merging their own contributions back into the code.
You want to do builds on a regular schedule in order to catch problems with integration of code between developers. The reason you want to do this nightly, as opposed to weekly or on some longer schedule, is that the longer you wait to discover these kinds of problems, the more difficult it will be to resolve them. The practice of doing a build on every check in (Continuous Integration) is just taking the nightly build process to a logical extreme.
The side benefit of having a repeatable build process is important in the long run as well. If you work on a team where there are multiple projects going on, then at some point you will need to be able to easily recreate an old build, perhaps for creating a patch. :(
The more you can automate the build process, the more time you will save for each subsequent build. It also takes the build process itself off of the critical path of delivering the final product, which should make your manager happy. :)
It also depends on the size and structure of the team(s) working on your project. If there are different teams relying on each other's APIs, it may make a lot of sense to have nightly builds for frequent integration. If you're hacking away with only one or two teammates, it may or may not be worth it.
Depending on the complexity of your product, continuous integration may or may not be able to run a full test suite.
Imagine Cisco testing a router with literally thousands of different setups. Running a full test suite on some products takes time - sometimes weeks. So you need builds for different purposes. A nightly build can be the basis for a more thorough test suite.
I think they are very important especially on projects with more than 1 person. The team needs to know ASAP if someone:
checks in a bad file
doesn't check in a file
...
Any build automation is better than no build automation :-)
Personally, I prefer daily builds - that way if the build doesn't work then everyone is around to get it fixed.
In fact, if at all possible, Continuous Integration builds are the way to go (i.e. a build on every check-in), as that minimizes the amount of change between builds, which makes it easy to tell who broke the build and easy to fix it.
Well ... I guess it depends a lot on your project, of course. If it's just your hobby project, with no releases, no dependencies, and no one but you submitting code, it might be overkill.
If, on the other hand, there's a team of developers all submitting code, automatic nightly builds will help you ensure the quality of the code in the repository. If someone does something that "breaks the build" for all others, it will quickly be noticed. It is possible to break the build without noticing, for instance by forgetting to add a new file to the repository, and nightly builds in a centralized location will detect these quite quickly.
There are of course other possible benefits, I'm sure others will supply them. :)
Nightly builds are only necessary for significantly large projects (when it takes too long to build them often throughout the day). If you have a small project that does not take long to build, you can build it as you get functional pieces of code done, so that you know you did not mess anything up in the process. However, with larger projects this is not possible, so it is important to build the project regularly just so that you know everything is still in working order.
There are several reasons; some will be more applicable than others:
If your project is being worked on by two or more people
It's a good way to grab the latest version of code that you aren't working on
A nightly build provides a slice in time of the current state of the code
A nightly build will give you a stable build if you need to send code to people
Nightly builds aren't always necessary - I think they're only really useful on big projects. But if you're on a big project, a nightly build is a good way of checking that everything is working - you can run all your tests (unit tests, integration tests), build all your code - in short, verify that nothing is broken in your project.
If you've got a smaller project your build and test times will be shorter so you can probably afford to do more regular builds.
Nightly builds are ideal for performing static code analysis (see qalab and the projects it collects stats from, if you are in the Java world). Unfortunately, this is something that's rarely done.
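For example, the nightly job can simply run the analysis tool after the build and archive its report alongside the other artifacts. A tiny sketch, where the tool, source path, and report location are placeholders:

```python
# Hypothetical static-analysis step appended to a nightly build.
import subprocess
from pathlib import Path

# Run the analysis tool; its exit code is ignored on purpose, since findings
# should be reported rather than fail the whole nightly build.
result = subprocess.run(
    ["pylint", "src/"],                  # placeholder analysis tool and source path
    capture_output=True, text=True,
)
Path("reports").mkdir(exist_ok=True)
Path("reports/static_analysis.txt").write_text(result.stdout)
```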