What do you call a modification that is made in an environment that is not DEV? - terminology

In application life-cycle management, it's common to have some environments. For example:
DEV -> Staging -> Production
Normally, you would develop in the DEV environment and promote your changes to Staging and then Production.
But it's also possible to modify the Production (PRD) environment directly (to quickly fix a bug, for instance).
What do you call this procedure (modifying your code in an environment that is not the DEV environment)?
I thought it was called "hotfix" but I see no related search results in Google.

The relevant entity here is not the Environment but, in my opinion, the Branch within your SCM.
With this in mind, you are absolutely right: in my experience it was always a Hotfix branch. On planet TFS, where I currently reside, this is described in various branching guidelines, including this one, which is considered to be among the best (if not THE best). I had similar experiences on a UNIX/ClearCase planet, again with Hotfix branches - there they were named "MaintenanceRelease" branches. Those contained one or more hotfixes, and occasionally a highly anticipated feature could be merged into them as well. I wouldn't expect to see a "Hotfix" environment in any company. Hotfixes address whatever crisis a customer has run into, which is by definition pretty vague, so having a dedicated environment for them is probably a utopia. In one case there was a "BLS" lab ("Back Level Support") that Support people used to reproduce customer scenarios; hotfixes provided by Development were deployed in that lab before release. That is, to some extent, a "Hotfix" environment - still, be aware that the installation cost millions.
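In git terms (the question is tool-agnostic, and TFS and ClearCase have equivalent branch operations), the usual flow is to cut the hotfix branch from whatever is running in Production, fix, release, and merge back; a minimal sketch with placeholder branch and tag names:

    # Branch from the tag currently deployed in Production
    git checkout -b hotfix/1.4.1 v1.4.0

    # ...make and commit the fix...
    git commit -am "Fix crash in nightly export"

    # Tag the hotfix, deploy the build from this branch, then merge the fix
    # back into the main line so it is not lost in the next regular release
    git tag v1.4.1
    git checkout main
    git merge --no-ff hotfix/1.4.1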

Why shouldn't developers be able to deploy directly to production?

I have always worked in environments where developers had to go through a process of working with Network Operations (server guys) to deploy stuff from development/test to production.
I recently started a job where developers can go directly from their machines to production with no middle man. Are there reasons that developers should not be able to do this?
What I have so far:
You are more careful about deploying something if it has to go through someone else. As a young programmer it sometimes took me several tries to get a working deployment out. Since the NetOps guys were pissed, I learned to make sure it was right the first time.
There is some accountability if something goes wrong and more than one person knows what's going on. Boss: "The site just went down!", Everyone else in the office: "Abe just did a deploy, it's his fault!"
When someone's sole responsibility is the production server, it's less likely that they will do something stupid.
There will (hopefully) be more information on the deploy and rollback capabilities: logs, backups that can be rolled back to, automated features...
Are there any other good reasons? Am I just being a control freak?
A few that come to mind (there may be overlap with yours):
A developer can tweak something until it works. This shouldn't be done in Production. If that developer is hit by a bus the next day, nobody will know the system. A documented and repeatable-by-someone-else deployment process helps ensure that such business knowledge is captured.
As a developer, I don't want that kind of access. If something fails, it's far less likely that it's my fault. I'll come in and help, we're all on the same team after all, but I like to know that someone else had to review my work and agree with it. (The same is true of my DB delta scripts. I want a more qualified DBA whose sole responsibility is to the database to review my work. If all they do is run what I tell them when I tell them, then that's essentially no different than giving me direct access. It's just slower.)
Developers often make quick fixes to simple things. We all know that it's often not as cut and dried as the developer thought, and that the quick fix either didn't fix the problem or broke something else. No matter how small the change/fix, there should still be a QA process. (For some shops where uptime isn't so critical, that QA process can actually be Production, but that's a rare exception. It shouldn't be that way, from a purist perspective, but as with anything it's a risk/reward ratio. If the risk is low (as in a Production failure doesn't incur much penalty, if any at all) and the cost of QA is comparatively high, then it's fine.)
Regulatory needs. PCI compliance, etc. often mandates clear separation of tasks between jobs. This is often misconstrued as "developers can't access production" and treated very black and white. But it does mean that developers should be able to access only what they need in order to do their job. If you don't need production data, and that data is sensitive, you shouldn't have it.
Because many developers are congenitally incapable of thinking they make mistakes - the same reason good dev groups have dedicated test teams.
"I'll just make this small config change in Prod, that won't break anything."
OOP developers should understand separation of responsibilities, I would have thought. You break it, you own it. Avoid the problem with a separate Ops team.
In some environments (e.g. finance) large sums of money (and sometimes the law) are also at risk from ill-advised or ill-intentioned changes in an uncontrolled Production environment.
In small teams, I can see a case for developers having production access, but that has to be controlled and auditable so that you ALWAYS know what is in Production. In that sense, it does not matter who pushes the deploy and rollback buttons, but that they exist and are the only way to change the Production environment.
I for one do not want that to be a large part of my job. You may find that your own devs agree once they see how much more time they can spend coding.
The main reason is because allowing a dev to deploy directly to production cuts out the QA process. Which introduces risk. Which management types don't like.
So another bullet point for you is massive increase in RISK.
Security - By having one gatekeeper (with a backup) only one person is accessing production data and servers. This means fewer access points.
Ease of management - You don't need to create as many accounts in your production environment to keep track of - or, even worse, share one account among many (assuming your prod environment is separate from your dev environment).
Practice makes perfect - one person who builds a routine and sticks to it has less chance of screwing up.
If there is a way to make a mistake, it will eventually happen - the law of large numbers. It is unreasonable to put the burden on developers to be perfect if you also want them to be productive.
Change management
Accountability
QA
One button builds / deployment
Unit tests
Code stability - suppose you push right when someone else has just checked in code?
Now, the amount of overhead / difficulty to change should be directly related to your up time requirements. Restated: the more costly downtime is, the more you should invest in preventing downtime.
By deploying directly to the production environment, there is a good chance that no QA was involved (i.e. nothing was tested).
Because there needs to be ONE person you can go to who knows what's deployed on the site. If every developer can deploy, you don't know who deployed what when somebody notices something wrong.
SOC-1 compliance may (unnecessarily) suggest or require that the developer be a separate person than the one deploying to production so that controls are in place to prevent malicious intent.

How does software configuration management help improve project management?

Which best practices in software configuration management help improve project management?
It mitigates a whole bunch of project risks, including:
The risk of making a change which is found to be incorrect: SCM software allows you to see the change and roll back
The risk that you could lose all your source code (with a distributed SCM this is much less likely, since everyone has a copy on their machine)
The risk that two people could make incompatible changes: good SCM will allow you to merge the two and get the best of both worlds.
Also, these days SCM is so easy and cheap to set up that embarking on a software project without it is madness.
Assuming you're really focused on best practices, I can outline a couple of possibilities.
Using the best (SCM) tools available. While this might depend on your specific goals and constraints, Mercurial and Git are hard to beat (distributed, excellent branch/merge capabilities, multiplatform, FOSS, really fast, flexible workflow etc.).
You can analyse the data in your source repository using a tool like PanBI (disclaimer: I wrote it). A short screencast shows off what you can learn from repository contents analysis. In brief:
general work dynamics on the codebase
breakdown per developer
daily work dynamics
types of changes to the codebase (add/remove/modify), broken down by part of the source tree
...and much more.
Connecting an SCM tool with an issue tracker can also add value. Developers place issue IDs in commit messages, e.g. "[#1455]: improved performance a bit", and the issue tracker relates the issue with the changes in the code repository. From a project management perspective, this allows you to loosely track the time spent on individual issues, project phases or complete projects. A simple commit hook refusing commits without an issue number can go a long way in ensuring data consistency. Such "measured" data can be compared to the baseline to understand what's working and what isn't.
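As a sketch, in git such a hook could be a commit-msg hook like the one below (the "[#1455]" pattern follows the example above and would need to match your tracker's convention; for real enforcement the same check belongs in a server-side hook, since client-side hooks can be skipped):

    #!/bin/sh
    # .git/hooks/commit-msg -- reject commits whose message lacks an issue ID.
    # $1 is the file holding the proposed commit message; remember to chmod +x this hook.
    if ! grep -qE '\[#[0-9]+\]' "$1"; then
        echo "Commit rejected: message must reference an issue, e.g. [#1455]" >&2
        exit 1
    fi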
Building official releases on a build server, from a tagged source version pulled from the repository, could also be considered beneficial from a project management perspective because it's a way to control quality. Building software this way detaches the build process from any dependencies or specifics of developer machine environments, provides reproducibility, and allows robust automatic/semi-automatic publishing of the build, i.e. it streamlines and shields parts of the deployment process.
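A minimal sketch of that kind of build-server step, assuming git plus Maven (the tag name, repository URL and Maven goal are placeholders, not something prescribed here):

    # Official release build: always from a tag, never from a developer workspace
    git clone --branch v2.3.0 --depth 1 https://scm.example.com/product.git release-build
    cd release-build
    mvn clean deploy     # compile, run the tests, publish the versioned artifact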
These are just some of the possibilities; it doesn't stop here.

Can Hudson branch promotion get based on project stability?

Hudson CI server displays stability "weather" which is cool. And it allows one project build to kick off based on the successful build of another. However, how can you make that secondary project dependent additionally on the stability of multiple builds of the first project?
Specifically, project "stable_deploy" needs to only kick off to promote a version to "stable" if project "integrate" with version 8.3.4.1233 has built and tested successfully at least 8 times--in a row. Until then, it's still in integration mode.
IMPORTANT: A significant caveat is that a single set of Hudson projects gets used as a "pipeline" to process each new version through to release. So a project may have built successfully 8 times in a row, but the latest version, 8.3.4.1233, may account for only the 2 most recent builds; the builds prior to that may be of an earlier version.
We're open to completely reorganizing this, but the pipeline idea seemed to greatly reduce the amount of manual project creation and deletion. Is there a better way to track versions through a release "pipeline"? In particular, we will have multiple versions in the pipeline simultaneously in the future due to fixes or patches to older versions. We don't see how to do that yet, except to create new pipeline projects for each version, which is a real hassle.
Here's some background details:
The TickZoom application has some very complete unit tests, some of which simulate real-time trading environments. On top of that, TickZoom makes elaborate use of parallelization to leverage multi-core computers. Needless to say, during development of a new version there can be stability issues during integration testing which get uncovered by running the build and auto tests repeatedly. A version which builds and tests cleanly 8 times in a row without change, and has undergone some real-world testing by users, can be deemed "stable" and promoted to the stable branch.
Our Hudson projects look like this:
test - Only for testing a build; zero user visibility.
integrate_deploy - Promotes a test project build to the integrate branch and makes it available to the public for UA testing.
integrate - Repeatedly builds the integrate branch to determine if it's stable enough to promote to the stable branch. This runs the builds and tests hourly throughout every night.
stable_deploy - Promotes an integrate project build to the stable branch and makes it public for users who want the latest and greatest.
stable - Builds the stable branch once every night. After 2 weeks of successful builds (14 builds) it can go to "release candidate".
And so on... it continues with "release candidate" and then "release".
I can see the point of demonstrating stability by having multiple successive builds succeed without error, but I'd suggest a slightly different approach to make things simpler. Rather than trying to aggregate the results of multiple builds to determine whether you promote the latest build to the stable branch, run your tests 8 times against the same build; you can either do this by adding a repeat count parameter to the tests, or just repeat the test steps multiple times in the Hudson job setup.
If the build passes cleanly every time, you could use that as a gateway to send the build to your users for "real world" testing before you promote it to the stable branch.
This has a couple of advantages: it makes the Hudson setup simpler, as per your request, and it gives you added confidence in the stability of the build because you're running the tests multiple times against the same code base, rather than against a different code base each time.
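For example, the "repeat the test steps" option can be a single shell build step in the Hudson job; a rough sketch, where build.sh and run_tests.sh are hypothetical stand-ins for whatever builds and tests your product:

    # Build once, then run the same test suite eight times against that build;
    # abort on the first failure so Hudson marks the run as broken.
    ./build.sh
    for i in 1 2 3 4 5 6 7 8; do
        echo "=== test pass $i ==="
        ./run_tests.sh || exit 1
    done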
The answer is to create a separate pipeline of jobs for each new minor version of the software.
So they'll look like this:
integrate_0.8.3
stable_0.8.3
candidate_0.8.3
release_0.8.3
We will use the Hudson API to generate the jobs for each new version with a script.
The promotion can't be totally automated because factors other than stable builds, like user-reported errors, can delay a version from moving through the pipeline.
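A rough sketch of that job generation through the Hudson remote API, copying a per-stage template job for each new version (the template job names, credentials and server URL are placeholders; the createItem copy call exists in the Hudson/Jenkins remote API):

    # create_pipeline.sh -- generate the pipeline jobs for a new minor version
    VERSION=0.8.3
    for STAGE in integrate stable candidate release; do
        curl -u user:apitoken -X POST \
            "http://hudson.example.com/createItem?name=${STAGE}_${VERSION}&mode=copy&from=${STAGE}_template"
    done

Each copied job would still need its branch or version setting pointed at the new version, either through a parameterized template or a follow-up update of its config.xml.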
I guess you either have to implement some solution outside of Hudson that produces trigger files to be used in Hudson, or extend the promotion plugin with your company-specific rules.

Project / code release strategy

Context: I work at a small software company that has traditionally done research-type work, and does not have much experience in the commercial space. We are now trying to push into the commercial world. Due to our origins in research we are used to a very rapid development cycle and very little structure in terms of maintaining proper versions of projects.
Problem: The lack of structure is now proving to be somewhat of a hindrance, as every developer has a slightly different view of the code base. A problem one developer discovers is not reproducible by another developer, and problems found in one build may disappear in the next (or worse, new problems may appear). This makes for a very frustrating experience for someone who is responsible for integrating all the projects and ensuring quality and performance standards are met - i.e. myself.
Potential solution: Personally I am convinced we need to enforce better structure via fixed version numbers and regular releases. It should be self-evident how proper versioning would help with many of our problems, but of course it is not without problems - developers need to do extra work to perform and test releases, and will no longer be able to use the latest versions of everything.
Question: To come to a point - what sorts of strategies do you recommend for ensuring the process and effort required for releases occurs as smoothly as possible? We are using git for version control, maven for our build system, and we have bug tracking and continuous integration systems running, so I believe the tools are there. I am simply unsure about what a proper release process should look like.
You have the big three in place: version control, one-click build via Maven and your continuous build server, and bug tracking. It sounds like you guys are gravitating towards Agile methodologies, and so you ought to be trying to keep the trunk version of your product in a near deliverable state at all times.
When you decide to make your first release, create a branch off of your trunk version for that release. Decide on a labelling scheme and be sure to label the branch version. For example, your first release could be 1.0.4530, where the 1 means first version, the 0 means it's the first release candidate, and the 4530 is the version control change number. You test this release branch and fix important bugs on it. After a while you issue another release candidate, say 1.1.4807. This process iterates a couple more times; eventually the release becomes good enough and you ship version 1.3.5167.
Meanwhile, your new development occurs only in the trunk version, and from time to time you'll need to merge bug fixes from the 1.x release branch back to the trunk. Later, you'll split off a 2.x branch from the trunk to repeat the process for your second release. You'll generally have several active branches (plus the trunk), with development limited to the trunk and each branch kept pristine and independent from development.
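In git terms (the question already uses git) the mechanics might look roughly like the sketch below; note that git has no sequential change number, so a CI build number or short commit hash usually fills that last slot:

    # Cut the release branch off the trunk (master) and label the first candidate
    git checkout -b release-1.x master
    git tag 1.0.4530      # 1 = first version, 0 = first candidate, 4530 = change/build number

    # Fix important bugs on the branch, labelling further candidates as you go
    git tag 1.1.4807
    # ...more fixes...
    git tag 1.3.5167      # the build you actually ship

    # From time to time, merge the branch's bug fixes back into the trunk
    git checkout master
    git merge release-1.x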
You guys will get the hang of things and your developer coordination problems will become less frequent. But these problems are nearly all going to be limited to the trunk, not the release branches.
A problem one developer discovers is not reproducible by another developer, and problems found in one build may disappear in the next (or worse, new problems may appear). This makes for a very frustrating experience for someone who is responsible for integrating all the projects and ensuring quality and performance standards are met - i.e. myself.
Potential solution: Personally I am convinced we need to enforce better structure via fixed version numbers and regular releases.
I don't think you need to have very frequent releases just to coordinate internally. You can do that through version control. Just have people talk about specific git revisions when reporting issues. Also note that you will have to coordinate any external dependencies/libraries too. Some kind of vendor branches could help with this.
It sounds like the developers need to use "test branches" and respect the "stable/production branch" a little bit more.
Sell the concept of "do your wild-west stuff in this branch, and when you are happy with the results, merge it into the boring, stable production branch"...
(or something like that)
There are books written about the general topic; an Amazon search even returns three titles specifically on "version control with git."
I think you will benefit from defining a canonical view of the code base. Call it Test. A problem is a problem if it appears in Test. If a problem does not appear in some developer's view, it is up to that developer to figure out what is the important difference; and likewise for a problem that appears in a developer's view, but not in Test.
One convention is for Test to be rebuilt from sources on a nightly basis. A more stringent convention is for Test to be rebuilt upon every update. If your team is small (five or fewer) and not dispersed over great distances or multiple time zones, a reasonable first approximation is to make Test a git workspace on a server on which your toolchain has been installed, along with some cron jobs so that this workspace is updated and rebuilt every night (usually).
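A minimal sketch of that convention, assuming the git-plus-Maven setup described in the question (paths and branch name are placeholders):

    #!/bin/sh
    # /opt/test/rebuild.sh -- refresh and rebuild the canonical "Test" workspace.
    # Run nightly from cron, e.g.:  0 2 * * * builduser /opt/test/rebuild.sh
    set -e
    cd /opt/test/workspace
    git fetch origin
    git reset --hard origin/master   # make Test match the repository exactly
    mvn clean verify                 # full build plus all tests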

Nightly Builds: Why should I do it? [closed]

Why should I do Nightly Builds?
You should do nightly builds to ensure that your codebase stays healthy.
A side effect of doing nightly builds is that it forces the team to create and maintain a fully automated build script. This helps to ensure that your build process is documented and repeatable.
Automated builds are good at finding the following problems:
Somebody checked in something that breaks stuff.
Somebody forgot to check in a necessary file or change.
Your build scripts no longer work.
Your build machine is broken.
Doing this nightly ensures that you catch such problems within 24 hours of when they occur. That is preferable to finding all the problems 24 hours before you are supposed to deliver the software.
You should also, of course, have automated unit tests that are run for each nightly build.
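Such a nightly job can be as small as a cron-driven script that checks out from scratch (a fresh checkout is what catches the forgotten-file case), builds, runs the tests and complains loudly; a sketch that assumes git, Maven and placeholder URLs and addresses:

    #!/bin/sh
    # nightly-build.sh -- run from cron, e.g.:  0 2 * * * builduser /opt/ci/nightly-build.sh
    rm -rf /tmp/nightly
    git clone https://scm.example.com/product.git /tmp/nightly || exit 1
    cd /tmp/nightly
    if ! mvn clean test > build.log 2>&1; then
        mail -s "Nightly build FAILED" dev-team@example.com < build.log
        exit 1
    fi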
I've personally found continuous integration to be better than nightly builds:
http://en.wikipedia.org/wiki/Continuous_integration
I even use it on one man projects, it's amazing how fast you can expose issues and take care of them right there.
I've been doing build engineering (among other things) for 16 years. I am a strong believer in build-early, build-often, continuous integration. So the first thing I do with a project is establish how it will be built (Java: Ant or Maven; .NET: NAnt or MSBuild) and how it will be managed (Subversion or some other version control). Then I'll add Continuous Integration (CruiseControl or CruiseControl.NET) depending upon the platform, then let the other developers loose.
As the project grows, and the need for reports and documentation grows, eventually the builds will take longer to run. At that point I'll split the builds into continuous builds (run on check-in) that only compile and run unit tests, and daily builds that build everything, run all the reports, and build any generated documentation. I may also add a delivery build that tags the repository and does any additional packaging for a customer delivery. I'll use fine-grained build targets to manage the details, so that any developer can build any part of the system -- the Continuous Integration server uses the exact same build steps as any developer. Most importantly, we never deliver a build for testing or a customer that wasn't built using the build server.
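In practice that split often just means the CI jobs invoke different target sets of the same build script; a sketch with illustrative Ant target names and a placeholder Subversion URL (the actual targets depend on your build files):

    # Continuous build job (triggered on every check-in): compile and unit tests only
    ant clean compile unit-test

    # Daily build job (scheduled): everything, plus reports and generated documentation
    ant clean compile unit-test integration-test reports docs

    # Delivery build: tag the repository, then do the customer packaging
    svn copy http://svn.example.com/repo/trunk \
             http://svn.example.com/repo/tags/delivery-1.0 -m "Tag customer delivery 1.0"
    ant clean dist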
That's what I do -- here's why I do it (and why you should too):
Suppose you have a typical application, with multiple projects and several developers. While the developers may start with a common, consistent development environment (same OS, same patches, same tools, same compilers), over the course of time their environments will diverge. Some developers will religiously apply all security patches and upgrades, others won't. Some developers will add new (maybe better) tools, others won't. Some will remember to update their complete workspace before building; others will only update the part of the project they're developing. Some developers will add source code and data files to the project, but forget to add them to source control. Others will write unit tests that depend upon specific quirks of their environment. As a consequence, you'll quickly see the ever-popular "Well, it builds/works on my machine" excuses.
By having a separate, stable, consistent, known-good server for building your application, you'll easily discover these sorts of problems, and by running builds from every commit, you'll be able to pinpoint when a problem crept into the system. Even more importantly, because you use a separate server for building and packaging your application, it will always package everything the same way, every time. There is nothing worse than having a developer ship a custom build to a customer, have it work, and then have no idea how to reproduce the customizations.
When I saw this question, I first searched for Joel Spolsky's answer. I was a bit disappointed not to find it, so I'll add it here.
I hope everyone is aware of the Joel Test on Careers.
From his blog on The Joel Test: 12 Steps to Better Code
3. Do you make daily builds?
When you're using source control, sometimes one programmer accidentally checks in something that breaks the build. For example, they've added a new source file, and everything compiles fine on their machine, but they forgot to add the source file to the code repository. So they lock their machine and go home, oblivious and happy. But nobody else can work, so they have to go home too, unhappy.
Breaking the build is so bad (and so common) that it helps to make daily builds, to insure that no breakage goes unnoticed. On large teams, one good way to insure that breakages are fixed right away is to do the daily build every afternoon at, say, lunchtime. Everyone does as many checkins as possible before lunch. When they come back, the build is done. If it worked, great! Everybody checks out the latest version of the source and goes on working. If the build failed, you fix it, but everybody can keep on working with the pre-build, unbroken version of the source.
On the Excel team we had a rule that whoever broke the build, as their "punishment", was responsible for babysitting the builds until someone else broke it. This was a good incentive not to break the build, and a good way to rotate everyone through the build process so that everyone learned how it worked.
Though I haven't had the opportunity to do daily builds, I'm a great fan of them.
Still not convinced? Check out the brief write-up in Daily Builds Are Your Friend!
You don't, actually; what you should be wanting is Continuous Integration and automated testing (which is a step further than nightly builds).
If you are in any doubt you should read this article by Martin Fowler about Continuous Integration.
To summarize, you want to build and test as early and often as possible to spot errors immediately so they can be fixed while what you were trying to achieve when you caused them is still fresh in your mind.
I'd actually recommend doing builds every time you check in. In other words, I'd recommend setting up a Continuous Integration system.
The advantages of such a system and other details can be found in Fowler's article and on the Wikipedia entry among other places.
In my personal experience, it's a matter of Quality Control: every time code (or tests, which can be seen as a form of requirements) is modified, bugs might creep in. To ensure quality you should make a fresh build of the product as it would be shipped and perform all the tests available. The more often this is done, the less likely bugs will be allowed to form a colony. Therefore, daily (nightly) or continuous cycles are preferred.
In addition, whether you restrict access to your project to developers or open it to a larger group of users, a nightly build enables everyone to be on the 'latest version', minimizing the pain of merging their own contributions back into the code.
You want to do builds on a regular schedule in order to catch problems with integration of code between developers. The reason you want to do this nightly, as opposed to weekly or on some longer schedule, is that the longer you wait to discover these kinds of problems, the more difficult it will be to resolve them. The practice of doing a build on every check in (Continuous Integration) is just taking the nightly build process to a logical extreme.
The side benefit of having a repeatable build process is important in the long run as well. If you work on a team where there are multiple projects going on, then at some point you will need to be able to easily recreate an old build, perhaps for creating a patch. :(
The more you can automate the build process, the more time you will save for each subsequent build. It also takes the build process itself off of the critical path of delivering the final product, which should make your manager happy. :)
It also depends on the size and structure of the team(s) working on your project. If there are different teams relying on each others API, it may make a lot of sense to have nightly builds for frequent integration. If you're hacking away with only one or two team mates it may or may not be worth it.
Depending on the complexity of your product, continuous integration may or may not be able to run a full test suite.
Imagine Cisco testing a router with the literally 1000s of different setups to test. To run a full test suite on some products takes time. Sometimes weeks. So you need builds for different purposes. A nightly build can be the basis for a more thorough test suite.
I think they are very important especially on projects with more than 1 person. The team needs to know ASAP if someone:
checks in a bad file
doesn't check in a file
...
Any build automation is better than no build automation :-)
Personally, I prefer daily builds - that way if the build doesn't work then everyone is around to get it fixed.
In fact, if at all possible then Continuous Integration builds are the way to go (i.e. a build on every check-in) as that minimizes the amount of change between a build and so makes it easy to tell who broke the build and also easy to fix the build.
Well ... I guess it depends a lot on your project, of course. If it's just your hobby project, with no releases, no dependencies, and no one but you submitting code, it might be overkill.
If, on the other hand, there's a team of developers all submitting code, automatic nightly builds will help you ensure the quality of the code in the repository. If someone does something that "breaks the build" for all others, it will quickly be noticed. It is possible to break the build without noticing, for instance by forgetting to add a new file to the repository, and nightly builds in a centralized location will detect these quite quickly.
There are of course other possible benefits, I'm sure others will supply them. :)
Nightly builds are only necessary for significantly large projects (when it takes too long to build often throughout the day). If you have a small project that does not take long to build, you can build it as you get functional pieces of code done, so that you know you did not mess anything up in the process. However, with larger projects this is not possible, so it is important to build the project regularly just so you know that everything is still in working order.
There are several reasons; some will be more applicable than others:
If your project is being worked on by two or more people
It's a good way to grab the latest version of code that you aren't working on
A nightly build provides a slice in time of the current state of the code
A nightly build will give you a stable build if you need to send code to people
Nightly builds aren't always necessary - I think they're only really useful on big projects. But if you're on a big project, a nightly build is a good way of checking that everything is working - you can run all your tests (unit tests, integration tests), build all your code - in short, verify that nothing is broken in your project.
If you've got a smaller project your build and test times will be shorter so you can probably afford to do more regular builds.
Nightly builds are ideal for performing static code analysis (see qalab and the projects it collects stats from, if you are in the Java world). Unfortunately, this is something that's rarely done.