I'm leaning toward using Mercurial, coming from Subversion, and I'd like to maintain a centralized workflow like I had with Subversion. Here is what I am thinking:
stable (clone on server)
    default (branch)
development (clone on server)
    default (branch)
    bugs (branch)
        developer1 (clone on local machine)
        developer2 (clone on local machine)
        developer3 (clone on local machine)
    feature1 (branch)
        developer3 (clone on local machine)
    feature2 (branch)
        developer1 (clone on local machine)
        developer2 (clone on local machine)
As far as branches vs. clones is concerned, does this workflow make sense? Do I have things straight?
Also, the 'stable' clone IS the release. Does it make sense for the 'default' branch to be the release branch, i.e. the one that all other branches are ultimately merged into?
In Mercurial, branch names (created with hg branch inside the same repository) are currently permanent. Once introduced, the only way to remove them from your branch namespace is to rewrite history. That's why I would only create real branches if either the project lifetime is short (only a few feature branches) or the branches are generic enough to remain current for many years (like "stable", "bugfix", etc.). As far as I know, branches are (like everything else) created locally, outside your control, so if anyone decides to use a branch, that branch will also show up in your main repository after a pull/push.
The effect is that, in your structure, after a pull from development into stable you would also get the feature1 and feature2 branch names in stable, where they are quite useless. Pulling stable into development (because you fixed bugs on a stable version) will likewise carry the bugs branch into development. You can try this yourself: create a stable repository, clone it to development, branch to feature1 in development and pull development into stable (committing some changes between these steps); the branch names will show up in stable, with inactive heads, even though you merged them.
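For example, a throwaway reproduction of those steps might look like the following (file names and commit messages are made up, and your hashes will of course differ):
hg init stable
cd stable
echo base > base.txt
hg add base.txt
hg commit -m "initial commit on default"
cd ..
hg clone stable development
cd development
hg branch bugs
echo fix > fix.txt; hg add fix.txt; hg commit -m "commit on bugs"
hg branch feature1
echo f1 > f1.txt; hg add f1.txt; hg commit -m "commit on feature1"
hg branch feature2
echo f2 > f2.txt; hg add f2.txt; hg commit -m "commit on feature2"
hg update default
hg merge feature1
hg commit -m "merge feature1 into default"
cd ../stable
hg pull ../development    # brings over every branch name, not just default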
stable $ hg branches
default                        4:1c3cd0d1a523
feature2                       3:82879465c5f3
feature1                       2:5d7480426f21 (inactive)
bugs                           1:116860d2d85e (inactive)
I remember that Git is able to delete branches but Mercurial has some development going on to catch up on that topic; I'm not sure this is still up-to-date, so please correct me if I'm wrong.
[Edit] According to Pruning Dead Branches in the Mercurial wiki there are 4 ways to "remove" a branch. The only way to really permanently remove a branch name and not just to close (deactivate) it is by replacing the repository with a cleaned history. [/Edit]
From what I have heard and seen, it's more common to create clones instead of branches when using Mercurial. I remember having the same thoughts as you when I switched from SVN to Mercurial last year. The usual approach differs from centralized version control because branching can happen at any time without central control over it, so clones are the preferred way to "branch": they don't pollute the branch namespace and they keep developments separated and flexible. (You always retrieve the full repository, including all branches, when you pull or clone; and if you just want to branch to test something, you would have to choose a unique, permanent branch name that will show up for everyone in your project.) Although this approach seems like a waste of disk space, and you need to keep track of where your clones live (usually inside some project folder on your user account and inside your IDE), it turns out to be a much more flexible and practical way of handling branches. (It reads more complicated than it feels when you actually use it.)
Having done a number of smaller projects using Mercurial, this is what has worked for our company so far (with only a few active developers per project):
On the server:
projectname-main
development is being pushed to and pulled from there; this is the "authoritative" development "branch" to keep a team's repositories in sync
projectname-stable
if a version gets released/deployed, this is where -main gets pushed to; only bugfixes are done in -stable and then pulled back into -main. Following the advice from the Mercurial Guide by Bryan O'Sullivan: bugfixes made to versions considered more stable (e.g. previous releases) can usually be pulled back into the development branch, but the development branch can contain newer, unstable features that should not be pulled into a maintenance branch unless a release (or something similar) happens.
Locally on developer machines:
projectname-main is cloned once, being worked on and synchronized by doing a pull (+merge) and push back to the server regularly.
If a feature "branch" is needed, we clone -main (either locally or from the server) and name the clone "projectname-featuredescription". For backup purposes or centralized sharing we also create a clone on the server side and push there. When the feature is ready for -main, we pull -featuredescription into our local -main, merge the changes, pull -main from the server, merge again, and push -main back to the server. If changes happen on -main while we work on -featuredescription, we can easily pull from -main and merge them (without pushing back to -main yet, because we don't want the feature there yet).
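A rough sketch of that feature workflow; the repository names and the server URL are made up, and the merge steps are only needed when a pull actually creates a second head:
hg clone https://server/hg/projectname-main projectname-featuredescription
cd projectname-featuredescription
# ...work and commit as usual; optionally push to a serverside -featuredescription clone for backup/sharing...

# when the feature is ready for -main:
cd ../projectname-main
hg pull ../projectname-featuredescription
hg merge
hg commit -m "merge featuredescription into main"
hg pull https://server/hg/projectname-main    # catch up with the team's changes
hg merge
hg commit -m "merge with upstream main"
hg push https://server/hg/projectname-main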
The downside compared with real branches is that you can't easily recognize where changesets came from once they have been merged with one of their parents. But that hasn't been a problem for us yet (the commit messages were informative enough, or the separation of feature branches simply stopped mattering once they had been merged back into their parent repositories).
Thinking about bigger projects, I came up with the following scheme, which should work similarly to centralized version control (but which I haven't used yet). It relies on restricted write access to the server-side "authoritative" repositories, so only privileged developers (project leads) are allowed to merge and push there. Additionally, a CI server acts as a gatekeeper for another repository, -main-tested, which is a subset of -main containing only CI-approved changesets (with a slight delay).
On server:
projectname-main
development; only a few people have write access, and changes going into -main need to be pulled in by them; they control which feature branches get merged
projectname-main-tested
development; no one should write here unless something went wrong, because "tested" means a Continuous Integration system succeeded in building -main and pushed that revision into -main-tested, so the code in here is verified and should not break compilation or tests
projectname-stable
projectname-stable-tested
same strategy for a stable "branch"; as before, we push -main to -stable on releases and work on bugfixes there; -stable-tested is the CI-approved subset
And of course there need to be multiple repositories somewhere that teams/developers can actually push their daily work to, since -main now only takes authoritative changes. (They can of course work entirely locally, but they have to sync somewhere unless they want to peer with each other using hg serve, set up their own servers, or take care of backups themselves.)
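One way to wire up that gatekeeper is a changegroup hook on the server-side -main repository that triggers a build and, only on success, pushes the tested revision onward. A minimal sketch, assuming a hypothetical build script and a sibling repository path:
# in the server-side projectname-main repository's .hg/hgrc
[hooks]
# runs after each push into -main; the (hypothetical) script builds/tests the new
# changesets and, on success, runs: hg push -r <tested rev> ../projectname-main-tested
changegroup.ci = /usr/local/bin/ci-build-and-promote.sh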
Another option to reduce the number of commits and repositories/branches would be to use the mq extension to work with a patch queue. However, I find using clones or branches easier, at least for small projects.
What you are doing here is establishing a workflow of merges that should be followed repository (branch) after repository.
Regarding that workflow, I would move the bugs branch/repo outside development, because it is usually a branch made after the stable (i.e. released into production) branch, in which you isolate some bug fixes:
stable (clone on server)
    default (branch)
    bugs (branch)
        developer1 (clone on local machine)
        developer2 (clone on local machine)
        developer3 (clone on local machine)
development (clone on server)
    default (branch)
    feature1 (branch)
        developer3 (clone on local machine)
    feature2 (branch)
        developer1 (clone on local machine)
        developer2 (clone on local machine)
Note that not all bug fixes will end up in the development branch: some of those fixes are tailored only for the current release, while current development may already have made them obsolete.
Does it make sense for the 'default' branch to be the release branch, i.e. the one that all other branches are ultimately merged into?
I would also use the first "default" branch (right beneath the stable one) as a consolidation branch, because not every feature will end up in it: some of the currently developed features are too complex to be ready in time for the next release.
Much may depend on the development workflow in your team. In the last (small) team I worked in (we used svn, though), a separate bugs branch eventually became redundant; bugs were fixed in the branch they belonged to. We also didn't use a separate stable branch, but rather stabilized the development branch for the release.
We have a SOLUTION folder (a Mercurial repository) in which we have a PROJECT folder that is also a Mercurial repository.
So, two repositories: one is the root (solution) folder and the other is a subfolder of the root folder (the project). (Yes, strange, but that's how it is...)
Everything worked, but one day someone somehow included the SOLUTION branch in the PROJECT repository... So all the history from the Solution branch ended up in parallel with the Project branch in the PROJECT repository...
Now there is a bit of a mess in the PROJECT repository... It needs to be cleaned up...
Locally it worked by running hg strip XXS (where XXS was the revision number of the very first node from the freshly added Solution branch in the Project repository).
But it seems there is no strip equivalent on the server?!
Every time we pull incoming changes into the Project repository, the "Solution" branch will be re-imported...
Is there a way to manage it on the server side?
Of course the same solution would also work on the server; you just need login access to the server itself to execute the same history operation there. But with the default setup (a publishing server), a push will never remove changesets that are already present on the remote side; when you history-edit your local repository, not all of the changes propagate: only additions to the graph do, never deletions.
If such changes are expected to be pushed to the remote server, and this is a regular thing, you might want to look into the use of phases and how to set up a non-publishing server, i.e. a server with mutable history: Phases#Publishing_Repository.
Mind that such a workflow also means that every single one of the people with push privilege has to change their default phase to 'draft' instead of 'public' - at least for that project.
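A minimal sketch of that setup, assuming you can edit the hgrc files on both sides (the option names are real; the file locations are the usual defaults):
# on the server, in the repository's .hg/hgrc: make it non-publishing
[phases]
publish = False

# on each developer's machine, e.g. in ~/.hgrc: keep new commits in the draft phase
[phases]
new-commit = draft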
Kill the server repo, start a fresh one, then from local:
hg push --rev XXR
where XXR is the last rev you want to keep.
We have a dedicated issue tracking (Redmine) machine, which has a Mercurial repository (call it "Redmine repository"). Redmine is set up to use that repository, and as far as I understand, Redmine never makes any changes to that repository. All developers (eventually) push their changes to that repository.
We also have a dedicated production machine, which can execute the code, but is not used to make any changes to the code.
We have two choices:
Set up another Mercurial repository on the production machine (call it "production repository"). When a new production release is approved, pull the changes from the Redmine repository to the production repository, and then update the local working directory to the appropriate revision from the production repository.
Reuse the existing Redmine repository on the production machine, designating it as a local repository for the Mercurial installation there (the Redmine repository is on a shared drive that can easily be mounted on the production machine). Whenever a new production release is approved, update the local working directory to the appropriate revision from the Redmine repository.
With option #2, we get rid of an extra "pull" step (from Redmine repository to production repository), which slightly simplifies the process. But I'm not sure if it's ok that a single repository is used by two Mercurial installations as if it's local.
Any comments on this choice (or any other aspect of this setup) are appreciated!
It sounds like a bad idea. Mercurial does a really good job of keeping reads and writes to its repository atomic, but it has a harder time doing that when the repository is on a shared drive -- even if only one local repository uses it -- because network shares (especially on Windows) don't always provide the atomicity they claim to.
Ideally both the working directory and the repository are local, and you use push/pull to get changesets to/from a network share. If that's not possible, then having a single local application use the repository on the remote file system is the best idea.
If you positively want to try having two clones use the same underlying repository, check out the ShareExtension, which ships with Mercurial but is intended for advanced users only.
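If you do go that route, a minimal sketch of using the share extension (paths are illustrative):
# enable the bundled extension, e.g. in ~/.hgrc
[extensions]
share =

# create a second working copy that shares the same underlying repository history
hg share /path/to/existing/clone /path/to/second/working-copy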
Instead of trying to piggy-back, why not just put a hook like this in your redmine repository:
[hooks]
changegroup = hg push //production/clone
That will automatically push changesets that arrive in redmine to production.
I am developing a web database that is already in use for about a dozen separate installations, most of which I also manage. Each installation has a fair bit of local configuration and customization. Having just switched to mercurial from svn, I would like to take advantage of its distributed nature to keep track of local modifications. I have set up each installed server as its own repo (and configured apache not to serve the .hg directories).
My difficulty is that the development tree also contains local configuration, and I want to avoid placing every bit of it in an unversioned config file. So, how do I set things up to avoid propagating local configuration to the master repo and to the installed copies?
Example: I have a long config.ini file that should be versioned and distributed. The "clean" version contains placeholders for the database connection parameters, and I don't want the development server's passwords to end up in the repositories for the installed copies. But now and then I'll make changes (e.g., new defaults) that I do need to propagate. There are several files in a similar situation.
The best I could work out so far involves installing mq and turning the local modifications into a patch (two patches, actually, with logically separate changesets). Every time I want to commit a regular changeset to the local repo, I need to pop all patches, commit the modifications, and re-apply the patches. When I'm ready to push to the master repo, I must again pop the patches, push, and re-apply them. This is all convoluted and error-prone.
The only other alternative I can see is to forget about push and only propagate changesets as patches, which seems like an even worse solution. Can someone suggest a better set-up? I can't imagine that this is such an unusual configuration, but I haven't found anything about it.
Edit: After following up on the suggestions here, I'm coming to the conclusion that named branches plus rebase provide a simple and workable solution. I've added a description in the form of my own answer. Please take a look.
From your comments, it looks like you are already familiar with the best practice for dealing with this: version a configuration template, and keep the actual configuration unversioned.
But since you aren't happy with that solution, here is another one you can try:
Mercurial 2.1 introduced the concept of Phases. A phase is changeset metadata marking it as "secret", "draft" or "public". Normally this metadata is used and manipulated automatically by Mercurial and its extensions, without the user needing to be aware of it.
However, if you made a changeset 1234 which you never want to push to other repositories, you can enforce this by manually marking it as secret like this:
hg phase --force --secret -r 1234
If you then try to push to another repository, it will be ignored with this warning:
pushing to http://example.com/some/other/repository
searching for changes
no changes found (ignored 1 secret changesets)
This solution allows you to
version the local configuration changes
prevent those changes from being pushed accidentally
merge your local changes with other changes which you pull in
The big downside is of course that you cannot push changes which you made on top of this secret changeset (because that would push the secret changeset along). You'll have to rebase any such changes before you can push them.
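For example, assuming the secret changeset is 1234 and you later made changeset 1300 on top of it that you do want to publish, a rebase along these lines should work (revision numbers are hypothetical, the rebase extension must be enabled, and you may have to resolve conflicts if 1300 depended on the secret change):
hg rebase -r 1300 -d "max(public())"   # move the changeset off its secret parent
hg push -r tip                         # the rebased copy is draft, so it can be pushed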
If the problem with a versioned template and an unversioned local copy is that changes to the template don't make it into the local copies, how about modifying your app to use an unversioned localconfig.ini and fall back to a versioned config.ini for missing parameters? This way, new default parameters can be added to config.ini and still propagate into your app.
Having followed up on the suggestions here, I came to the conclusion that named branches plus rebase provide a simple and reliable solution. I've been using the following method for some time now and it works very well. Basically, the history around the local changes is separated into named branches which can be easily rearranged with rebase.
I use a branch named local for configuration information. When all my repos support Phases, I'll mark the local branch secret; but the method works without that. local depends on default, but default does not depend on local, so default can be pushed independently (with hg push -r default). Here's how it works:
Suppose the main line of development is in the default branch. (You could have more branches; this is for concreteness). There is a master (stable) repo that does not contain passwords etc.:
---o--o--o (default)
In each deployed (non-development) clone, I create a branch local and commit all local state to it.
...o--o--o (default)
          \
           L--L (local)
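Setting this up in a freshly deployed clone might look roughly like the following (file name and commit message are illustrative):
hg update default
hg branch local
# edit config.ini etc. with this installation's real settings and passwords
hg commit -m "local configuration for this deployment"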
Updates from upstream will always be in default. Whenever I pull updates, I merge them into local (n is a sequence of new updates):
...o--o--o--n--n (default)
          \      \
           L--L--N (local)
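A sketch of that update cycle (assuming the default pull path points at the master repo):
hg pull                    # new upstream changesets land on default
hg update local
hg merge default           # fold the updates into the configuration branch
hg commit -m "merge upstream default into local"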
The local branch tracks the evolution of default, and I can still return to old configurations if something goes wrong.
On the development server, I start with the same set-up: a local branch with config settings as above. This will never be pushed. But at the tip of local I create a third branch, dev. This is where new development happens.
...o--o (default)
       \
        L--L (local)
            \
             d--d--d (dev)
When I am ready to publish some features to the main repository, I first rebase the entire dev branch onto the tip of default:
hg rebase --source "min(branch('dev'))" --dest default --detach
The previous tree becomes:
...o--o--d--d--d (default)
       \
        L--L (local)
The rebased changesets now belong to branch default. (With feature branches, add --keepbranches to the rebase command to retain the branch name). The new features no longer have any ancestors in local, and I can publish them with push -r default without dragging along the local revisions. (Never merge from local into default; only the other way around). If you forget to say -r default when pushing, no problem: Your push gets rejected since it would add a new head.
On the development server, I merge the rebased revs into local as if I'd just pulled them:
...o--o--d--d--d (default)
       \         \
        L--L------N (local)
I can now create a new dev branch on top of local, and continue development.
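Concretely, starting the next round of development might look like this (branch names follow the scheme above; the commit message is illustrative):
hg update local
hg branch dev
# ...work on the new features...
hg commit -m "start new dev branch on top of local"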
This has the benefits that I can develop on a version-controlled, configured setup; that I don't need to mess with patches; that previous configuration stages remain in the history (if my webserver stops working after an update, I can update back to a configured version); and that I only rebase once, when I'm ready to publish changes. The rebasing and subsequent merge might lead to conflicts if a revision conflicts with local configuration changes; but if that's going to happen, it's better if they occur when merge facilities can help resolve them.
1 Mercurial has (as a follow-up to the comments) selective (string-based) commits - see the Record Extension
2 Local changes inside versioned public files can easily be handled with the MQ Extension (I do it for site configs all the time). Your headache with MQ
Every time I want to commit a regular changeset to the local repo, I
need to pop all patches, commit the modifications, and re-apply the
patches. When I'm ready to push to the master repo, I must again pop
the patches, push, and re-apply them.
is the result of an unpolished workflow and (some) misinterpretation. If you want to commit without the MQ patches applied, don't do it by hand: add an alias for commit which does qpop --all + commit, and use only that new command (see the sketch after this list). And when you push, you don't need to worry about the MQ state - you push changesets from the repository, not the working-copy state. The local repo can also be protected, without an alias, by a pre-commit hook that checks content.
3 You can try the LocalBranches extension, where your local changes are stored inside local branches (and you merge branches on changes) - I found this way more troublesome compared to MQ
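As a sketch of the commit alias mentioned in point 2 (the alias name is made up; it assumes a Unix-like shell, hg on the PATH, the MQ extension enabled, and patches that pop and re-apply cleanly):
[alias]
# pop all MQ patches, commit the remaining working-copy changes, then re-apply the patches
nqcommit = !hg qpop --all && hg commit $@ && hg qpush --all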
I have created a repository on https://bitbucket.org/ and used TortoiseHg to clone it to a folder on my local machine. I am able to add and commit files, but I find that they never get updated on the server at Bitbucket. By some fiddling, I found that there is this sync option. What I don't get is why I have to press Sync: if I meant to commit, then it should commit.
Where is it being stored if it's not synced immediately with the remote server?
Note: I am trying out TortoiseHg and Mercurial while having ample experience with Subversion.
When you commit, you commit to your local repository - i.e. to the .hg directory in the root of your project. To sync with the remote repository you need to explicitly push your changes. This is how DVCSs work - it's not the same model as SVN.
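A minimal sketch of the two steps (the commit message is illustrative; push defaults to the repository you cloned from, here Bitbucket):
hg commit -m "describe the change"   # records the changeset locally, in .hg
hg push                              # publishes it to the remote repository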
A key feature of a distributed version control system is that you can make local commits. This means that the new commit does not leave your machine when you press "Commit", it is just stored locally.
This has some direct consequences:
you can work while you are offline, e.g., in a train or in a plane
commits are very fast: in Mercurial, creating a new commit involving n files (normally called a changeset) means appending a few bytes to n + 2 files.
you can change your mind: since you have not shared the new changeset with anybody, you can delete it from your local machine without problem
It also has some indirect consequences:
because commits are fast, people tend to make many more commits. The commits are typically more fine-grained than what you see in a centralized system and this makes it easier to review the changes
because commits are local, it often happens that people do concurrent work. This happens when both you and I make one or more commits based on the same initial version:
              [a] --- [b] --- [c] <-- you
             /
... [x] --- [y]
             \
              [r] --- [s] <-- me
The history has then effectively forked since we both started work based on changeset y. For this to work, we must be able to merge the two forks. Because this happens all the time, you'll find that Mercurial has very robust support for merging.
So, by decoupling the creation of a commit from the publishing of a commit, you gain some significant advantages.
It's my first time using a DVCS and also as a lone developer, the first time that I've actually used branches, so maybe I'm missing something here.
I have a remote repository from which I pulled the files and started working. Changes were pushed to the remote repository and of course this simple scenario works fine.
Now that my web application has some stable features, I'd like to start deploying it and so I cloned the remote repository to a new branches/stable directory outside of my working directory for the default branch and used:
hg branch stable
to create a new named branch. I created a bunch of deployment scripts that are needed only by the stable branch and I committed them as needed. Again this worked fine.
Now when I went back to my initial working directory to work on some new features, I found out that Mercurial insists on only ONE head being in the remote repository. In other words, I'd have to merge the two branches (default and stable), adding in the unneeded deployment scripts to my default branch in order to push to the main repository. This could get worse, if I had to make a change to a file in my stable branch in order to deploy.
How do I keep my named branches separate in Mercurial? Do I have to create two separate remote repositories to do so? In which case the named branches lose their value. Am I missing something here?
Use hg push -f to force the creation of a new remote head.
The reason push won't do it by default is that it's trying to remind you to pull and merge in case you forgot. What you don't want to happen is:
You and I check out revision 100 of named branch "X".
You commit locally and push.
I commit locally and push.
Now branch X looks like this in the remote repo:
--(100)--(101)
       \
        \---------(102)
Which head should a new developer grab if they're checking out the branch? Who knows.
After re-reading the section on named branchy development in the Mercurial book, I've concluded that, for me personally, the best practice is to have separate shared repositories, one for each branch. I was on the free account at bitbucket.org, so I was trying to force myself to use only one shared repository, which created the problem.
I've bitten the bullet and got myself a paid account so that I can keep a separate shared repository for my stable releases.
You wrote:
I found out that Mercurial insists on only ONE head being in the remote repository.
Why do you think this is the case?
From the help for hg push:
By default, push will refuse to run if it detects the result would
increase the number of remote heads. This generally indicates the
client has forgotten to pull and merge before pushing.
If you know that you are intentionally creating a new head in the remote repository, and this is desirable, use the -f flag.
I've come from git expecting the same thing. Just pushing the tip looks like it might be one approach.
hg push -r tip