So far I've resisted having a central repository that I push/pull changes to from a local repository--I just develop straight in the central repository. I'm the only developer, so there's nobody else to affect, and the central repository is local, so speed isn't an issue.
My local repository setup is quite complicated in itself: multiple clones with multiple named branches that I either pull from or revert between for selective file sharing amongst many variations. Adding the extra step of pushing to a central repository could end up confusing me, plus it's another step in the workflow that's possibly not needed.
This is my first time using a DVCS (Mercurial), and my first time developing solo with a VCS. I'm trying to get an idea if I might regret going down this path over developing in one repository and pushing to a central one.
Are there killer reasons for not doing what I'm doing?
The main killer reason for not doing what you're currently doing is that you already consider it complex, and that whatever "central repository" you add would just bring you more confusion.
One of the big purposes of a VCS, and one that should guide your choice between them, is that it helps your development flow. As soon as your VCS starts to be a global slowdown, it's a sign that there is an issue, either with the choice of VCS or with how you're using it.
I would suggest you read questions and blog posts related to vendor branches, as it seems that's what you're trying to achieve with your local set of clones.
Related
I need to collaborate on a Mercurial repository (let's call it "foo") with some people who are novices at version control in general, Mercurial in particular.
I am trying to come up with a workflow that will enable us to use Mercurial without a lot of extra effort on either their end (confusion) or my end (cleanup).
My concern is that as novices I need to expect them to make errors, and I need to allow them to do so in a controlled way, otherwise they won't use the tool at all because they're too scared. But I don't want a bad change to pollute the repository unnecessarily.
I do not expect them to be able to merge properly or to use the mq extension. This is not a matter of underestimating them; it is a realistic assessment given past experience with SVN and my own experience with Hg.
Which of the following approaches would make the most sense? Or if there's a better approach, what is it?
1. We have a repository foo-submit, read/writable by all, and a repository foo-trunk, readable by all but writable by admins. Users pull from foo-trunk and push changes to foo-submit. Cleanup: if I find a good change, I let it through as-is; if I find a bad change, I "bypass" it by merging with the previous version.
2. We have a repository foo-trunk, readable by all, writable by admins. Each user is responsible for maintaining their own clone, which is read-accessible to the rest of the team. When someone wants to push a change, they let me know and I pull it from their repository, with proper cleanup as necessary (same as in #1).
3. We have a repository foo-dev, read/writable by all, and a repository foo-trunk, readable by all but writable by admins. Users pull/push to foo-dev and work in named branches if they need to do extensive development. I am responsible for performing merges and cleanup. The foo-trunk repository is merely for having a "clean" copy that has branches whose tips are always in a good state.
Good question, and one that I've never seen a great answer to.
That said, I like option 2. This is the "Pull Request" model used by the Linux Kernel and made popular by GitHub. It allows the admins to act as gatekeepers / reviewers, only allowing good change-sets to get past them when they're happy. If they decide a developer hasn't delivered something worthy, then the pull request is rejected (with reasons). Then the developer can go away, fix up their code / repo, and submit another pull request.
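As a rough sketch of the gatekeeper's side of that flow (the developer's repo URL is illustrative), reviewing and accepting a pull request is just:

    # inspect the changesets the developer is offering
    hg incoming https://hg.example.com/dev-alice/foo

    # happy with them? pull into foo-trunk and publish
    hg pull https://hg.example.com/dev-alice/foo
    hg push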
Running a server with something like RhodeCode on it can help keep on top of pull requests. As things grow you can have lower level gatekeepers that deal with subsystems, and higher level gatekeepers that deal with the whole project.
The bit I've never quite got my head around is what should happen to change-sets that are rejected, and that the developer decides to abandon rather than fix up and try again. They could be closed, but then could possibly appear by mistake as part of a future pull request. They'd be harmless, but possibly confusing. The alternative is stripping them, but that sounds like giving people tools they'll cut themselves on.
The other 2 options you give deserve a little comment.
1 is similar to 2. You're still doing a "Pull Request" type flow, but now you have server-side branches which mirror the developers' clones. There's little difference, and this is how a RhodeCode, GitHub, or BitBucket server would let you work, except you don't have to go searching for changes; the server tells you which ones are waiting for you to look at.
3 has the problem that everyone's changes are all merged together on foo-dev before you get to them. They would start becoming inter-dependent, and cherry-picking is going to be messy. You'd probably end up grafting change-sets on to foo-trunk which means you're creating new change-sets with new hashes. When the developers pull those they'll now have the change in two places; their original foo-dev version and your grafted foo-trunk version. This doesn't sound sustainable to me.
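For illustration, that grafting step in a foo-trunk clone might look like this (revision number hypothetical):

    # cherry-pick one good changeset from foo-dev onto the clean branch
    hg update default
    hg graft -r 123
    # the graft is a brand-new changeset with a new hash, so anyone who
    # pulls it ends up with the change twice: the original and the copy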
The best way I can think of, if you don't want to use mq (that is, with the least hassle for you), is to have your devs do the following (sketched below):
create their own branch for the current feature being developed,
merge it back to the main dev branch (or graft/transplant it) when it's completed and validated,
and then close the branch.
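A minimal sketch of that cycle (the branch name feature-x is hypothetical):

    # start a named branch for the feature
    hg branch feature-x
    hg commit -m "start feature-x"

    # ...develop and commit on the branch...

    # once completed and validated, merge back into the main dev branch
    hg update default
    hg merge feature-x
    hg commit -m "merge feature-x"

    # close the feature branch so it stops cluttering 'hg branches'
    hg update feature-x
    hg commit --close-branch -m "close feature-x"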
In the long term, see about getting them to learn mq; it's not too hard to grasp.
3a - foo-dev has a protected default branch (only some admins can push/merge to this branch); users use named branches.
I'm struggling to find the Mercurial workflow that fits the way we work.
I'm currently favouring a clone per feature, but that is quite a change in mindset moving from Subversion. We'll also have issues with the expense we currently incur in setting up environments.
Using hg pull --rebase seems to give us more of a Subversion-like workflow but from reading around I'm wary of using it.
I think I understand the concepts and I can see that rewriting the history is not ideal but I can't seem to come up with any scenarios which I personally would consider unacceptable.
I'd like to know what are the 'worst' scenarios that hg pull --rebase could create either theoretical or from experience. I'd like concrete examples rather than views on whether you 'should' rewrite history. Not that I'm against people having opinions, just that there already seem to be a lot of them expressed on the internet without many examples to back them up ;)
The first thing new Mercurial converts need to learn is to get comfortable committing incomplete code. Subversion taught us that you shouldn't commit broken code. Now it's time to unlearn that habit. Committing frequently gives you a lot more flexibility in your workflow.
The main problem I see with hg pull --rebase is that it can break a merge with no way to undo it. The DVCS model is based on the idea of tracking history explicitly, and rebasing subverts that idea by saying that all of my changes came after all of your changes, even though we were really working on them at the same time. And because I don't know what your changes are (since I was basing my code off of earlier changesets), it's harder for me to know that my code, on top of yours, won't break something. You also lose the branching capabilities by rebasing, which is really the whole idea behind DVCSs.
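To make that concrete, here's what the two outcomes look like as graphlog (hg log -G) would draw them; the changeset descriptions are illustrative:

    # plain 'hg pull' keeps the parallel history; 'hg merge' then records
    # that the two lines of work were concurrent:
    o  my feature work
    |
    | o  your bugfix
    |/
    o  common ancestor

    # 'hg pull --rebase' instead rewrites my changesets on top of yours;
    # they get new hashes and the original parallel branch is destroyed:
    o  my feature work (rebased, new hash)
    o  your bugfix
    o  common ancestor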
Our workflow (which we've built an entire Mercurial hosting system around) is based on keeping multiple clones, or branch repositories, as we call them. Each dev or small team has their own branch repository, which is just a clone of the "central" repository. All of my new features and large bug fixes go into my personal branch repo. I can get that code peer reviewed, and once it's deemed ready, I can merge it into the central repo.
This gives me a few nice benefits. First, I won't be breaking the build, as all of my changes are in their own repo until they're "ready". Second, I can make another branch repo if I need to do a separate feature, or if I have something longer-running, like for the next major version. And third, I can easily get a change into the central repo if there's a bug that needs to be fixed quickly.
That said, there are a couple different ways you can use this workflow. The simplest, and the one I started with, is just keeping separate clones. So I'll have website-central, website-tghw, etc. It works well, especially since you can push and pull between them locally. More recently, I've started keeping multiple heads in the same repo, using the remotebranches extension to help manage them and hg nudge to keep from pushing everything at once.
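In the separate-clones form the mechanics are plain push/pull; a sketch with illustrative URLs:

    # make a personal branch repo from the central one
    hg clone https://hg.example.com/website-central website-tghw
    cd website-tghw

    # ...commit feature work here until it's reviewed and ready...

    # sync up with anything that landed in central, then publish
    hg pull https://hg.example.com/website-central
    hg merge
    hg commit -m "merge central"
    hg push https://hg.example.com/website-central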
Of course, some people don't like this workflow as much, usually because their Mercurial server makes it hard to make server-side clones. In that case, you can also look at using named branches to help keep your features straight. Unfortunately, they're not quite as flexible as Git branches (which is why we prefer branch repos) but they work well once you understand how to close branches, and why you can't really get rid of them once you start one.
This is getting a bit long, so I'll wrap it up by encouraging you to embrace the superior branching and merging that Mercurial provides (over SVN). There is definitely a learning curve, but once you get the hang of it, it really does make things easier.
From the question comments, your root issue is that you have developers working on several features/bug fixes/issues at one time, with uncommitted work sitting in their working directory alongside completed work that is ready to be pushed back to the central repository.
There's a really nice exchange that covers the issue well and leads on to a number of ways forward.
http://thread.gmane.org/gmane.comp.version-control.mercurial.general/19704
There are ways you can get around keeping your uncommitted changes, e.g. by having a separate clone to handle merges, but my advice would be to embrace the distributed way of working and commit as often as you like - if you really feel the need you can combine the last few local commits into a single changeset (using MQ, for example) before pushing.
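For example, a minimal mq recipe for collapsing the last few commits before a push (revision numbers are illustrative; qimport names the patches after the revisions it imports, and the mq extension must be enabled in your config):

    # import the last three local commits into the patch queue
    hg qimport -r 100:102

    # pop back so only the first of them is applied
    hg qgoto 100.diff

    # fold the later patches into the applied one
    hg qfold 101.diff 102.diff

    # turn the combined patch back into a single regular changeset
    hg qfinish -a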
We are looking to move our team (~10 developers) from SVN to mercurial. We are trying to figure out how to manage our workflow. In particular, we are trying to see if creating remote heads is the right solution.
We currently have a very large repository with multiple, related projects. They share a lot of code, but pieces of the project are deployed by different teams (3 teams) independent of other portions of the code-base. So each team is working on concurrent large features.
The way we currently handle this in SVN is with branches. Team1 has a branch for Feature1; same deal for the other teams. When Team1 finishes their change, it gets merged into the trunk and deployed out. The other teams follow suit when their projects are complete, merging of course.
So my initial thought is to use named branches for these situations. Team1 makes a Feature1 branch off of the default branch in Hg. Now, here is the question: should the team PUSH that branch, in its current, half-finished state, to the repository? This will create a second head in the core repo.
My initial reaction was "NO!" as it seems like a bad idea. Handling multiple heads on our repository just sounds awful, but there are some advantages...
First, the teams want to set up Continuous Integration to build this branch during their development cycle (months long). This will only work if the CI can pull this branch from the repo. This is something we do now with SVN: copy a CI build and change the branch. Easy.
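Assuming the branch has been pushed, pointing CI at it is simple; recent Mercurial versions support --branch on clone and pull (older ones can use -r Feature1 instead):

    # fresh checkout of just the Feature1 branch
    hg clone -b Feature1 https://hg.example.com/core core-feature1

    # or, on an existing CI clone
    hg pull -b Feature1
    hg update Feature1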
Second, it makes it easier for any team member to jump onto the branch and start working. Without pushing to the core repo, they would have to receive a push from a developer on that team with the changeset information. It is also possible to lose local commits to hardware failure. The chances increase a lot if it's a branch by a single developer who has followed the "don't push until finished" approach.
And lastly is just for ease of use. The developers can easily just commit and push on their branch at any time without consequence (as they do today, in their SVN branches).
Is there a better way to handle this scenario that I may be missing? I just want a veteran's opinion before moving forward with the strategy.
For bug fixes we like the general Mercurial workflow: anonymous branches that consist of only 1-2 commits. The simplicity is great for those cases.
By the way, I've read this great article, which seems to favor named branches.
You're definitely thinking about this right, and it sounds like you're going down a good path. I'm a branches as clones person, but named branches have come a long way.
Having a central-ish repo to which all named branches are pushed is convenient for control and backups. Teams working on only branch X can easily create their own branch-X-only repo by doing hg clone -r X from the central-ish repo.
The best thing you can do to help the teams out is to let them make clones themselves somewhere that's sitting behind an hgwebdir.cgi instance (as, presumably, your central-ish repo will be). You'll find that not just teams, but sub-teams and pairs of teams, will set up their own repos for mini-efforts you never knew about. They'll put them on the named branches that make sense to them and merge back into central as appropriate.
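Serving a directory full of repos that way takes almost no configuration; a minimal hgweb.config sketch for hgwebdir.cgi (paths illustrative):

    [collections]
    # publish every repository found under this directory
    /srv/hg/repos = /srv/hg/repos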
I would base the decision of whether these three projects should go into one repository on the coupling between them (and how many patches are exchanged among them). The more independent they are, the smaller the advantages of having them in one repo (backup and management aside). There are a few different kinds of setups:
As you showed: one repository, with one branch for the shared code and one branch for each project. When the projects themselves are created by forking the shared code base, care must be taken when merging back to common (cherry-picking). When, inside each project branch, updates to the common code are committed as direct descendants of the common branch and then merged into the project branch, chances are good they can also be merged back into common. But if changes to common are developed on top of the project branch, merging back will require cherry-picking. I don't have experience with such a setup, but I fear the merges could get problematic.
One repo for the shared code and one for each project, connected by symlinks or as subrepos. Here care must be taken not to step on each other's feet. In my experience this kind of setup has the potential to grow into a very big PITA. OTOH you seem to have this setup already, and your fellow developers can work with it.
One repo for the shared code and one for each project, with the shared code pulled in as internal releases. I would go for this setup when there are no big, regular changes to the shared code base.
All these setups can also be combined with customization branches for each project within the common part. But I would try to minimize the number of concurrently active branches, since every new branch requires additional care with merges.
I'm sorry not to give a concrete answer, but "The right thing" (TM) depends too much on the local details.
My company is switching from Subversion to Mercurial. We're using .NET for our product. We have a solution with about a dozen projects that are separate modules with no dependencies on each other. We're using a central repo on a server with push/pull for our integration build.
I'm trying to figure out if I should create one central repo with all the projects in it, or if I should create a separate repo for each project. One argument for separate repos is that branching the individual modules would be easier, but an argument for a single repo is easier management and workflow.
I'm very new to hg and DVCS, so some guidance is greatly appreciated.
ETA: At hginit.com, Joel says:
[I]f you’re used to having one big gigantic repository for the whole company, where some people only check out and work on subdirectories that they care about, this isn’t a very good way to work with Mercurial—you’re better off having lots of smaller repositories for each project.
It'd be great if someone could expand on this or point me to more documentation.
One thing you should take into consideration here is the fact that Mercurial does not support checking out directories the way Subversion does. One typical Subversion setup is to have one giant repo with multiple separate projects in it, and when somebody needs code they just check out a subdirectory containing that project. You can't do this in Mercurial: you either take the whole repo, or nothing. If everybody working on these projects does not need all the code, all the time, you might want to split it up into separate repositories.
EDIT: This link might be helpful in setting things up, in particular the "Publishing Multiple Repositories" section.
If completely separate repos don't work for you, maybe have each project as a subrepo of some umbrella repo. I have to say that separate repos sound like what you need, though, given that each project sounds totally independent.
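For reference, the subrepo wiring is just a .hgsub file at the root of the umbrella repo, mapping working-directory paths to source repositories (paths and URLs illustrative):

    projectA = https://hg.example.com/projectA
    projectB = https://hg.example.com/projectB

After you hg add .hgsub and commit, Mercurial pins the exact subrepo revisions in the tracked .hgsubstate file.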
I'm fairly new to Mercurial myself (my company is making the leap from SourceSafe) so I don't know what more experience would say.
For me it makes sense to have one repository per Visual Studio Solution. If your modules are truly not dependent on each other, why are they all in the same solution? If you have a good reason for them all being in one solution, then that's probably the reason to keep them in one repository. If there's not a good reason for them to be in one solution, then a repository and a solution for each makes more sense to me.
Edit: So, since all the modules are built together and need to integrate, that would push me towards a single solution and a single repository.
Mercurial does a great job of merging, but the one thing I've had issues with is the solution file when merging the addition of more than one project at a time. It gets confused with multiple End Project lines. So, as long as you aren't adding new projects very often, your merges should be smooth.
From my experience, and not based on studies etc., I would say that each logical blob is a repository. If you share code between subprojects, they need to be in the same repo. There will be full subrepo functionality eventually, but currently (April 2010) it's not fully implemented.
I'm in the process of trying to get my head round a DVCS such as Mercurial, but I'm getting quite confused on certain points. Firstly, a bit of context:
At the minute I mostly use Subversion, and it works fine for my workflow.
The repository is mostly for my own use; I'm the only web developer, and I only ever submit raw code to my manager, so he never has to see the repository.
I use the repo to create major versions, and as a backup so I can revert when something doesn't work out.
The repo also acts as a file share, enabling me to work from the same codebase at work and at home.
My main reasons for wanting to switch to Mercurial are the offline commits and easier branching/merging.
Firstly, can anyone tell me how I would get Mercurial to fit this workflow?
How do I go about sharing multiple repositories (i.e. one for each project) between computers?
Any help would be hugely appreciated,
Thanks
http://hginit.com/
There is a fantastic pre-chapter there specifically for SVN users. The rest of the tutorial will get you on your feet fairly quickly.
I'll answer just one part of your question, that of how to manage access to your repository from both home and work, because this is one of the situations where distributed version control is really useful.
The answer is that your two repositories are clones of one another (to be precise, one is a clone of the other). You do some work during the day, check it in, then pull that work to your home repository (or push it, but that requires more work). The next morning, you do the same thing in reverse. Mercurial comes with a built-in read-only HTTP server that makes this really easy, provided you can expose a port.
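A minimal sketch (hostname and port are illustrative):

    # at work, before leaving: serve the repo read-only over HTTP
    hg serve --port 8000

    # at home: pull the day's work and update
    hg pull http://work-machine.example.com:8000/
    hg update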
The end result is that you have two repositories (i.e., an automatic backup of the entire history). At any given point in time, one is "better" than the other, but since you're the sole committer to both, they won't diverge.