I am investigating how to migrate our source control from SVN to Mercurial. One thing I am not sure how to deal with is usernames in commits. From what I've seen, there is no way to force an HG user to use a specific username: even if one is specified in Mercurial.ini, the user can override it with the -u flag of hg commit.
How do companies deal with this? There is nothing to prevent developer A from committing something in his repository as developer B and then pushing it to someone else.
Thanks.
I wouldn't say our company is large (4 developers), but it's never been an issue for us so far. I haven't seen any way to prevent that behavior either in my searching. I guess it comes down to an issue of trust amongst your developers.
Unrelated, we did successfully migrate from SVN to Mercurial about two years ago so I may be able to answer other questions you have.
EDIT: An idea:
I'm not sure how you were planning on setting up your topology, but we have a server that functions as the central repository for all our repos. It is possible to push changes between developers (bypassing the central server), but we never do that. We always commit locally and then push/pull from/to the central server. Additionally, we use https and windows authentication to authenticate with this central server.
If you're planning on having something like this, you could create a hook on the server (see repository events), perhaps on the pretxnchangegroup event (which runs when pushed changesets arrive), that would verify that the user name on each commit being pushed matches the authenticated user from the web server.
Not sure if this would work, but it sounds plausible.
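A rough sketch of what such a server-side hook could look like, assuming the authenticated name reaches the hook through the REMOTE_USER environment variable (whether it does depends on how hgweb is hosted) and with a made-up script path. In the central repository's hgrc:

    [hooks]
    pretxnchangegroup.checkuser = /srv/hg/hooks/checkuser.sh

And checkuser.sh:

    #!/bin/sh
    # Reject the push if any incoming changeset's author does not mention the
    # authenticated user. HG_NODE is set by Mercurial to the first incoming changeset.
    [ -z "$REMOTE_USER" ] && exit 0
    mismatches=$(hg log -r "$HG_NODE:tip" --template '{author}\n' \
                 | grep -viFc -- "$REMOTE_USER")
    if [ "$mismatches" -gt 0 ]; then
        echo "push rejected: changeset author does not match $REMOTE_USER" >&2
        exit 1    # a non-zero exit status rolls back the transaction
    fi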
Some other approaches
Path-based ACLs in pseudo-CVCS workflow
If you'll use "controlled anarchy" workflow (p2p communications aren't controlled, resticted AND trusted and single authoritative source is common push-target), you can use "Branch Per Developer" paradigm. I.e - with ACL extension on central repo the following restrictions apply:
Nobody can push to default branch
Each developer can push only in his personal branch (under any name, name means nothing, auth-data for tracking is branch-name)
Only trusted mergers can work with repo-Central (merge dev-branches to default, NO rebase|NO history rewrite in dev-branches)
Each mergeset in default branch contain authentication piece - source branch
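A sketch of such an [acl] setup in the central repository's hgrc (the ACL extension ships with Mercurial; all branch and user names below are made up):

    [extensions]
    acl =

    [hooks]
    pretxnchangegroup.acl = python:hgext.acl.hook

    [acl]
    # only check changesets arriving over the network
    sources = serve

    [acl.allow.branches]
    # only the trusted merger may push to default
    default = merge-master
    # each developer may push only to his personal branch
    dev-alice = alice
    dev-bob = bob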
Signing branches
If you can't trust (and you must not trust) the usernames in commits, you can trust strong crypto. Mercurial has at least two extensions that let you digitally sign commits, thus providing reasonably accurate information about authorship (with caveats, see the notes below); each has its own advantages and disadvantages.
The Commitsigs Extension wiki page and the "Signing Mercurial Changesets on Windows" mini-HowTo are complete enough to understand and demonstrate everything needed to get started. Pro: no additional commits are needed for signing, and you can't (by design) sign old historic commits. Con: not-so-nice output from the relevant commands (see the screenshots in Damian's post for log and verifysigs), and because it's GnuPG (no PKI), it's theoretically possible to create and use a key pair for any name and email; only an "extra" comparison will show two different keys for one user.
The GPG extension, with the Approval Reports wiki page as a quick start. Pro: it can use PGP keys or OpenSSL certs (TBT!!!) (where OpenSSL means one corporate source of issued certs), and the sigcheck command gives more readable and informative output. Con:
committing changes to a .hgsigs file in the root of the working copy
and so it requires extra changesets to be made. This makes it
infeasible to sign all changesets. The .hgsigs file must also be
merged like any other file when branches are merged.
and, finally, the .hgsigs file can be modified by hand by a malicious user, just like any other file in the working copy
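For reference, a minimal quick-start for the GPG extension route (the key ID is a placeholder; sign, sigs and sigcheck are the extension's commands):

    [extensions]
    gpg =

    [gpg]
    # ID of the key this developer signs with (placeholder value)
    key = 1A2B3C4D

    hg sign          # sign the working directory's parent (commits to .hgsigs)
    hg sigs          # list signed changesets
    hg sigcheck 42   # verify the signature(s) on revision 42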
Edit and bugfixing
OpenSSL can be used with Commitsigs, not with the GPG extension
I'm new to Mercurial and I have problems with the solution we're working to implement in my company. I work in a Lab with a strict security environment, and the production Mercurial server is in an isolated network. All people have two computers: one to work in the "real world" and another one to work in the isolated and secure environment.
The problem is that we have other Labs distributed around the world, and in some cases two or more Labs need to work together on a project. Every Lab has an HG server to manage its own projects locally, but I'm not sure our method of syncing common projects is the best solution. To do the sync, we use a "bundle" to send the new changesets from one Lab to another. My question is about how good this method is, because the solution is a little bit complicated. The procedure is more or less this:
In Lab B: hg pull and update, to make sure the local folder has the latest version.
Ask the other Lab for its "hg log", to see what the last common changeset is.
In Lab A: hg pull and update, to make sure the local folder has the latest version.
In Lab A: make the bundle, "hg bundle --base XX project.bundle" (where XX is the last common changeset).
Send it to Lab B (with a complicated method required by the security rules: encrypt files, encrypt drives, secure erases, etc.).
In Lab B: "hg unbundle projectYY.bundle" in the local folder.
This process creates two heads, which sometimes force you to merge.
Once the changesets from Lab A are correctly integrated at Lab B, we need to repeat the process in the opposite direction to bring the project's evolution at Lab B back to Lab A.
Could anyone enlighten me on the best way to get out of this dilemma?
Anyone have a better solution?
Thanks a lot for your help.
Bundles are the right vehicle for propagating changes without a direct connection. But you can simplify the bundle-building process by modeling communication locally:
In Lab A, maintain repoA (the central repo for local use), as well as repoB, which represents the state of the repository in lab B. Lab B has a complementary set-up.
You can use this dual set-up to model the relationship between the labs as if you had a direct connection, but changeset sharing proceeds via bundles instead of push/pull.
From the perspective of Lab A: Update repoA the regular way, but update repoB only with bundles that you receive from Lab B and bundles (or changesets) that you are sending to Lab B.
More specifically (again from the perspective of Lab A):
In the beginning the repos are synchronized, but as development progresses, changes are committed only to repoA.
When it's time to bring Lab B up to speed, just go to repoA and run hg outgoing path/to/repoB. You now know what to bundle without having to request and study Lab B's logs. In fact, hg bundle bundlename.bzip repoB will bundle the right changesets for you (see the command sketch after these steps).
Encrypt and send off your bundle.
You can assume that the bundle will be integrated into Lab B's home repo, so update your local repoB as well, either by pushing directly or (for assured consistency) by unbundling (importing) the bundle that was mailed off.
When Lab B receives the bundle, they will import it into their own copy of repoA; it is now updated to the same state as repoA in Lab A. Lab B can now push or pull changes into their own repoB and merge them (in repoB) with their own unshared changesets. This will generate one or more merge changesets, which are handled just like any other check-ins to Lab B's repoB.
And that's that. When Lab B sends a bundle back to Lab A, it uses the same process, steps 1 to 5. Everything stays synchronized just as it would if the repositories were directly connected. As always, it pays to synchronize frequently so as to avoid diverging too far and encountering merge conflicts.
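A condensed command sketch of the sending side, from Lab A's point of view (directory and file names are placeholders):

    cd repoA
    hg outgoing ../repoB                      # what is our model of Lab B missing?
    hg bundle ../labA-to-labB.hg ../repoB     # bundle exactly those changesets
    # ...encrypt and ship labA-to-labB.hg according to the security procedure...
    cd ../repoB
    hg unbundle ../labA-to-labB.hg            # keep the local model of Lab B in sync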
In fact you have more than two labs. The approaches to keeping them synchronized are the same as if you had a direct connection: Do you want a "star topology" with a central server that is the only node the other labs communicate with directly? Then each lab only needs a local copy of this server. Do you need lots of bilateral communication before some work is shared with everyone? Then keep a local model of every lab you want to exchange changesets with.
If you have no direct network communication between the two mercurial repositories, then the method you describe seems like the easiest way to sync those two repositories.
You could probably save a bit of process boilerplate when working out which new changesets need bundling; how exactly depends on your setup.
For one, you don't need to update your working copy in order to create the bundles; having the repository is enough, no working copy needed.
And if you know the date and time of the last sync, you can simply bundle all changesets added since that time, using an appropriate revset, e.g. all revisions since 30 March of this year: hg log -r 'date(">2015-03-30")'. Thus you could skip a lengthy manual review process.
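For example, one possible way to turn that revset into a bundle, with the date standing in for the last sync time:

    hg log -r 'date(">2015-03-30")'                          # review what would be sent
    hg bundle --base 'not date(">2015-03-30")' sync.bundle   # bundle everything newer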
If your repository is not too big (and thus fits on the media you use for the exchange), simply copy it there in its entirety and do a local pull from that exchange disk to sync, skipping those review processes, too.
Of course you will not be able to avoid making the merges; they are the price you have to pay when several people work on the same thing at the same time, each committing to their own repos.
Having found myself in the role of a build engineer and a systems guy, I had to learn and figure out a few things, namely how to set up our infrastructure. Before I came on board they didn't have any. With this in mind, please excuse me if I ask anything that should have been obvious.
We currently have 3-level distributed Mercurial repositories: level one on each developer's machine, level two on a central (trunk) server that is only accessible from the local network, and the third level on BitBucket. The workflow is as follows:
Local development: the developer pulls changesets from the local network server, commits locally, and pushes to our local server once merge conflicts are resolved. A scheduled script backs everything up to BitBucket overnight.
Working from home: the developer pulls changesets from BitBucket, commits to their local repo, and pushes to BitBucket.
TeamCity picks up repo changes from local network server for each project and runs a build / automated deploy to test environment.
The issue I'm hitting is scenario 2: at the moment, if someone pushes something to BitBucket it's their responsibility to merge it back when they're back in the office, and that's a bit of a time waster when it could be automated.
In case you're wondering, the reason we have a central repo on local network is because it would be slow to run TeamCity builds of BitBucket repositories. Haven't tested so it's just an educated guess.
Anyhow, the scheduled script that pushes all changes from the central repository on the local network just runs an "hg push" for each of the repositories. It would have to do a pull / merge beforehand. How do I do this right?
This is what the pull would have to use switches for:
- update after pull
- in case of merge conflicts, always take newer file
- in case of error, send an email to system administrator(s)
- anything extra?
Please feel free to share your own setup as long as it's not vastly different to what's described.
UPDATE: In light of recent answers, I feel an important aspect of the intended approach needs to be clarified. The idea is not to force merges on our local network central repo. Instead it should resolve merge conflicts in the same way as using HgWorkbench on developer machines with post-pull: update + merge. All developers have this on by default, so it should be OK.
So the script / batch file on server would do the following:
pull from BitBucket
update + auto merge
Any auto-merge conflicts?
3.1 Yes -> Send an email to administrators to manually merge -> Break
3.2 No -> Carry on
Get outgoing changesets. Will push create multiple heads? (This might be redundant because of pull / update)
4.1 Yes -> Prompt administrators. Break.
4.2 No -> Push changes
Hope this clears things up a bit. Now, can this be done using hg commands alone - batch - or do I have to script it? Specifically can it send emails?
Thanks.
So all your work is available at BitBucket, right? Why not make BitBucket (as it is available from anywhere) your primary repo source and drop your local servers? You can pull changes from BitBucket with TeamCity for your nightly builds, and developers would always work with the current repo at BitBucket and resolve all merging problems themselves, so there wouldn't be any subsequent merges for you.
I would not try to automatically merge the changes if they are conflicting; this will only lead to broken and inconsistent versions and "lost" changes, causing confusion and chaos. Don't merge automatically if it isn't clear what that merge should look like.
A better alternative would be to just keep the two heads around and push/pull them without merging. This way everybody can still get the version of the data they were working on, from work or from home. A manual merge will have to be done, but it can also be done at work or from home, enabling developers to resolve the issue from wherever they are. You can also send emails around in this scenario to make sure everybody is aware of the problem.
I guess you could automate this using a script; I would try PowerShell if I were you. However, sometimes this will still require manual merges when there are conflicts (because when developers commit changes to both BitBucket and the local repos, these changes might conflict).
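A minimal sketch of the pull/merge/push cycle from the question's update, written as POSIX shell for brevity (it would become PowerShell or a batch file on Windows); the repository path and mail address are placeholders, and the repo's default path is assumed to point at BitBucket:

    #!/bin/sh
    REPO=/srv/hg/project1
    ADMIN=admins@example.com

    fail() {
        echo "$1" | mail -s "hg sync needs attention: $REPO" "$ADMIN"
        exit 1
    }

    cd "$REPO" || exit 1
    hg pull || fail "hg pull from BitBucket failed"

    if [ "$(hg heads -q . | wc -l)" -gt 1 ]; then
        # the pull created a second head on this branch: try an automatic merge,
        # but hand over to a human as soon as anything conflicts
        hg update || fail "hg update failed"
        hg merge --tool internal:merge || fail "automatic merge hit conflicts"
        hg commit -m "Automated merge after pull from BitBucket" || fail "merge commit failed"
    else
        hg update || fail "hg update failed"
    fi

    # push back only if there is something to push (hg push exits non-zero otherwise)
    if hg outgoing -q > /dev/null 2>&1; then
        hg push || fail "hg push failed (would it create multiple remote heads?)"
    fi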
I use Mercurial in a single-user workflow to have the option to roll back changes if my coding or writing goes horribly wrong (I primarily use the Stata and R statistics packages and LaTeX). While working only locally, this has been easy since all I have is the main repo.
Recently I have started ssh-ing into a Linux server for more computational power. So far I have been manually copying files back and forth and using Mercurial only locally, but I would like to use Mercurial to take care of this and keep these two workflows synchronized. Also, I like the ability to code both locally (on my laptop or desktop) and on the server.
Do I need to work on a clone of the main repo on the server and keep the main repo untouched? Or can I work directly in the main repo when I am on the server? In this question @gizmo points to this workflow guide; the "single developer" discussion is helpful, but it's still not clear to me whether I can work in the main repo while I'm on the server without causing some major problem that I don't yet understand.
Thanks!
Edit: I should add that I have worked through Joel Spolsky's HgInit.com tutorial and I'm comfortable pushing/pulling/cloning/etc over ssh, but I am still not sure if I can work in the main repo without causing heartache later. Or maybe this is more a philosophical question? Thanks!
Mercurial is a DVCS, which means that in each location you have both a local working copy and a local repository.
Mercurial is a DVCS, which means you can freely exchange (pull|push) data between repos (if they provide remote-access methods).
If you
comfortable pushing/pulling/cloning/etc over ssh
and don't forget to perform a pull|push cycle around your work at home (so that you don't have to run hg serve on the home host and sync from it as the source), you won't get any headaches at all, and you'll have a perfectly linear aggregated history in each place. And even if you forget to sync the repo sometimes, in the worst case you'll get two heads later, which you'll be able to merge easily (I don't know the formats of Stata and R data files, but LaTeX, as text, is mergeable).
There is no problem with working directly in the repository on your server. From Mercurial's point of view, the "main" repository is just another random repository — Mercurial doesn't consider it to be special.
You don't say this directly, but one thing that people ask is "What happens when I push to the server?" The answer is that hg push only sends data into the repository (the .hg/ folder). The working copy is not touched on the server when you push to it. Since you push new changesets to the server, you might need to run hg update the next time you work on the server. This is just like if you had run hg pull on the server — there you'll also merge or update afterwards.
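For instance, a typical round trip might look like this (paths are made up; note the double slash for an absolute path over ssh):

    # on the laptop
    hg push ssh://you@server//home/you/project     # only the server's .hg is updated

    # later, logged in on the server
    cd /home/you/project
    hg update                                      # bring the working copy up to the new tip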
I have this situation all the time: I create a repository at home and clone it to my computer at work. I change files in either location and push/pull between the two repositories. If I need to share my work with others, then I make a repository at Bitbucket and push the code there. That way Bitbucket serves as a nice canonical repository for the code and I typically change the default path to Bitbucket in the repositories at home and at work. So at home I would have:
[paths]
default = https://bitbucket.org/mg/<repo>/
work = ssh://mg@work/<repo>
so that I can do hg push to send things to Bitbucket and hg pull work to grab things directly from work (in case I forgot to push to Bitbucket before leaving).
At a point in our development process we send all *.resx files to a translator. The translator usually takes a week to send back the files. During this time no one is allowed to add, remove or update any resx file.
How can I configure mercurial to enforce that policy?
Our setup: Each dev works with a local clone of our central repository.
Nice to have:
I'll turn the "policy" on and off every few weeks. So ideally, I'd like something that is easy to configure at one place and that affect all devs.
I'd rather enforce that policy at the local repository level then at the central repository level because if we prevent the "push" on the central repository, it will be harder for the dev to undo the already locally committed changeset.
Thanks
UPDATE:
More info on the translation process:
Merging is not an issue here. The translator does not change the files that we send him. We send him a bunch of language-neutral .resx files (form1.resx) and he returns a bunch of language-specific .resx files (form1.FR.resx).
Why prevent adding new resx? Adding a resx occurs when we add a new UI to our application. If we do that after the translation package has been sent, the translator won't know about the new UI and we'll end up with a new UI with no translation.
Why prevent updating resx? If the dev changes a label value from "open" to "close", he has made a very important semantic change. If he does that after the translation package has been sent, we won't get the right translation back.
You cannot stop people from committing changes to .resx files unless you have control over their desktop machines (using a pretxncommit hook), and even then it's easily bypassed. It's much more normal to put the check on the central server at push time, using a pretxnchangegroup hook, but you're right that they'll then have to fix up any changesets and re-push, which is advanced usage. In either case you'd use the AclExtension to enforce the actual restriction.
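A sketch of the central-server variant: the AclExtension with a path-based deny rule in the server repository's hgrc that you comment in and out as the freeze starts and ends:

    [extensions]
    acl =

    [hooks]
    pretxnchangegroup.acl = python:hgext.acl.hook

    [acl]
    sources = serve

    [acl.deny]
    # during the translation freeze: no pushed changeset may touch any .resx file
    **.resx = *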
Here are two alternate ways to go about this that might work out better for you:
Clone your repository at the start of the translation process, warn developers to leave .resx alone for a while, apply the work of the translators when they're done, and then merge those changes back into the main development repository with a merge command that always gives the incoming changes priority: X . Then use a simple hg log command to find all the changes to .resx that just got overwritten and tell the developers to re-add them. Chide them at this time.
alternately
Make the .resx files a Subrepository of the larger outer repository. Then turn off write access to that resx repository during the forbidden period. Developers will be able to commit in the outer repository but not the inner one, but clones will still get both exactly as they always did.
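A minimal sketch of that layout: a .hgsub file in the root of the outer repository mapping the resource folder to its own repository (path and URL are made up), which is what gives you a separate repo whose write access you can toggle:

    # contents of .hgsub in the outer repository's root
    Resources/resx = https://hg.example.com/project1-resx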
For what it's worth, everyone else handles this problem with simple merging, .resx is (XML) text, and it merges just fine.
When working with a DVCS it's not always easy to exactly mirror your svn experience, but there's usually a better option anyway.
You could add *.resx to the hgignore file
We got all psyched about moving from svn to hg, and as the development workflow is more or less fleshed out, there remains the most difficult part: the staging and integration system.
Hopefully this question goes a bit further than your common 'how do I move from xxx to Mercurial'. Please forgive the long and probably poorly written question :)
We are a web shop that does a lot of projects (mainly PHP and Zend), so we have one huge svn repo with 100+ folders, each representing a project with its own tags, branches and trunk, of course. On our integration and testing server (where QA and clients look at work results and test stuff) everything is pretty much automated: Apache is set up to pick up new projects automatically, creating a vhost for each project/trunk; mysql migration scripts are right there in trunk too, and developers can apply them through a simple web interface. Long story short, our workflow is this now:
Checkout code, do work, commit
Run an update on the server via the web interface (this basically does svn up on the server for a particular project and also runs the db-migration script if needed)
QA changes on the server
This approach is certainly suboptimal for large projects when we have 2+ developers working on the same code. Branching in svn was only causing more headaches, hence the move to Mercurial. And here is where the question lies: how does one organize an efficient staging/integration/testing server for this type of work (where you have many projects, and, say, a single developer could be working on 3 different projects in 1 day)?
We decided to have the 'default' branch essentially track production and to make all changes in individual branches. In this case, though, how can we automate staging updates for each branch? Whereas earlier, for one project, we were almost always working on trunk and so needed one DB, one vhost, etc., now we are potentially talking about N databases per project, N vhost configs, and so on. Then what about the CI stuff (such as running phpDocumentor and/or unit tests)? Should it only be done on 'default'? On branches?
I wonder how other teams solve this issue, perhaps some best practices that we're not using or overlooking?
Additional notes:
Probably worth mentioning that we've picked Kiln as a repo hosting service (mostly since we're using FogBugz anyway)
This is by no means the complete answer you'll eventually pick, but here are some tools that will likely factor into it:
repositories without working directories: if you hg clone -U or hg update null, you get a repository with no working directory (only the .hg). They're better on the server because they take up less room and no one is tempted to edit there
changegroup hooks
For that last one the changegroup hook runs whenever one or more changesets arrive via push or pull and you can have it do some interesting things such as:
push the changesets on to another repo depending on what has arrived
update the receiving repo's working directory
For example one could automate something like this using only the tools described above:
developer pushes five changesets to central-repo/project1/main
the last changeset is on branch 'my-experiment', so the csets are automatically re-pushed to the (optionally created) repo central-repo/project1/my-experiment
central-repo/project1/my-experiment automatically does hg update tip, which is certain to be on the my-experiment branch
central-repo/project1/my-experiment automatically runs tests in its working dir and, if they pass, does a 'make dist' that deploys, which might set up the database and vhost too (a sample hook configuration follows this list)
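A sketch of what the receiving repository's .hg/hgrc might contain to wire up the last two steps; the deploy script path is made up, and the branch-based re-push of step 2 would need a slightly smarter script:

    [hooks]
    # whenever changesets arrive, bring the working directory up to the new tip...
    changegroup.update = hg update -C tip
    # ...then hand off to a script that runs the tests and deploys on success
    changegroup.deploy = /srv/hg/hooks/run-tests-and-deploy.sh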
The biggie, and chapter 10 in the Mercurial book covers this, is to not have the user waiting on that process. You want the user to push to a repo that contains possibly-okay code and have the automated processes do the CI and deploy work, which, if it passes, ends up in a likely-okay repo.
In the largest Mercurial setup I've worked in (20 or so developers), we got to the point where our CI system (Hudson) was periodically pulling from the maybe-okay repos for each project, then building and testing, handling each branch separately.
Bottom line: all the tools you need to setup whatever you'd like probably already exist, but gluing them together will be one-off sort of work.
What you need to remember is that DVCS (vs. CVCS) introduces another dimension to versioning:
You don't have to rely anymore only on branching (and getting a staging workspace from the right branch)
You now also have the publication workflow (push/pull between repos)
Meaning your staging environment is now a repo (with the full history of the project), checked out at a certain branch:
Many developers can push many different branches to that staging repo: the reconciliation process can be done in isolation within that repo, in a "main" branch of your choice.
Or they can pull that staging branch in their repo and test things out before pushing back.
(Illustration from Joel's tutorial on Mercurial, HgInit)
A developer doesn't necessarily have to commit for others to see his or her work: the publication process in a DVCS allows him/her to pull the staging branch first, reconcile any conflicts locally, and then push to the staging repo.
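For instance, that publication workflow might look like this from a developer's repository (URL and branch names are made up):

    hg pull https://staging.example.com/project1 -b staging   # fetch only the staging branch
    hg merge staging                                           # reconcile with it locally, run the tests
    hg commit -m "Merge staging into my working branch"
    hg push https://staging.example.com/project1               # publish once everything passes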