We come from a Subversion background where we have a QA manager who gives commit rights to the central repository once he has verified that all QC activities have been done.
A couple of colleagues and I are starting to use Mercurial, and we want to have a shared repository that would contain our QC-ed changes. Each developer clones the shared repository with hg and pushes his changes back to it. I've read the Hg Init tutorial and skimmed through the Red Bean book, but could not find how to control who is allowed to push changes to the shared repository.
How would our existing model of QA-manager controlled commits translate to a mercurial 'central' repository?
HenriW's comment asking how you are serving up the repositories is exactly the right question. How you set up authentication depends entirely on how you're serving your repo (HTTP via Apache, HTTP via hg serve, ssh, etc.). The transport mechanism provides the authentication, and Mercurial then uses that identity with the directives from Mr. Cat's link (useless in and of themselves) to handle access control.
Since you didn't mention how you're serving the repo, it was probably something easy to set up (you'd have remembered to mention the hassle of an Apache or ssh setup :). So I'll guess those two:
If you're using hg serve then you don't have authentication set up. You need to put Apache, lighttpd, or nginx in front of hgweb or hgwebdir to provide authentication. Until you do, the allow_* and deny_* options are strictly all-or-nothing (everyone or no one).
If you're using ssh then you're already getting your authentication from ssh (and probably your OS), so you can use the allow_* and deny_* directives (and file system access controls if you'd like).
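For example, once the transport is authenticating users, a minimal sketch of the access-control directives in the shared repository's .hg/hgrc might look like this (the user names are purely illustrative):

[web]
# anyone may pull, but only these authenticated users may push
allow_read = *
allow_push = alice, qa-manager
# alternatively, deny_push = * makes the repository read-only for everybody

With a setup like that, the QA manager could be the only account listed in allow_push, which roughly reproduces the Subversion gatekeeper model from the question.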
serverfault.com has a relevant question and links to the Publishing Repositories Mercurial wiki page. The first shows how to configure per-repository access when using hgweb on the server. I get the feeling that you're using ssh, which the wiki page labels as "private", and am therefore inclined to believe you would have to fall back to file-system access control, i.e. make all the files in the repository belong to the group "committers", give group members write access, and give everyone else read-only access.
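As a rough sketch of that file-system approach (the group name and path are only examples):

# make the shared repository writable by the "committers" group only
chgrp -R committers /srv/hg/project
chmod -R g+w /srv/hg/project
# setgid on directories so files created by future pushes keep the group
find /srv/hg/project -type d -exec chmod g+s {} \;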
I'm cloning the openJDK source code to my local repo, and I'd like to be sure that the file integrity has been maintained in transit.
hg clone http://hg.openjdk.java.net/jdk8/jdk8/
The Mercurial FAQ says that revlogs are checked against their hashes, "But this alone is not enough to ensure that someone hasn't tampered with a repository. For that, you need cryptographic signing." Does that mean I need to use
SSH like this?
hg clone ssh://hg.openjdk.java.net/jdk8/jdk8/
And to get SSH access, do I have to sign up as a contributor to the project? If so, is there a way to get a verified clone without becoming a contributor?
SSH provides transport encryption. It ensures the data cannot be altered in-transit from the OpenJDK repository to your computer.
The Mercurial FAQ is talking about signing the commits, which ensures that you can later verify that individual commits have not been tampered with. If an attacker broke into the OpenJDK servers, or slipped a falsified commit past upstream review, adding revisions that the project never meant to be there, you would be able to recognise such revisions because they lack the right signature or are not signed at all.
SSH wouldn't protect you against such issues, because SSH doesn't care about the data it transfers. Malicious (altered or added) commits are transferred just as securely as valid revisions.
Signed commits are not something you, as a consumer of the repository, can add later. The OpenJDK project would have to build signing into its committing procedure from the start.
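For completeness, when a project does sign its history, the mechanism is usually Mercurial's bundled gpg extension; a rough sketch (assuming GnuPG is configured and the project publishes its signing key):

[extensions]
gpg =

hg sign -k <key-id> tip    # the maintainer signs a revision (recorded in .hgsigs)
hg sigs                    # list signed changesets
hg sigcheck <revision>     # verify the signature on a particular revision

Verifying only helps if you have obtained and trust the project's public key through some channel other than the repository itself.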
A noob question... I think
I use Mercurial for my project on my laptop. How do I submit the project to an online server like CodePlex?
I'm using TortoiseHg and I can't find the upload interface for submitting the project online...
From the command line, the command is:
hg push <url>
to push changes to a remote repository.
In TortoiseHg, this is accessed through the "Synchronize" function, which seems to show up if you right-click in a Windows Explorer window but not on any file. It's also available in the workbench; the icon is 2 arrows pointing in a circle.
For these things, I find the best way to go is to use the command-line interface - TortoiseHg is OK if you need to perform some common operations from the file browser, and it's a nice tool to visualize some aspects of your repository, but it doesn't implement all of Mercurial's features in full detail, and it renames and bundles some operations for no apparent reason.
I don't know how things work at codeplex, but I assume it is similar to bitbucket or github, in which case here's what you'd do:
Create an empty repository on the remote end (codeplex / bitbucket / ...).
Find the remote repository's URL - for bitbucket, it is https://bitbucket.org/yourname/project, or ssh://hg@bitbucket.org/yourname/project.
From your local repository, commit all pending changes, then issue the command: hg push {remote_url}, where {remote_url} is the URL of the remote repository. This will push all committed changes from your local repository to the remote repository (see the sketch after these steps).
Since the remote's head revision (an empty project) is the same as the first revision in your local copy (because all hg repositories start out empty), mercurial should consider the two repositories related and accept the push.
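A minimal command-line sketch of those steps (the URL is just a placeholder for whatever your hosting provider gives you):

cd path/to/your/project
hg commit -m "last local changes before publishing"
hg push https://hg.example.com/yourname/project

# optional: record the URL in the repository's .hg/hgrc so a plain "hg push" works later
[paths]
default = https://hg.example.com/yourname/project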
For an introductory guide to command-line mercurial, I recommend http://hginit.com/
Is there a tool out there (preferably web-based) which would automatically detect commits to a BitBucket repository, and at that time, copy all files in the repository to a web-server via FTP?
I basically want a quick and painless way (if one exists) to set up continuous integration between my BitBucket repository and my website.
No build/compilation step would be necessary, since these are only front-end (HTML/CSS/Javascript) files.
The changegroup hook is the way to do this. See Hooks for info about what to do with it.
I've used changegroup hooks on my own hg repositories, but not in BitBucket; it's possible that the BitBucket servers are restricted in what you can do, I'm not sure. I do know that a wget/curl attempt to rebuild a manual on my server whenever its contents were updated in a repository on SourceForge failed for me, because they've locked up their servers too tightly (sending an email from the hook would work, but not HTTP access). I would expect BitBucket to be set up better; a quick search for "bitbucket changegroup hook" doesn't seem to indicate that there are any problems with it. Try it and see!
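If it turns out the hook can't run on BitBucket itself, one workaround is to also push to a clone on a machine you control and put the changegroup hook there. A rough sketch (the script name, host, and the use of lftp are assumptions, not anything BitBucket provides):

In the server-side clone's .hg/hgrc:

[hooks]
changegroup.deploy = /home/me/bin/deploy.sh

And deploy.sh might look like:

#!/bin/sh
# update the working copy to the newly pushed tip, then mirror it to the web host over FTP
hg update --clean
lftp -u user,password -e "mirror -R . /htdocs; quit" ftp.example.com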
I wonder if it's possible to create a Mercurial repository in some FTP folder with read/write access and serve it to clients from there. Has anyone done something like that?
Thank you in advance.
Just for the sake of completeness, because I had the same problem and feel that there is another, much simpler solution:
Mercurial cloning on local folders "just works", so if you mounted the ftp as a local folder or drive, you could just push/pull/clone to that (and have your repository end up on the ftp).
On Windows, you can e.g. use FTPUse or NetDrive to have your FTP folder mounted as a local drive, the former is free but a CLI tool which removes the virtual drives if the program is closed, the latter has a GUI but is only free for personal use and doesn't work (yet) on Win8. I don't have a Linux machine at hand now, but you should be able to achieve the same using ftpfs.
Once you've done that (and your FTP server is now mapped, e.g., to F:), you can simply use that virtual drive (or any subfolder) as a remote target for your Mercurial operations. Works like a charm for me.
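For example, once the share is mounted as F: (drive letter and paths are purely illustrative):

rem first time: put the repository onto the FTP share
hg clone C:\work\myproject F:\repos\myproject
rem afterwards: push new changesets from the local clone, or pull what colleagues pushed
hg push F:\repos\myproject
hg pull F:\repos\myproject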
All things are possible. But that would be hard.
The bit where the network transport matters is when cloning a repository, and the standard ways of doing that depend on either serving over HTTP, or having SSH access to the repository host. There's no FTP-based transport for cloning as far as I can see.
If that's the only sharing mechanism you have available, then you could probably work something out using Mercurial bundles. The procedure would be something like the following:
Commit your edits to a local repository
Make a bundle using hg bundle --all my-bundle.hg
FTP my-bundle.hg to the server
The other users of the repository can then use FTP to retrieve the my-bundle.hg file to their local machine, go to their local copy of the repository, and then hg pull my-bundle.hg to pull in any revisions which are in the bundle but not in the local repository. When they want to share their changes, they make a fresh bundle as above, and push that back to the server. The --all option puts all of the changesets into the bundle file -- you can be cleverer and only export 'recent' changes, but that gets a little more complicated and risks losing changesets: using --all is brutal but fail-safe.
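A rough sketch of one round trip (file and host names are only placeholders, and any FTP client will do):

# on your machine: commit, bundle, upload
hg commit -m "my changes"
hg bundle --all my-bundle.hg
# upload my-bundle.hg to the FTP server with your favourite client

# on a colleague's machine: download my-bundle.hg from the FTP server, then
hg pull my-bundle.hg
hg update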
There's obviously a fair amount of scope for confusion here, and race conditions (timestamped filenames might help), and hair-pulling-out, and your users would doubtless appreciate some scripts to make this easier, but if all you've got available is an FTP server, you don't have very many options.
Good luck.
This question on SuperUser might be interesting. The core idea seems to revolve around running a background process that synchronizes a local folder with a remote FTP folder, which might be of use to you.
But I don't know what happens when more than one user tries to synchronize at the same time, since this approach bypasses all the protection that Mercurial has regarding locking the tree and such.
I'm starting to use Mercurial on my web server (in this case MediaTemple's Grid). I've used SVN previously, though I'm not an expert of version control systems. I'm just needing a little help with clearing up some confusion with getting it set up optimally.
I have a 'data' folder which is outside the web server root and that the browser cannot access. It was recommended to me before to have my Mercurial repositories set up here, and then I would clone from here locally onto my computer. I also have a 'domains' folder that is basically the web server root; inside it are my actual domains from which my websites are served to the browser - these would need to be updated from the 'data' repositories too.
But with this in mind, after setting it up, it seems inefficient... I'm cloning to my local (that makes sense), adding, committing, pushing. That's fine... But then I'm then updating in my data repository folder and then updating in my domains folder to actually update my websites.
Surely, I don't actually need this 'data' folder for repositories? Wouldn't my actual live 'domains' folders be the main repositories themselves? So I'm cloning locally and updating from these? Please help me clear some confusions with all this (if you can).
It's strictly a matter of personal preference. Some folks make their live websites also the "master" repo, and some make it a clone of a repo located elsewhere. What you're doing right is serving your sites from a directory in the repo; that's a good choice.
Some considerations as to whether you might want separate 'data' clones independent from the web root clones are:
do you want to have multiple heads in the same branch which might confuse the person updating the main repo?
do you want a repo to which people you don't trust with editing the live website can push so that a trusted admin (you?) does the push/pull from data to webroot?
One thing to note is that in the 'data' repo you can do hg update -r null, which gets rid of the working copy (but keeps the repo!), so that the disk space used is almost zero (assuming it's a clone of the webroot, they'll share the same underlying files at the filesystem hard-link level).
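A small sketch of that two-repo layout on the server (paths are just examples):

# one-time setup: the webroot is a clone of the 'data' repo, sharing storage via hardlinks
hg clone /home/me/data/mysite.com /home/me/domains/mysite.com
cd /home/me/data/mysite.com
hg update -r null          # drop the working copy in the 'data' repo; history stays

# after each push to the 'data' repo, refresh the live site
cd /home/me/domains/mysite.com
hg pull -u /home/me/data/mysite.com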
I do have a repos (data) folder outside the website root, containing various repositories, and served through hgwebdir on a separate domain (hg.mywebsite.com).
However, my website’s repository I do store in the httpdocs directory of the main domain. I test in my local environment, and pushing my changes to the server then also publishes them.
To achieve this I have this in my hgweb.config:
private/mywebsite = ../../../httpdocs
And this in that repository’s hgrc:
[hooks]
changegroup.update = hg update
This hook will update the working directory to the tip whenever changes are pushed. Of course I have also added a rule to the Apache configuration to ignore the .hg directory, and on the subdomain hg runs on, a rule to require authorisation for accesses to the private/ paths.
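The Apache rule mentioned above might look roughly like this for Apache 2.4 (a sketch; adapt to your own configuration):

# inside the vhost or directory configuration for httpdocs:
# refuse all web access to Mercurial's metadata directory
<DirectoryMatch "/\.hg">
    Require all denied
</DirectoryMatch>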
An alternative would be to instead host the repository together with the others, and then ‘hg archive’ into the httpdocs directory. A little more secure, a little slower, and as for convenience I would say it’s 50-50.
P.S. Adding a hook to forbid creation of remote branches may also be a good idea, if people who might do push -f can access your repositories.