Is there anything like svnserve for Mercurial? - mercurial

I'm attracted to Mercurial as a DVCS platform, but would like an easy-to-use server similar to svnserve. There is HgServe, but that appears to be read-only. If I want to host the server on another machine, it appears I need to set up Apache, etc. Is that really the case? Is there an easier method for a local network where security isn't an issue?

The problem here is that it's so easy that the Mercurial documentation fails to cover it properly. If you clone with ssh:
hg clone ssh://user@host//path/to/repo /local/path
It will do the right thing on the "server" system (it automatically runs hg serve on the other end for the duration of the operation), and then any subsequent operations (push, pull, etc.) will be automatically run over ssh. (Make sure you use the double slash after the hostname if you want your path to start at the filesystem root, otherwise it'll start wherever ssh puts you).
Note that Hg "users" are separate from ssh users, so if you want everyone to use the same restricted account for ssh, they can - hg will still identify their changesets by the user set up in their .hgrc.
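For example, each committer can set their identity in their own ~/.hgrc (the name and address below are placeholders):
[ui]
username = Alice Example <alice@example.com>
Changesets pushed through the shared ssh account will then be attributed to each person's own username.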

Related

How to automatically keep remote mercurial repository at tip after pushes

I created a Mercurial repository on some file server's net share.
Is it possible to automatically get the remote repository updated to tip if somebody pushes their changes?
Some other people (pure users) may copy the repository's content (rather than cloning it, because of the lack of .hg), and I want them to get the newest version.
Since it is a share on a simple NAS it would be good if the pushing client could invoke this update.
It seems that a hook on the changegroup event can solve this.
Add the following lines to the repository's configuration file (repo/.hg/hgrc):
[hooks]
changegroup = hg update
This solution was suggested on a slightly different question:
Cloning mercurial repo to the remote host
At least under Windows this seems to work only on local repositories. The reason is that hg tries to run a command on the remote path, which fails because cmd does not support UNC paths as the current directory.
Adding the repository URL explicitly fixes this, but it's not client-independent anymore:
[hooks]
changegroup = hg update -R %HG_URL%
You could treat the server repository as your "local working directory" and then PULL from your own PC to that location. If you use hg pull --update then it will automatically update the working folder to the latest.
One way to do this is to log in to your NAS and run the hg command-line program there. Alternatively, you could mount the NAS folder on your local PC, chdir to its mapped local folder, and use your local hg client from there.
This might seem like an odd thing to do but Mercurial doesn't care which is the "clone" and which is the "server", you can swap them interchangeably in your workflow.
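As a rough sketch, assuming the share is mounted locally (all paths here are examples):
cd /mnt/nas/project
hg pull --update /home/me/dev/project
This pulls your new changesets into the share's repository and updates its working directory in one step.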

Mercurial: remote operation with different environments (remotecmd .hgrc)

There are two projects in which I collaborate; they live on different servers, A and B.
While A has the hg program in /opt/mercurial/bin/hg, B has it in /usr/local/bin/hg.
When I want to pull/push changes to either remote, I have to manually modify the .hgrc file in order for the option remotecmd to point to the right location of hg.
I would like to know if it is possible to set up different remotecmd paths for different remotes, so that I don't have to manually change the path of the hg program every time I need to do some remote operation.
I saw this question: Setup platform-dependent hgrc, but it seemed to me that there should be a more native way to do this (something like a built-in setting). So far I haven't been able to find it, so any help will be welcome :)
Thanks!
Ideally you just get the hg binary into your $PATH on both servers and you don't have to think about it anymore. That can be done in the system's /etc/profile or in your .ssh/environment on the remote server. Most people never even need to think about remotecmd.
If they're separate projects (or even separate local clones of the same project) you can set remotecmd in the .hg/hgrc file within each repository -- settings don't have to live only in your ~/.hgrc file.
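For example, in the clone that talks to server A you might put (using this question's path for A):
[ui]
remotecmd = /opt/mercurial/bin/hg
while the clone for server B would point remotecmd at /usr/local/bin/hg instead.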
Also be aware that --remotecmd is available as a command line option for push and pull, so you could use that and even combine it with something like:
[alias]
pusha = push --remotecmd /opt/mercurial/bin/hg
and then you can just do hg pusha
Really though, just try to get hg into your path like everyone else does.

Hgweb and changegroup hook not working

I'm using hgweb to publish my local repositories.
/project_path/project_name/.hg/hgrc contains:
[hooks]
changegroup.bitbucket = hg push ssh://hg@bitbucket.org/user/repo
When I use hg serve, all changegroup hooks work fine, but when I use hgweb through nginx with FastCGI they don't fire at all. I need this functionality to have some kind of backup.
It's most likely Trust.
Mercurial needs to trust a hgrc file before it will parse/run it. If your /project_path/project_name/.hg/hgrc file is owned by you, then when you run hg serve as yourself it's parsed and used. However, nginx runs as its own user, probably nginx, which doesn't trust files owned by you, so when it invokes Mercurial those files are ignored (see Note).
That Mercurial trust link gives a better explanation and talks about how to say "nginx trusts X", but if it's a single-user system, or you want everyone to trust you, you can just throw a trust block in the system-global /etc/mercurial/hgrc file saying everyone trusts X.
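A minimal sketch of such a block in /etc/mercurial/hgrc (the user and group names are placeholders for whoever owns the repository files):
[trusted]
users = repoowner
groups = repogroup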
Note: It doesn't actually just ignore those files: it puts a warning on STDERR, which in Apache-land you'd find in your error.log, but in nginx-land no one ever seems to find those warnings, so I've no idea where nginx puts them.
I assume you have some kind of authentication issue here. When running hg serve from the command line, your ssh credentials are provided by an ssh-agent running in the background.
However, this does not work when running hgweb as a service, because there is no ssh-agent running in the background. Even if you started one, there would be no way to enter the passphrase for your ssh key.
Bitbucket uses ssh keys to authenticate you, so you can't just add your password to the above hg push command.
One possible solution would be to not use Bitbucket as your backup, but a different Mercurial server that lets you provide a simple password on the command line.
I'm afraid I can't help you further with this.

Having ssh not ask for a password every time with Mercurial

I'm a Mercurial newbie and I just started to use it.
I work in a local repository, and when I commit changes I use hg <command> ssh://user@host/usr/www/site.com/project for pushing, pulling, and seeing the incoming/outgoing changes.
But every time, ssh asks me for the password. Is there a way to remember my ssh password for this purpose? Also, how can I avoid writing the full command (ssh://user etc.) every time?
You have to set up your ssh with public keys. There are many tutorials on the web, e.g. see Getting started with SSH.
Once you have the keys in place, you can use ssh-agent so that you only enter your local private-key passphrase once per session. There are also GUI tools that act as an ssh-agent (e.g. SSHKeychain on a Mac).
Or, if you have low security requirements, you can generate your key without a passphrase.
But please don't store cleartext passwords in config files.
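A minimal sketch of the key setup, assuming OpenSSH on the client (user and host stand in for this question's server):
ssh-keygen -t ed25519
ssh-copy-id user@host
After that, push/pull over ssh://user@host/... authenticates with the key instead of prompting for the account password (the agent handles the key's passphrase, if you set one).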
There are two possibilities to avoid typing the URL on each command:
From hg help urls
These URLs can all be stored in your hgrc with path aliases under the
[paths] section like so:
[paths]
alias1 = URL1
alias2 = URL2
...
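A concrete example for this question's repository (the alias name "site" is arbitrary):
[paths]
site = ssh://user@host/usr/www/site.com/project
Then hg push site and hg pull site work without the full URL.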
The other possibility is using the default paths:
default:
When you create a repository with hg clone, the clone command saves the
location of the source repository as the new repository's 'default'
path. This is then used when you omit path from push- and pull-like
commands (including incoming and outgoing).
That's what I often use: usually you get your working directory by cloning from somewhere, and from then on I just don't specify the URL and use the default.

Mercurial repository on FTP

I wonder if it's possible to create a Mercurial repository in some FTP folder with RW access and serve it to clients from there. Has someone done a thing like that?
Thank you in advance.
Just for the sake of completeness, because I had the same problem and feel that there is another, much simpler solution:
Mercurial cloning on local folders "just works", so if you mounted the FTP share as a local folder or drive, you could just push/pull/clone to that (and have your repository end up on the FTP server).
On Windows, you can e.g. use FTPUse or NetDrive to mount your FTP folder as a local drive; the former is free but a CLI tool that removes the virtual drives when the program is closed, the latter has a GUI but is only free for personal use and doesn't work (yet) on Win8. I don't have a Linux machine at hand now, but you should be able to achieve the same using ftpfs.
Once you've done that (and your FTP server is now mapped e.g. to f:), you can simply use that virtual drive (or any subfolder) as a remote target for your Mercurial operations. Works like a charm for me.
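On Linux, a rough equivalent sketch using curlftpfs (assuming it's installed; the host, credentials, and paths below are placeholders):
mkdir -p ~/ftpmount
curlftpfs ftp://user:password@ftp.example.com/repos ~/ftpmount
hg clone ~/ftpmount/project ~/project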
All things are possible. But that would be hard.
The bit where the network transport matters is when cloning a repository, and the standard ways of doing that depend on either serving over HTTP, or having SSH access to the repository host. There's no FTP-based transport for cloning as far as I can see.
If that's the only sharing mechanism you have available, then you could probably work something out using Mercurial bundles. The procedure would be something like the following:
Commit your edits to a local repository
Make a bundle using hg bundle --all my-bundle.hg
FTP my-bundle.hg to the server
The other users of the repository can then use FTP to retrieve the my-bundle.hg file to their local machine, go to their local copy of the repository, and then hg pull my-bundle.hg to pull in any revisions which are in the bundle but not in the local repository. When they want to share their changes, they make a fresh bundle as above, and push that back to the server. The --all option puts all of the changesets into the bundle file -- you can be cleverer and only export 'recent' changes, but that gets a little more complicated and risks losing changesets: using --all is brutal but fail-safe.
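As a sketch of the whole round trip (file names and the FTP URL are placeholders, and curl is just one way to move the file):
hg commit -m "my changes"
hg bundle --all my-bundle.hg
curl -T my-bundle.hg ftp://user:password@ftp.example.com/repo/
Then on another user's machine:
curl -O ftp://user:password@ftp.example.com/repo/my-bundle.hg
hg pull my-bundle.hg
hg update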
There's obviously a fair amount of scope for confusion here, and race conditions (timestamped filenames might help), and hair-pulling-out, and your users would doubtless appreciate some scripts to make this easier, but if all you've got available is an FTP server, you don't have very many options.
Good luck.
This question on SuperUser might be interesting. The core idea seems to revolve around running a background process that synchronizes a local folder with a remote FTP folder, which might be of use to you.
But I don't know what happens when more than one user tries to synchronize at the same time, since this approach bypasses all the protection Mercurial has regarding locking the tree and such.