Mercurial: remote operation with different environments (remotecmd, .hgrc)

There are two projects in which I collaborate; they live on different servers, A and B.
While A has the hg program in /opt/mercurial/bin/hg, B has it in /usr/local/bin/hg.
When I want to pull/push changes to either remote, I have to manually modify the .hgrc file so that the remotecmd option points to the right location of hg.
I would like to know if it is possible to set up different remotecmd paths for different remotes so that I don't have to manually change the path of the hg program every time I need to do some remote operation.
I saw this question: Setup platform-dependent hgrc but it seemed to me that there should be a more native (something like a built-in setting) way to do this. So far I haven't been able to find it, so any help will be welcome :)
Thanks!

Ideally you just get the hg binary into your $PATH on both servers and you don't have to think about it anymore. That can be done in the system's /etc/profile or in your .ssh/environment on that remote server. Most people never even need to think about remotecmd.
If they're separate projects (or even separate local clones of the same project) you can set remotecmd in the .hg/hgrc file within each repository -- settings don't have to live only in your ~/.hgrc file.
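For example, the clone that talks to server A could have this in its .hg/hgrc (just a sketch, using the paths from the question):
[ui]
remotecmd = /opt/mercurial/bin/hg
while the clone that talks to server B uses:
[ui]
remotecmd = /usr/local/bin/hg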
Also be aware that --remotecmd is available as a command line option for push and pull, so you could use that and even combine it with something like:
[alias]
pusha = push --remotecmd /opt/mercurial/bin/hg
and then you can just do hg pusha
Really though, just try to get hg into your path like everyone else does.

Related

push to configured hg repository from web interface

I have a small group of developers and we all develop on our own machines. When we have code that is ready for testing, we merge and push to a RhodeCode installation. The hgrc file for my central RhodeCode repo is set up like this:
[paths]
test_env = /www/mysite/test
prod_env = /www/mysite/prod
[hooks]
changegroup = hg push test_env
so when a person checks code into RhodeCode, the changes are automatically pushed to the test environment. (There's an hg update hook in the test repo's hgrc file, so the code updates there.) This is perfect.
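For reference, the hook on the test repository's side can be as simple as this (a sketch of what "an hg update in the hgrc file" usually looks like):
[hooks]
changegroup = hg update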
BUT... I would like our RhodeCode admins to be able to push to prod without needing shell access on the server. Is there a way to allow someone to run an "hg push prod_env" from the RhodeCode interface? I figure since RhodeCode has full control over hg, it should be possible, but does this ability exist somewhere in RhodeCode? Would it be a huge task to add it?
If not, how would you go about allowing an authenticated user to push a repository to production without shell access? I have been googling, but I can't seem to find anything. I know I could write a PHP script with a passthru("hg push test_env"), but that seems like a permissions nightmare, as Apache runs as "nobody" and RhodeCode owns the repo.
Thoughts?
Obviously, you cannot push nothing. But you can try to add or edit some file from the RhodeCode interface (which allows you to do this) in prod_env. This should cause a local commit and a push without accessing a shell.
For those looking at this question, here's how I solved it:
Wrote a passworded page in PHP with a button that executes this code:
shell_exec('hg pull -R ../wp-content/themes/2014');
I then put hg update in the hgrc file for the prod website, and made the web user an authorized user of the repository.
It works pretty well - I have slight security concerns because of the resulting file ownership, but assuming the PHP follows proper practice, there aren't any problems.

Setting up a mercurial mirror

Can anybody tell me how to set up a mirror of a mercurial repository? I have a mercurial repo on my laptop, but want to auto mirror the repo on a NAS drive as a form of backup. Ideally, it would be cool if the solution checks a known location for a repo, and if one doesn't exist, create it, and from then on mirror any changes.
Another thing to bear in mind is that the NAS may not always be available, so I would need to accommodate this in some way.
I did something similar with git, but all the functionality should be in mercurial too.
I manually created a clone on some server (in my case a VPS somewhere on the net, in case my house burns down with NAS and laptops in it).
With git you can create a "naked" repository, i.e. w/o a branch checked out.
Then I regularly push to it.
This can be automated using hooks; see the Mercurial documentation on hooks for more info.
The trick is to get the handling off the commit hook (pun intended), so the syncing is not in your workflow. Run your push script using the 'at' command a couple of minutes out; then it runs asynchronously in the background. I would not get fancy here; just try to handle failures gracefully.
You now have a setup which will keep the backup synched within a couple of minutes.
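A rough sketch of such a hook, assuming a [paths] entry named backup already points at the mirror (the path alias and the delay are made up for illustration):
[hooks]
commit = echo "hg push backup" | at now + 5 minutes
The at job runs a few minutes later in the background, so the push never blocks your commit.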
Mercurial gives you the freedom to do that however you would like. If you wanted, you could just setup a process to copy the repo from your local machine to the NAS at a regular interval. Everything about the repo is stored in the directory, and everything in the directory is just a file.
However, it sounds to me like you want to setup something more akin to a version control system like Subversion. I do something like this with one of my projects (actually, I moved it from SVN to Mercurial, but that's a different answer).
I have a repository on xp-dev.com and my local repository on my computer. I do all of the work on my local repository I want to do, issuing hg com very frequently. When I am done for the day/night I do a hg push ssh://hg2.xp-dev.com/myrepo to send all of my local changes to the remote server.
So, really all you want to do is an hg push to put your local repo on your NAS and then remember to do it again on a regular basis.
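A minimal sketch of that idea, with a made-up NAS mount point:
hg init /mnt/nas/hg/myrepo        # once: create the (empty) mirror on the NAS
hg push /mnt/nas/hg/myrepo        # then push to it as often as you like
If you add a [paths] entry such as nas = /mnt/nas/hg/myrepo to the clone's .hg/hgrc, that second command becomes just hg push nas.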

Is there anything like svnserve for Mercurial?

I'm attracted to Mercurial as a DVCS platform, but would like an easy to use server similar to svnserve. There is HgServe, but that appears to be read-only. If I want to be able to host the server on another machine, it appears I need to set up apache, etc. Is that really the case? Is there an easier method for a local network where security isn't an issue?
The problem here is that it's so easy, the mercurial documentation fails to appropriately cover it. If you clone with ssh:
hg clone ssh://user@host//path/to/repo /local/path
It will do the right thing on the "server" system (it automatically runs hg serve on the other end for the duration of the operation), and then any subsequent operations (push, pull, etc.) will be automatically run over ssh. (Make sure you use the double slash after the hostname if you want your path to start at the filesystem root, otherwise it'll start wherever ssh puts you).
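To make the path rule concrete (user, host and paths are placeholders):
hg clone ssh://user@host/projects/repo       # projects/repo, relative to the login user's home directory
hg clone ssh://user@host//srv/hg/repo        # double slash: the absolute path /srv/hg/repo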
Note that Hg "users" are separate from ssh users, so if you want everyone to use the same restricted account for ssh, they can - hg will still identify their changesets by the user set up in their .hgrc.
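To illustrate that last point, each developer just sets their identity in their own ~/.hgrc (the name and address below are placeholders), and that is what gets recorded in their changesets no matter which ssh account carried the push:
[ui]
username = Jane Doe <jane@example.com>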

Mercurial - How to stop tracking modified file but keep the first version in repository

I created the hg repository with my source tree. I want to keep the first version of some files, such as Makefile, in the repository, and then have hg not see them as modified even though I modify them.
The original problem is that ./configure usually modifies the Makefile, but I don't want the build files to be committed to the repository. So I want to keep only the first version of configure and Makefile in the repository, so that everybody who clones my repository can run ./configure themselves without touching the repository.
I tried hg remove and hg forget, but those stop tracking and also delete the files in the next revision of the repository.
.hgignore doesn't do the trick either.
I thought of running hg revert every time I run ./configure or make, but that's not an efficient way.
Are there any better ways?
It's usually good form not to track the configure script at all. There are some reasons for this:
It's huge. I've seen code bases where the configure script and helper macro libraries were more than ten times the size of the actual code being compiled.
When other developers make changes to configure.in(.ac), they are going to need to commit a new configure script. If three people do that, there's a good chance that Mercurial will require at least one of them to manually resolve a merge conflict in configure itself. Keep in mind that configure is machine-generated; attempting to read it (much less resolve merge conflicts in it) may make your eyes bleed.
Generally, you'll offer a program in source form via two methods:
Download of a release archive (e.g. foo-1.2.3-rc2.zip); this can contain the configure script.
Downloading the repository directly using Mercurial. If they want to work with that, they'll need to have autoconf installed.
In the root of my repositories, I usually include a file called autogen.sh that runs all of the steps needed (aclocal, autoconf, ...) and also handles alerting the user if they need something installed, e.g. "Could not find tool aclocal, please install the autoconf package."
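A minimal autogen.sh along those lines might look like this (only a sketch; real projects often also run autoheader, automake, libtoolize and so on):
#!/bin/sh
set -e
# make sure the needed tools are installed before running them
for tool in aclocal autoconf; do
    if ! command -v "$tool" >/dev/null 2>&1; then
        echo "Could not find tool $tool, please install the autoconf package." >&2
        exit 1
    fi
done
aclocal
autoconf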
It's really best to just go with the autogen.sh method. This means only tracking configure.in (or configure.ac) and the associated Makefile.in templates. Let everyone who builds run configure themselves, and provide a distclean target to remove all the files configure generates. Finally, provide a maintainer-clean target to remove anything the configuration suite itself generated, e.g. configure.
That should help make nightly builds easy.
You could try to set up a pre-commit hook which always restores the original Makefile content if it is found in the changeset.
The SO question illustrates reading the content of the changeset to be committed.
Make sure to use the pre-commit hook, and not precommit.
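A blunt sketch of that idea, assuming you are happy to simply throw away local Makefile changes at commit time (which may be fine if ./configure regenerates them anyway):
[hooks]
pre-commit = hg revert --no-backup Makefile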

Mercurial repository on FTP

I wonder if it's possible to create a Mercurial repository in some FTP folder with RW access and serve it to clients from there. Has someone done a thing like that?
Thank you in advance.
Just for the sake of completeness, because I had the same problem and feel that there is another, much simpler solution:
Mercurial cloning on local folders "just works", so if you mount the FTP share as a local folder or drive, you can just push/pull/clone to that (and have your repository end up on the FTP server).
On Windows, you can e.g. use FTPUse or NetDrive to have your FTP folder mounted as a local drive; the former is free but is a CLI tool that removes the virtual drives when the program is closed, while the latter has a GUI but is only free for personal use and doesn't (yet) work on Win8. I don't have a Linux machine at hand right now, but you should be able to achieve the same using ftpfs.
Once you did it (and your ftp server is now mapped e.g. to f:), you can simply use that virtual drive (or any subfolder) as a remote target for your mercurial operations. Works like a charm for me.
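On Linux the same idea might look roughly like this (mount point, server and repository names are placeholders; curlftpfs is just one of several FUSE-based ftpfs tools):
curlftpfs ftp://user@ftp.example.com/ /mnt/ftp
hg clone ~/work/myrepo /mnt/ftp/myrepo     # first time: put the repository on the FTP share
hg push /mnt/ftp/myrepo                    # afterwards: sync your changes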
All things are possible. But that would be hard.
The bit where the network transport matters is when cloning a repository, and the standard ways of doing that depend on either serving over HTTP, or having SSH access to the repository host. There's no FTP-based transport for cloning as far as I can see.
If that's the only sharing mechanism you have available, then you could probably work something out using Mercurial bundles. The procedure would be something like the following:
Commit your edits to a local repository
Make a bundle using hg bundle --all my-bundle.hg
FTP my-bundle.hg to the server
The other users of the repository can then use FTP to retrieve the my-bundle.hg file to their local machine, go to their local copy of the repository, and then hg pull my-bundle.hg to pull in any revisions which are in the bundle but not in the local repository. When they want to share their changes, they make a fresh bundle as above, and push that back to the server. The --all option puts all of the changesets into the bundle file -- you can be cleverer and only export 'recent' changes, but that gets a little more complicated and risks losing changesets: using --all is brutal but fail-safe.
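Put together, one round trip might look roughly like this (file and path names are placeholders, and the actual upload/download depends on your FTP client):
hg bundle --all my-bundle.hg     # on your machine, after committing locally
# upload my-bundle.hg to the FTP server with whatever client you have
# ...later, on a colleague's machine, after downloading my-bundle.hg:
hg pull my-bundle.hg             # import any changesets not already in the local repository
hg update                        # refresh the working copy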
There's obviously a fair amount of scope for confusion here, and race conditions (timestamped filenames might help), and hair-pulling-out, and your users would doubtless appreciate some scripts to make this easier, but if all you've got available is an FTP server, you don't have very many options.
Good luck.
This question on SuperUser might be interesting. The core idea seems to revolve around running a background process that synchronizes a local folder with a remote FTP folder, which might be of use to you.
But I don't know what happens when more than one user tries to synchronize at the same time, since this approach bypasses all the protection Mercurial has regarding locking the tree and such.