Modify behavior of core Mercurial commands using extensions? - mercurial

Is it possible to modify the behavior of a core Mercurial command (e.g. hg commit or hg status) by creating an extension?
For example, would it be possible to modify hg commit to ask the user to enter an issue tracking ID?
I understand that hook scripts could be used, but such scripts are not distributed via hg pull and need to be configured for every repository used.

Answering my own question
The Mercurial API provides the extensions.wrapcommand(table, command, wrapper) function, which seems to offer the desired feature.
From the source code:
Wrap the command named `command' in table. Replace command in the command table with wrapper. The wrapped command will be inserted into the command table specified by the table argument. The wrapper will be called like wrapper(orig, *args, **kwargs), where orig is the original (wrapped) function, and *args, **kwargs are the arguments passed to it.
A couple examples:
The prompt extension by Steve Losh
The nobranch extension from Fog Creek
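As a minimal sketch of the idea (untested; the extension name is illustrative, and recent Mercurial versions expect byte strings rather than plain strings), an extension that wraps hg commit to ask for an issue tracking ID could look roughly like this:
from mercurial import commands, extensions

def wrapped_commit(orig, ui, repo, *pats, **opts):
    # ask the user for an issue tracking ID and prepend it to the commit message
    issue = ui.prompt('issue ID: ', default='')
    if issue and opts.get('message'):
        opts['message'] = '[%s] %s' % (issue, opts['message'])
    return orig(ui, repo, *pats, **opts)

def uisetup(ui):
    # replace the "commit" entry in the global command table with the wrapper
    extensions.wrapcommand(commands.table, 'commit', wrapped_commit)
Enable it like any other extension, e.g. with issueprompt = /path/to/issueprompt.py in the [extensions] section of an hgrc.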

Just to note: both extensions and hooks have exactly the same mass-deployment limitations. In both cases you have to induce your internal users to download a piece of software, be it a hook or an extension, and then to enable it in either the hgrc in their homedir or within the repo.
For both hooks and extensions you can distribute the software using any mechanism, and you can enable them globally in /etc/mercurial/hgrc.
Extensions have some advantages over hooks as to how deep they can dig into the Mercurial internals, but deployment is identical.
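For example, a system-wide /etc/mercurial/hgrc that enables both might look like this (the names and paths are illustrative):
[extensions]
issueprompt = /usr/local/lib/hg-ext/issueprompt.py

[hooks]
pretxncommit.check-issue = /usr/local/bin/check-issue-id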

Related

Edit repo config file from extension

Is there any built-in mechanism for editing a repository's hgrc config file using the Mercurial API? I'm writing an extension that requires storing some options in the config file, and I'd like to provide a command for doing so (the options that need to be stored involve timestamps and would be a little tricky for users to edit manually).
The Mercurial codebase does not provide any automated way to edit hgrc files, except when they're first created by a clone operation, and even then only to set the paths.default setting to the origin.
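In practice, extensions that want to persist options write the file themselves. A naive sketch (an assumed approach, not a Mercurial API; it blindly appends a new section instead of merging with an existing one):
import os

def save_option(repo, section, key, value):
    # repo.path is the repository's .hg directory
    hgrc = os.path.join(repo.path, 'hgrc')
    with open(hgrc, 'a') as f:
        f.write('\n[%s]\n%s = %s\n' % (section, key, value))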

Mercurial build on push

I'd like for a build to be done (on the server) each time a push is made to our central Mercurial repository.
Builds are usually kicked off on our build server by running a Visual Build file either manually or via a scheduled task.
What are the ways of achieving this?
Simple, low impact solutions are preferred.
As Pablo suggested, you can do this with a hook, but you'll need an incoming hook on the server side. This hook runs "after a changeset has been pulled, pushed, or unbundled into the local repository" (hgrc manpage).
Edit the .hg/hgrc file of the repository located on the server and define your build hook as follows:
[hooks]
incoming = /path/to/executable-build-script
Of course, the build script called here just needs to be a trigger for whatever build process you actually use.
Note that an incoming hook runs for every single changeset in a push. If you don't want this, use a changegroup hook -- it runs only once for each push, no matter how many changesets it carries.
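For example, to trigger the build once per push (the script path is illustrative):
[hooks]
changegroup = /path/to/executable-build-script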
Another way, in addition to the hooks that Pablo mentions, is to set up a continuous integration server, like TeamCity. Then you could ask TeamCity to monitor your repository, pull new changesets and start the visual build script for you.
Disclaimer
These findings are for the TortoiseHg client and a Mercurial server behind Apache on Win32.
Try #1
The naive solution would be to make your push kick off the build.
In .hg\hgrc
[hooks]
incoming=.hg\build.py
In build.py
import os
os.system(r'\Progra~2\Micros~2.0\Common7\IDE\devenv /build release project.sln > logfile')
Problem
What you'll find is that, after a push, the TortoiseHg client won't return until your os.system call returns. This may or may not be acceptable. In my shop a build took about 20 minutes, and my boss deemed that unacceptable.
Try #2
My solution was for the hook to return immediately after creating a REQUESTBUILD file in the root directory.
In .hg\hgrc
[hooks]
incoming = .hg\write_buildrequest_file.bat
In .hg\write_buildrequest_file.bat
echo REQUESTBUILD > \REQUESTBUILD
Meanwhile, I had a python script running in an infinite loop, checking for the presence of REQUESTBUILD.
In .hg\monitor_buildrequest_file.py
import os
import subprocess
import time

while True:
    if os.path.exists(r"\REQUESTBUILD"):
        os.remove(r"\REQUESTBUILD")  # clear the request flag
        os.chdir("/yourrepo/.hg")
        retcode = subprocess.call(r"\python27\python.exe build.py")
    else:
        time.sleep(10)  # poll every 10 seconds
build.py would generate an HTML file of results, which the submitter would have to pull via their web browser.
There are other issues (pushes while a build is commencing, saving historical results, building out of the working directory vs copying elsewhere) but this is the general idea.
You need to handle repository events with hooks.
So, after a commit event, you need to run a script that performs your build accordingly.

Using Mercurial via a USB flash drive

In short:
How can I use Hg to synchronize repositories between two computers using a flash drive as intermediary?
With more detail:
I often develop code on computers that aren't networked in any way, and I transfer files between these machines using a USB flash drive. Now I would like to develop some software across these machines using Hg repositories on each machine that I can frequently sync-up using the flash drive transfer mechanism.
I'm slightly familiar with Hg, as I use it in the most simple way possible for versioning only my own work on independent machines, but am uncertain as to exactly what I should do to use it to synchronize repositories between two computers using a flash drive as intermediary. Maybe, for example, I need to create a temporary repository on the flash drive (using “clone”) from which I then sync to (using “push” and “pull”), and do this by A→flash, flash→B, B→flash, flash→A? The more specificity in your answer regarding the sequence of actions and commands, the more useful to me.
Finally, how do I get this process started? Do I need to do something so Hg knows these are all part of one code base? For example, each of my current repositories on the different computers was created independently from a time before I started using Hg, and although all the code is similar, independent changes have been made to each, and the repositories know nothing about each other. If what I need to do with this is different than what I need to do for the ongoing case once I have everything unified, spelling this process out for me as well would also help.
In case it's important, these machines can be running any of Windows, Mac, or Linux, and my versions of Mercurial are slightly different on each machine (though the Mercurial versions could be unified if needed).
What you have described above in terms of using the flash drive as an intermediate storage location should work. My process would be:
initial setup
create repo on computer A (using hg init)
clone the repo from computer A to flash drive
hg clone C:/path/to/repo/A X:/path/to/flash/drive/repo
clone the repo from flash drive to computer B
hg clone X:/path/to/flash/drive/repo C:/path/to/repo/B
working process
edit/commit to repo on computer A
push from computer A to flash drive
hg push X:/path/to/flash/drive/repo
pull from flash drive to computer B
hg pull X:/path/to/flash/drive/repo
edit/commit repo on computer B
push from computer B to flash drive (same commands as above)
pull from flash drive to computer A (same commands as above)
Finally, how do I get this process started? Do I need to do something so Hg knows these are all part of one code base?
Mercurial knows whether two arbitrary repositories have a common ancestor by looking at the SHA1 hashes of the changesets in each repo. In other words, as long as both repos have at least one common changeset in their histories, Mercurial will let you push, pull and merge between them. In your specific case, where the two repositories were created independently and share no common history, Mercurial will need some help. The best thing to do is to get both working copies to a state where they are identical, then perform your hg init (and initial commit) on one machine and clone from there. Mercurial should handle sharing from this point on.
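Concretely, that initial unification might look like this (a sketch; here computer A is picked as the starting point):
cd /path/to/project    # on computer A, with the agreed-upon source tree
hg init
hg add
hg commit -m "initial import"
Then clone to the flash drive and on to computer B as described above, and re-apply computer B's local differences as ordinary commits.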
When working offline on different machines, it is better to use the bundle command that comes with Mercurial. So, echoing what dls wrote, but with a slightly changed process.
Initial setup as mentioned by dls, or:
Go to your Mercurial repository top directory
Create bundle: hg bundle --base null ../project.hg
Copy the project.hg file to your other computer
Create a directory there
Make it a Mercurial repository: hg init
Incorporate the bundle: hg pull <path/project.hg>
hg update
Check hg log; both repositories will show the same base revisions and tip
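Put together, the bundle-based initial setup looks like this (paths are illustrative):
hg bundle --base null ../project.hg    # on the first computer, from the repo root
Copy project.hg to the other computer, then:
mkdir project
cd project
hg init
hg pull /path/to/project.hg
hg update
hg log    # should show the same base revisions and tip as the original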
Workflow using bundle
I use a slightly different workflow. I keep these repositories as distinct repositories.
I refer to them as repo1 and repo2.
Suppose that the current tip of repo1 is 4f45839f613c.
You make changes and commit them in repo1
Create a bundle of the changes:
Command (the bundle contains all changes since the specified base revision):
hg bundle --base 4f45839f613c changes.bundle
Take it to repo2 by copying the bundle.
You can simply pull the bundle into repo2:
Command:
hg pull changes.bundle
If the bundle contains changes that are already present in repo2, these will be ignored when pulling. As long as the bundle doesn't grow too large, this allows you to use the bundle command with the same --base revision again and again to create bundles that include further changes.
About bundles: they are (very well) compressed. The following creates a compressed backup of the whole repository:
hg bundle --base null backup.bundle
[Edit : Adding some links on this topic]
http://blog.experimentalworks.net/2010/09/review-remote-changes-offline-in-mercurial/
https://www.mercurial-scm.org/wiki/Bundle
[Edit: what I think is the advantage of using bundles]
Bundles can be created offline, copied, or sent via mail. Pushing to a repo on a flash drive requires the drive to be connected; bundles are easier because the two repos you push from and pull into do not have to be available at the same time.
Apart from that, bundles can be of two types: changeset and incremental. Changeset bundles are complete, standalone bundles. You can also use a bundle as a single-file backup of the repository.

Mercurial - How to stop tracking modified file but keep the first version in repository

I created an hg repository containing my source tree. I want to keep the first version of some files, such as Makefile, in the repository, and then have hg not see them as modified even though I have modified them.
The original problem is that ./configure usually modifies the Makefile, but I don't want the build files to be committed to the repository. So I want to keep only the first version of configure and Makefile in the repository, so that everybody who clones my repository can run ./configure themselves without dirtying the repository.
I tried hg remove and hg forget, but those stop tracking the files and also delete them in the next revision of the repository.
.hgignore doesn't do the trick either.
I could hg revert every time I run ./configure or make, but that's not an efficient way.
Are there any better ways?
It's usually good form not to track the configure script at all. There are some reasons for this:
It's huge. I've seen code bases where the configure script and helper macro libraries were more than ten times the size of the actual code being compiled.
When other developers make changes to configure.in(.ac), they are going to need to commit a new configure script. If three people do that, there's a good chance that Mercurial will require at least one of them to manually resolve a merge conflict in configure itself. Keep in mind that configure is machine-generated; attempting to read it (much less resolve merge conflicts in it) may make your eyes bleed.
Generally, you'll offer a program in source form via two methods:
Download of a release archive (e.g. foo-1.2.3-rc2.zip); this can contain the configure script.
Downloading the repository directly using Mercurial. If they want to work with that, they'll need to have autoconf installed.
In the root of my repositories, I usually include a file called autogen.sh that runs all of the steps needed (aclocal, autoconf, ...), which also handles alerting the user if they need something installed. I.e. Could not find tool aclocal, please install the autoconf package.
It's really best to just go with the autogen.sh method. This means tracking only configure.in (or configure.ac) and the associated Makefile.in files. Let each builder configure their own tree, and provide a distclean target to remove all files that configure generates. Finally, provide a maintainer-clean target to remove anything that the configuration suite itself generated, e.g. configure.
That should help make nightly builds easy.
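A minimal autogen.sh along the lines described above might look like this (a sketch; the exact tool sequence depends on the project):
#!/bin/sh
# regenerate the autotools output; complain if a required tool is missing
for tool in aclocal autoconf automake; do
    command -v "$tool" >/dev/null 2>&1 || {
        echo "Could not find tool $tool, please install the autoconf/automake packages." >&2
        exit 1
    }
done
aclocal && autoconf && automake --add-missing --copy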
You could try to set up a pre-commit hook that always restores the original Makefile content if it is found in the changeset.
The SO question illustrates reading the content of the changeset to be committed.
Make sure to use the pre-commit hook, and not precommit.
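As a minimal sketch of that idea (assuming the generated Makefile should simply never be committed; the hook name is illustrative), the repository's .hg/hgrc could contain:
[hooks]
pre-commit.restore-makefile = hg revert --no-backup Makefile
This discards local modifications to Makefile before the commit command runs; a more careful hook would first check whether Makefile actually has pending changes (and make sure it exits successfully when there is nothing to revert), as the SO question mentioned above illustrates.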

How can I integrate a bitbucket repository with the hosted on-demand version of FogBugz?

I use the on-demand (hosted) version of FogBugz. I would like to start using Mercurial for source control. I would like to integrate FogBugz and a BitBucket repository.
I gave it a bit of a try but things weren't going very well.
FogBugz requires that you hook up your Mercurial client to a fogbugz.py Python script. TortoiseHg doesn't seem to have the hgext directory that they refer to in the instructions.
So has anyone successfully done something similar?
Post-mortem:
Bitbucket now has native FogBugz support, as well as other post-back services.
http://www.bitbucket.org/help/service-integration/
From the sounds of it, you want to run the hook on your local machine. The hook and directions are intended for use on the central server.
If you are the only one working in your repository, or don't mind commits not showing up in FB until after you do a pull, then you can add the hook locally to your primary clone. If you are using your primary clone, you need to do something slightly different from what they say here:
http://bugs.movabletype.org/help/topics/sourcecontrol/setup/Mercurial.html
You can put your fogbugz.py anywhere you want; just add a path line to the [fogbugz] section of that repository's hgrc file:
[fogbugz]
path=C:\Program Files\TortoiseHg\scripts\fogbugz.py
Just make sure you have Python installed. You may also wish to add a commit hook so that local commits to the repository also get into FB.
[hooks]
commit=python:hgext.fogbugz.hook
incoming=python:hgext.fogbugz.hook
On the FogBugz install, you will want to put the following in for your log URL:
^REPO/log/^R2/^FILE
and the following for your diff url:
^REPO/diff/^R2/^FILE
When the hook script runs, it connects to your FB install and sends it a few parameters. These parameters are stored in the DB and used to generate URLs for diffs and log information. The script sends the URL of the repo; this is the baseurl setting in the [web] section. You want this URL to be the URL of your BitBucket repository; it will be used to replace ^REPO in the URL templates above. The hook script also passes the revision ID and the file name to FB; these replace ^R2 and ^FILE. So, in summary, this is the stuff you want to add to the hgrc file in your .hg directory:
[extensions]
hgext.fogbugz=
[fogbugz]
path=C:\Program Files\TortoiseHg\scripts\fogbugz.py
host=https://<YOURACCOUNT>.fogbugz.com/
script=cvsSubmit.asp
[hooks]
commit=python:hgext.fogbugz.hook
incoming=python:hgext.fogbugz.hook
[web]
baseurl=http://www.bitbucket.org/<YOURBITBUCKETACCOUNT>/<YOURPROJECT>/
One thing to remember is that FB may get notified of a checkin before you actually push those changes to BitBucket. If this is the case, do a push and things will work.
EDIT: added section about the FB server and the summary.
Just a heads-up: Fog Creek has released Kiln which provides Mercurial hosting that's tightly integrated with FogBugz and doesn't require any configuration.
I normally wouldn't "advertise" on Stack Overflow (disclaimer: I'm one of the Kiln devs), but I feel that this directly answers the original question.
It is possible to integrate your Git BitBucket repository with the FogBugz issue tracker, but unfortunately it is not properly documented.
You have to follow the steps described at https://confluence.atlassian.com/display/BITBUCKET/FogBugz+Service+Management, but beware:
In the CVSSubmit URL you need to put the URL WITHOUT the "?ixBug=bugID&sFile=file&sPrev=x&sNew=y&ixRepository=" parameters.
It should just be "https://your_repo.fogbugz.com/cvsSubmit.asp"
You will need to mention your FogBugz case ID in the git commit message by putting a "BugzID: ID" string in it (this is not documented anywhere :-( ), similar to this:
git commit -m "This is a superb commit which solves case BugzID: 42"
Of course, the commit info will be sent to FogBugz after you push your commit to the BitBucket server, not after you do a local commit.