mercurial: automatically pulling an external library under development

I work on project1, which uses module1, developed independently. Both reside on the same mercurial server (development for project1 and module1 is done locally, then commits are pushed to the server; the two are separate, independent repositories).
How can I ensure that, when pulling changes for project1, I also pull module1 (with a warning if my local module1 requires a merge, in case I have worked on module1 in the meantime)?

You can look at attaching module1 as a subrepo to project1, but this won't work exactly the way you want it to. This allows you to develop module1 as its own repo, but when developing project1, each changeset will be stamped with the changeset of module1 in the subrepo at the time of commit.
https://www.mercurial-scm.org/wiki/Subrepository
http://mercurial.aragost.com/kick-start/en/subrepositories/
So this won't guarantee that you have the latest version of module1 whenever you pull project1, but it will guarantee that you at least have module1 up to the version that was used with a particular version of project1.
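For reference, attaching module1 as a subrepo comes down to a one-line .hgsub file in the root of project1 (the URL below is just a placeholder), which you then hg add and commit:
module1 = https://hg.example.com/module1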
Alternatively, since by default mercurial doesn't support exactly what you are asking, you can write your own hook for the pull command (a hook is a script that gets triggered when you do the action it is hooked to), which reads the .hgsub file and does a recursive pull and update of all subrepos.
https://www.mercurial-scm.org/wiki/Hook
You can probably accomplish this without subrepos, but in that case I'd advise you to have some file committed to project1 that your hook can read, which tells it which repos to pull. You should be able to use the normal output from the pulls, or check hg heads in each repo afterwards, to give feedback about needing to merge.
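As a rough sketch (the script name, its location in the root of your project1 clone, and the multiple-heads check are assumptions, not a tested recipe), a post-pull hook in the .hg/hgrc of your clone could look like this:
[hooks]
post-pull = sh "$(hg root)/pull-subrepos.sh"
with pull-subrepos.sh along these lines:
#!/bin/sh
# Pull every repository listed in .hgsub, and warn if one now needs a merge.
cd "$(hg root)" || exit 1
while IFS='=' read -r path url; do
    path=$(echo $path)   # trim surrounding whitespace
    url=$(echo $url)
    [ -z "$path" ] && continue
    hg -R "$path" pull "$url"
    # crude check: more than one open head suggests a merge is needed
    if [ "$(hg -R "$path" heads --template 'x' | wc -c)" -gt 1 ]; then
        echo "WARNING: $path has multiple heads - a merge is needed" >&2
    else
        hg -R "$path" update
    fi
done < .hgsub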

Related

Mercurial: create local copy of a remote repository at the remote repository

I use Mercurial on desktops, and then push local repositories to a centralized server. I noticed that this remote server does not hold local copies of files in its repositories (the directory is empty, except obviously for the .hg one).
What is the preferred way to populate these directories with local copies? (which in turn are used by various unrelated services on that server).
What I came up with so far is to use a hook and hg archive to create a local copy. This would be a satisfactory solution, but I need to configure a per-repository hgrc file (which is tedious, and I did not find a way to centralize this in /etc/mercurial/hgrc). Maybe a global script (in /etc/mercurial/hgrc, run for each changegroup event)? (In that case, how can I get the repository name to use in an if...then scenario?)
If you can get access to the remote repository, you could install a hook for when changegroups come in, and perform an hg update when that happens.
A quick check shows this in the FAQ (question 4.21), but to summarize/duplicate: edit the .hg/hgrc file on the remote repository, and add the following lines:
[hooks]
changegroup = hg update
Whenever the remote repository gets pushed to (or when it performs a pull), it will update to the latest changeset.
Some caveats - this may fail if any changes have been made to the files on the remote side (you could use hg update -C instead). Also, if you have pushed any anonymous branches (which you would have to consciously force), you may not update to what you want to update to.
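If you would rather have one server-wide configuration than per-repository hgrc files, a hook in /etc/mercurial/hgrc applies to every repository on the server. Here is an untested sketch; the export location and naming scheme are assumptions:
[hooks]
# after each incoming changegroup, export a clean copy of tip under the repo's name
changegroup.export = sh -c 'dest="/srv/exports/$(basename "$(hg root)")"; rm -rf "$dest" && hg archive -r tip "$dest"'
The basename of hg root gives you the repository name if you need it in an if...then test.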

Move Mercurial Repository To Another Server

We have a project that lives in a mercurial repository.
Our customer would like to take ownership of the code base by doing the following:
Set up a mercurial repository on a server belonging to the customer.
Import the existing code into the new mercurial repository.
What is the best way to achieve step 2?
Is it a simple matter of doing the following:
Clone the existing mercurial repository:
hg clone <existing mercurial repo URL>
Push the cloned repository into the new one:
hg push <new mercurial repo URL>
Am I missing any steps? What about the hgrc file? Does it have to be modified in any way prior to pushing the project into a new repository?
Yes, you can do what you describe; however, it is worth noting that if you do a simple hg clone of your main repository, a link will exist between the two, which may not be what you want. You can remove this link by editing the .hg/hgrc file and removing the default = ... item in the [paths] section.
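In other words, after cloning, the clone's .hg/hgrc will contain something like the following (the URL is just an example), and it is this entry you would delete:
[paths]
default = https://hg.example.com/existing-repo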
I find that a better way is to do it without cloning. This way you don't have the link between repositories, which, as this is going to a customer, may well be what you want.
The basic method is to set up a new repository with no changesets, and then bring in all of the changesets in one of three ways:
Push the changes from your repository to the new repository.
Pull the changes into the new repository from the old.
If you don't have access to the new repository, create a bundle that can be provided to the customer - this can then be unbundled or pulled into the empty repo.
Pushing and Pulling is done as you normally would, but specifying the repository location:
# create the empty repository
hg init .
# pull in everything from the old repo
hg pull /projects/myOriginalRepo
or to push...
# create the empty repository
hg init /projects/myNewRepo
cd /projects/myOriginalRepo
hg push /projects/myNewRepo
Creating a bundle is perhaps a nicer way, as you can write the bundle onto a DVD and give it to your customer wrapped in a bow with a nice greeting card:
cd /projects/myOriginalRepo
hg bundle --all ../repo.bundle
Everything gets written out to a single file, which can then be extracted with hg unbundle repo.bundle or hg pull repo.bundle, into a repository with no existing changesets.
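On the customer's side, applying the bundle would look something like this (the paths are placeholders):
hg init /projects/customerRepo
cd /projects/customerRepo
hg unbundle /path/to/repo.bundle
hg update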
Regarding the hgrc file, as already mentioned in another answer, it is not a version-controlled file, and so won't be copied. However, its contents are likely to be things like hooks to perform auto-building, or to validate changesets before they are applied. This is logic which would probably only make sense to your own organisation, and I would suggest you wouldn't want to impose it on your customer - they are, after all, taking ownership of your code-base, and may have their own systems in place for things like this.
In the simple case, that's all.
But if you have modified the .hg/hgrc file, then you need to move it to the remote server manually and (if necessary) adapt it to the new environment.
For example, you could have hooks set up in the original repository.
As for the clients, just change the path to the repository in the default entry of the [paths] section (or any other entries if you have several).
To move the master repository, you need to (a) create the new master repo and (b) tell the existing clients about it.
Create the new master repo any way you want: cloning or init+pushing, it makes little difference. Be sure to move over any contents of the old repo that are not under version control, including .hg/hgrc and any unversioned or ignored files that are not discardable. If you cloned, edit the new master's .hg/hgrc and remove the default path, so that it doesn't try to talk to the old master repo any more.
Existing clones of the old master repo still push to and pull from the old master. Everyone must edit their clone's .hg/hgrc, updating default (and/or default-push) so that it points to the new location. (They may also need to update authentication credentials, of course.)
Only then is the migration complete. Remove (or move/hide) the original repo so that if someone forgot to update their repo path, they'll get an error on push/pull instead of pouring data down a memory hole.
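For the clients, that edit is a single line in each clone's .hg/hgrc (the new URL is of course an assumption):
[paths]
default = ssh://hg@newserver.example.com/project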

Mq on a subrepo without write access

I have a dependency included as a subrepository (to which I have no write access) in my project.
I'd like to add a few personal customizations to that subrepository - possibly using mq.
I also would love to be able to just clone the main repo to build it. Currently I have to:
clone the repo - with subrepositories getting cloned automagically
manually clone all the patchqueues for subrepositories
How do I get rid of step 2? Is it even possible without an outside script? (I'm using bitbucket if it makes any difference).
One notion is to make the subrepo not the repo to which you have no write access, but a clone of your own based on their repo.
cd myclones
hg clone http://notmydomain.com/their-repo my-clone-of-their-repo
and in your project's .hg/hgrc you use a [subpaths] section to map their URL to your local clone:
[subpaths]
http://notmydomain.com/their-repo = ../my-clone-of-their-repo
Then you end up with your repo using your local (read-write) clone of their repo to which you otherwise have read-only access. This has a few benefits:
faster -- you're only checking local repositories for all actions
writeable -- you can edit directly in myproject/their-repo and commit and push (to your local clone)
And when you want to merge in their upstream changes you just go into ../my-clone-of-their-repo and hg pull and hg merge their updates into your customizations.
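A typical sync with upstream then looks roughly like this (same URL and layout as in the example above):
cd ../my-clone-of-their-repo
hg pull http://notmydomain.com/their-repo
hg merge
hg commit -m "merge upstream changes into my customizations"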

How can I keep some modifications from propagating in mercurial?

I am developing a web database that is already in use for about a dozen separate installations, most of which I also manage. Each installation has a fair bit of local configuration and customization. Having just switched to mercurial from svn, I would like to take advantage of its distributed nature to keep track of local modifications. I have set up each installed server as its own repo (and configured apache not to serve the .hg directories).
My difficulty is that the development tree also contains local configuration, and I want to avoid placing every bit of it in an unversioned config file. So, how do I set things up to avoid propagating local configuration to the master repo and to the installed copies?
Example: I have a long config.ini file that should be versioned and distributed. The "clean" version contains placeholders for the database connection parameters, and I don't want the development server's passwords to end up in the repositories for the installed copies. But now and then I'll make changes (e.g., new defaults) that I do need to propagate. There are several files in a similar situation.
The best I could work out so far involves installing mq and turning the local modifications into a patch (two patches, actually, with logically separate changesets). Every time I want to commit a regular changeset to the local repo, I need to pop all patches, commit the modifications, and re-apply the patches. When I'm ready to push to the master repo, I must again pop the patches, push, and re-apply them. This is all convoluted and error-prone.
The only other alternative I can see is to forget about push and only propagate changesets as patches, which seems like an even worse solution. Can someone suggest a better set-up? I can't imagine that this is such an unusual configuration, but I haven't found anything about it.
Edit: After following up on the suggestions here, I'm coming to the conclusion that named branches plus rebase provide a simple and workable solution. I've added a description in the form of my own answer. Please take a look.
From your comments, it looks like you are already familiar with the best practice for dealing with this: version a configuration template, and keep the actual configuration unversioned.
But since you aren't happy with that solution, here is another one you can try:
Mercurial 2.1 introduced the concept of Phases. The phase is changeset metadata marking it as "secret", "draft" or "public". Normally this metadata is used and manipulated automatically by mercurial and its extensions without the user needing to be aware of it.
However, if you made a changeset 1234 which you never want to push to other repositories, you can enforce this by manually marking it as secret like this:
hg phase --force --secret -r 1234
If you then try to push to another repository, it will be ignored with this warning:
pushing to http://example.com/some/other/repository
searching for changes
no changes found (ignored 1 secret changesets)
This solution allows you to
version the local configuration changes
prevent those changes from being pushed accidentally
merge your local changes with other changes which you pull in
The big downside is of course that you cannot push changes which you made on top of this secret changeset (because that would push the secret changeset along). You'll have to rebase any such changes before you can push them.
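For example, if you later committed revision 1235 on top of the secret 1234 and want to publish it, a sketch of that rebase (the revision numbers are hypothetical) could be:
hg rebase --source 1235 --dest default
hg phase --draft -r tip    # the rebased copy may still be marked secret
hg push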
If the problem with a versioned template and an unversioned local copy is that changes to the template don't make it into the local copies, how about modifying your app to use an unversioned localconfig.ini and fall back to a versioned config.ini for missing parameters? This way new default parameters can be added to config.ini and be propagated into your app.
Having followed up on the suggestions here, I came to the conclusion that named branches plus rebase provide a simple and reliable solution. I've been using the following method for some time now and it works very well. Basically, the history around the local changes is separated into named branches which can be easily rearranged with rebase.
I use a branch local for configuration information. When all my repos support Phases, I'll mark the local branch secret; but the method works without it. local depends on default, but default does not depend on local so it can be pushed independently (with hg push -r default). Here's how it works:
Suppose the main line of development is in the default branch. (You could have more branches; this is for concreteness). There is a master (stable) repo that does not contain passwords etc.:
---o--o--o (default)
In each deployed (non-development) clone, I create a branch local and commit all local state to it.
...o--o--o (default)
          \
           L--L (local)
Updates from upstream will always be in default. Whenever I pull updates, I merge them into local (n is a sequence of new updates):
...o--o--o--n--n (default)
          \     \
           L--L--N (local)
The local branch tracks the evolution of default, and I can still return to old configurations if something goes wrong.
On the development server, I start with the same set-up: a local branch with config settings as above. This will never be pushed. But at the tip of local I create a third branch, dev. This is where new development happens.
...o--o (default)
       \
        L--L (local)
            \
             d--d--d (dev)
When I am ready to publish some features to the main repository, I first rebase the entire dev branch onto the tip of default:
hg rebase --source "min(branch('dev'))" --dest default --detach
The previous tree becomes:
...o--o--d--d--d (default)
       \
        L--L (local)
The rebased changesets now belong to branch default. (With feature branches, add --keepbranches to the rebase command to retain the branch name). The new features no longer have any ancestors in local, and I can publish them with push -r default without dragging along the local revisions. (Never merge from local into default; only the other way around). If you forget to say -r default when pushing, no problem: Your push gets rejected since it would add a new head.
On the development server, I merge the rebased revs into local as if I'd just pulled them:
...o--o--d--d--d (default)
       \        \
        L--L-----N (local)
I can now create a new dev branch on top of local, and continue development.
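Condensed, one publish-and-resync round on the development server looks roughly like this (same branch names as above):
hg rebase --source "min(branch('dev'))" --dest default --detach
hg push -r default
hg update local
hg merge default
hg commit -m "merge rebased changes back into local"
hg branch dev    # start the next round of development (may need --force if the dev branch name still exists)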
This has the benefits that I can develop on a version-controlled, configured setup; that I don't need to mess with patches; that previous configuration stages remain in the history (if my webserver stops working after an update, I can update back to a configured version); and that I only rebase once, when I'm ready to publish changes. The rebasing and subsequent merge might lead to conflicts if a revision conflicts with local configuration changes; but if that's going to happen, it's better if they occur when merge facilities can help resolve them.
1. Mercurial has (following up on the comments) selective, hunk-based commits - see the Record Extension.
2. Local changes inside versioned public files can easily be handled with the MQ Extension (I do it for site configs all the time). Your headache with MQ
Every time I want to commit a regular changeset to the local repo, I
need to pop all patches, commit the modifications, and re-apply the
patches. When I'm ready to push to the master repo, I must again pop
the patches, push, and re-apply them.
is the result of an unpolished workflow and (some) misinterpretation. If you want to commit without the MQ patches applied, don't do it by hand: add an alias for commit that runs hg qpop --all followed by the commit, and use only this new command (see the sketch after this list). And when you push, you don't need to worry about the MQ state at all - you push changesets from the repository, not the working copy state. The local repo can also be protected without an alias by a pre-commit hook that checks the content.
3. You can try the LocalBranches extension, where your local changes are stored in local branches (merging the branches as things change) - but I found this way more troublesome compared to MQ.
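As a sketch of that commit alias (the name ci-clean is made up, and shell-alias argument handling may need adjusting for your Mercurial version), something like this in your hgrc:
[alias]
# pop all MQ patches, commit, then re-apply the patches
ci-clean = !$HG qpop --all; $HG commit "$@"; $HG qpush --all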

Storing separate named branches in mercurial without having to merge them

It's my first time using a DVCS and also as a lone developer, the first time that I've actually used branches, so maybe I'm missing something here.
I have a remote repository from which I pulled the files and started working. Changes were pushed to the remote repository and of course this simple scenario works fine.
Now that my web application has some stable features, I'd like to start deploying it and so I cloned the remote repository to a new branches/stable directory outside of my working directory for the default branch and used:
hg branch stable
to create a new named branch. I created a bunch of deployment scripts that are needed only by the stable branch and I committed them as needed. Again this worked fine.
Now when I went back to my initial working directory to work on some new features, I found out that Mercurial insists on only ONE head being in the remote repository. In other words, I'd have to merge the two branches (default and stable), adding in the unneeded deployment scripts to my default branch in order to push to the main repository. This could get worse, if I had to make a change to a file in my stable branch in order to deploy.
How do I keep my named branches separate in Mercurial? Do I have to create two separate remote repositories to do so? In which case the named branches lose their value. Am I missing something here?
Use hg push -f to force the creation of a new remote head.
The reason push won't do it by default is that it's trying to remind you to pull and merge in case you forgot. What you don't want to happen is:
You and I check out revision 100 of named branch "X".
You commit locally and push.
I commit locally and push.
Now branch X looks like this in the remote repo:
--(100)--(101)
      \
       \---------(102)
Which head should a new developer grab if they're checking out the branch? Who knows.
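Concretely, that looks like the following (the repository URL is a placeholder; note that newer Mercurial versions also ask for --new-branch the first time you push a new named branch):
hg push -f https://bitbucket.org/you/project
or, when the only obstacle is the new named branch:
hg push --new-branch https://bitbucket.org/you/project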
After re-reading the section on branchy development with named branches in the Mercurial book, I've concluded that for me personally, the best practice is to have separate shared repositories, one for each branch. I was on the free account at bitbucket.org, so I was trying to force myself to use only one shared repository, which created the problem.
I've bitten the bullet and got myself a paid account so that I can keep a separate shared repository for my stable releases.
You wrote:
I found out that Mercurial insists on only ONE head being in the remote repository.
Why do you think this is the case?
From the help for hg push:
By default, push will refuse to run if it detects the result would
increase the number of remote heads. This generally indicates the
client has forgotten to pull and merge before pushing.
If you know that you are intentionally creating a new head in the remote repository, and this is desirable, use the -f flag.
I've come from git expecting the same thing. Just pushing the tip looks like it might be one approach.
hg push -r tip