I have a collection of Mercurial repositories on a network share. To enable offline work, I want a local copy of this collection on my laptop, and an easy way to synchronize the two when I'm online.
For this, I wrote a quick script that automatically synchronizes each local repository with the corresponding remote repository (push and pull), but it's missing a few desirable features:
automatic cloning of new repositories from the local to the remote collection (and vice versa)
the ability to organize (move/rename) a local repository and have the change applied on the remote side as well, the next time I synchronize
the ability to synchronize hg strip and other commands that rewrite repository history
the ability to synchronize against a hgwebdir collection or even Bitbucket
Are there any existing solutions that provide some (or all) of these features?
To my knowledge nothing like this exists. The safest way to move changesets back and forth between repositories is always hg push and hg pull, and neither of those commands operates on more than one source or destination repository.
For backup purposes I've done something like this before:
for thedir in $(find . -type d -name .hg) ; do
    repopath=$(dirname "$thedir")
    # -R selects the repository to operate on; the URL is the push destination
    hg -R "$repopath" push "ssh://mybackupserver//path/to/backups/$(basename "$repopath")"
done
which pushes all local repos to off-site backups. In theory you could do both push and pull and, if necessary, an init/clone, but you'll start running into edge cases pretty quickly.
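For what it's worth, a two-way variant might look like the hedged sketch below (the REMOTE base URL is an assumption, and it runs into exactly the edge cases mentioned above, e.g. repositories that exist on only one side):

#!/bin/sh
# sketch: pull, then push, every local repository against a remote collection;
# new/renamed repositories and stripped history are NOT handled
REMOTE=ssh://mybackupserver//path/to/backups
for thedir in $(find . -type d -name .hg) ; do
    repopath=$(dirname "$thedir")
    name=$(basename "$repopath")
    hg -R "$repopath" pull "$REMOTE/$name"   # bring in remote changesets
    hg -R "$repopath" push "$REMOTE/$name"   # send local changesets
done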
Related
We have a dedicated issue tracking (Redmine) machine, which has a Mercurial repository (call it "Redmine repository"). Redmine is set up to use that repository, and as far as I understand, Redmine never makes any changes to that repository. All developers (eventually) push their changes to that repository.
We also have a dedicated production machine, which can execute the code, but is not used to make any changes to the code.
We have two choices:
Set up another Mercurial repository on the production machine (call it "production repository"). When a new production release is approved, pull the changes from the Redmine repository to the production repository, and then update the local working directory to the appropriate revision from the production repository.
Reuse the existing Redmine repository on the production machine, designating it a local repository for the Mercurial installation there (the Redmine repository is on a shared drive that can be easily mounted on the production machine). Whenever a new production release is approved, update the local working directory to the appropriate revision from the Redmine repository.
With option #2, we get rid of an extra "pull" step (from Redmine repository to production repository), which slightly simplifies the process. But I'm not sure if it's ok that a single repository is used by two Mercurial installations as if it's local.
Any comments on this choice (or any other aspect of this setup) are appreciated!
It sounds like a bad idea. Mercurial does a really good job of keeping reads and writes to its repository atomic, but it has a harder time doing that when the repository is on a shared drive -- even if only one local repository is using it -- because network shares (especially on Windows) don't always deliver the atomicity guarantees they claim.
Ideally, both the working dir and the repository are local when possible, and you use push/pull to get changesets to/from a network share. If that's not possible, then having a single local application using the repo on the remote file system is the best option.
If you positively want to try having two clones use the same underlying repository, check out the ShareExtension, which ships with Mercurial but is for advanced users only.
Instead of trying to piggy-back, why not just put a hook like this in your Redmine repository:
[hooks]
changegroup = hg push //production/clone
That will automatically push changesets that arrive in Redmine to production.
A noob question... I think
I use Mercurial for my project on my laptop. How do I submit the project to an online server like CodePlex?
I'm using TortoiseHg and I can't find the upload interface for submitting the project online...
From the command line, the command is:
hg push <url>
to push changes to a remote repository.
In TortoiseHg, this is accessed through the "Synchronize" function, which seems to show up when you right-click in a Windows Explorer window, but not on a file. It's also available in the Workbench; the icon is two arrows pointing in a circle.
For these things, I find the best way to go is the command-line interface. TortoiseHg is OK if you need to perform some common operations from the file browser, and it's a nice tool for visualizing some aspects of your repository, but it doesn't implement all of Mercurial's features in full detail, and it renames and bundles some operations for no apparent reason.
I don't know how things work at CodePlex, but I assume it is similar to Bitbucket or GitHub, in which case here's what you'd do:
Create an empty repository on the remote end (CodePlex / Bitbucket / ...).
Find the remote repository's URL - for Bitbucket, it is https://bitbucket.org/yourname/project, or ssh://hg@bitbucket.org/yourname/project.
From your local repository, commit all pending changes, then issue the command: hg push {remote_url}, where {remote_url} is the URL of the remote repository. This will push all committed changes from your local repository to the remote repository.
Since the remote's head revision (an empty project) is the same as the first revision in your local copy (because all hg repositories start out empty), Mercurial should consider the two repositories related and accept the push.
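Put together, the first publish might look like this minimal sketch (the path, commit message, and URL are placeholders):

cd ~/projects/myproject                          # your local repository
hg commit -m "final changes before publishing"   # commit pending work
hg push https://bitbucket.org/yourname/project   # send every changeset to the empty remote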
For an introductory guide to command-line mercurial, I recommend http://hginit.com/
I am developing a web database that is already in use for about a dozen separate installations, most of which I also manage. Each installation has a fair bit of local configuration and customization. Having just switched to mercurial from svn, I would like to take advantage of its distributed nature to keep track of local modifications. I have set up each installed server as its own repo (and configured apache not to serve the .hg directories).
My difficulty is that the development tree also contains local configuration, and I want to avoid placing every bit of it in an unversioned config file. So, how do I set things up to avoid propagating local configuration to the master repo and to the installed copies?
Example: I have a long config.ini file that should be versioned and distributed. The "clean" version contains placeholders for the database connection parameters, and I don't want the development server's passwords to end up in the repositories for the installed copies. But now and then I'll make changes (e.g., new defaults) that I do need to propagate. There are several files in a similar situation.
The best I could work out so far involves installing mq and turning the local modifications into a patch (two patches, actually, with logically separate changesets). Every time I want to commit a regular changeset to the local repo, I need to pop all patches, commit the modifications, and re-apply the patches. When I'm ready to push to the master repo, I must again pop the patches, push, and re-apply them. This is all convoluted and error-prone.
The only other alternative I can see is to forget about push and only propagate changesets as patches, which seems like an even worse solution. Can someone suggest a better set-up? I can't imagine that this is such an unusual configuration, but I haven't found anything about it.
Edit: After following up on the suggestions here, I'm coming to the conclusion that named branches plus rebase provide a simple and workable solution. I've added a description in the form of my own answer. Please take a look.
From your comments, it looks like you are already familiar with the best practice for dealing with this: version a configuration template, and keep the actual configuration unversioned.
But since you aren't happy with that solution, here is another one you can try:
Mercurial 2.1 introduced the concept of Phases. The phase is changeset metadata marking it as "secret", "draft" or "public". Normally this metadata is used and manipulated automatically by Mercurial and its extensions, without the user needing to be aware of it.
However, if you made a changeset 1234 which you never want to push to other repositories, you can enforce this by manually marking it as secret like this:
hg phase --force --secret -r 1234
If you then try to push to another repository, it will be ignored with this warning:
pushing to http://example.com/some/other/repository
searching for changes
no changes found (ignored 1 secret changesets)
This solution allows you to
version the local configuration changes
prevent those changes from being pushed accidentally
merge your local changes with other changes which you pull in
The big downside is of course that you cannot push changes which you made on top of this secret changeset (because that would push the secret changeset along). You'll have to rebase any such changes before you can push them.
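For illustration, a hedged sketch of that rebase step (the revision numbers are placeholders, and the rebase extension must be enabled):

hg phase --force --secret -r 1234         # the local-config changeset stays secret
hg rebase --source 1235 --dest default    # move work committed on top of 1234 onto the public head
hg phase --draft -r tip                   # rebase preserves phases, so mark the moved changeset draft again
hg push                                   # only the non-secret changesets go out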
If the problem with a versioned template and an unversioned local copy is that changes to the template don't make it into the local copies, how about modifying your app to use an unversioned localconfig.ini, falling back to a versioned config.ini for missing parameters? That way, new default parameters can be added to config.ini and still propagate into your app.
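For illustration, a minimal sketch of that layout (the file names follow the suggestion above; the section and keys are hypothetical):

# config.ini - versioned; defaults and placeholders only
[database]
host = localhost
password = CHANGE_ME

# localconfig.ini - unversioned; per-deployment overrides
[database]
password = s3cr3t

The app reads localconfig.ini first and falls back to config.ini for any missing key, so new defaults propagate without real credentials ever being versioned.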
Having followed up on the suggestions here, I came to the conclusion that named branches plus rebase provide a simple and reliable solution. I've been using the following method for some time now and it works very well. Basically, the history around the local changes is separated into named branches which can be easily rearranged with rebase.
I use a branch local for configuration information. When all my repos support Phases, I'll mark the local branch secret; but the method works without it. local depends on default, but default does not depend on local so it can be pushed independently (with hg push -r default). Here's how it works:
Suppose the main line of development is in the default branch. (You could have more branches; this is for concreteness). There is a master (stable) repo that does not contain passwords etc.:
---o--o--o (default)
In each deployed (non-development) clone, I create a branch local and commit all local state to it.
...o--o--o (default)
\
L--L (local)
Updates from upstream will always be in default. Whenever I pull updates, I merge them into local (n is a sequence of new updates):
...o--o--o--n--n (default)
\ \
L--L--N (local)
The local branch tracks the evolution of default, and I can still return to old configurations if something goes wrong.
On the development server, I start with the same set-up: a local branch with config settings as above. This will never be pushed. But at the tip of local I create a third branch, dev. This is where new development happens.
...o--o (default)
\
L--L (local)
\
d--d--d (dev)
When I am ready to publish some features to the main repository, I first rebase the entire dev branch onto the tip of default:
hg rebase --source "min(branch('dev'))" --dest default --detach
The previous tree becomes:
...o--o--d--d--d (default)
\
L--L (local)
The rebased changesets now belong to branch default. (With feature branches, add --keepbranches to the rebase command to retain the branch name). The new features no longer have any ancestors in local, and I can publish them with push -r default without dragging along the local revisions. (Never merge from local into default; only the other way around). If you forget to say -r default when pushing, no problem: Your push gets rejected since it would add a new head.
On the development server, I merge the rebased revs into local as if I'd just pulled them:
...o--o--d--d--d (default)
\ \
L--L-----N (local)
I can now create a new dev branch on top of local, and continue development.
This has the benefits that I can develop on a version-controlled, configured setup; that I don't need to mess with patches; that previous configuration stages remain in the history (if my webserver stops working after an update, I can update back to a configured version); and that I only rebase once, when I'm ready to publish changes. The rebasing and subsequent merge might lead to conflicts if a revision conflicts with local configuration changes; but if that's going to happen, it's better if they occur when merge facilities can help resolve them.
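For reference, the whole publish round under this scheme boils down to a handful of commands (the master URL is a placeholder; the rebase extension must be enabled):

hg rebase --source "min(branch('dev'))" --dest default --detach   # graft dev onto the tip of default
hg push -r default https://example.com/master-repo                # publish only the default branch
hg update local
hg merge default                                                  # fold the published revs back into local
hg commit -m "merge published changes into local"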
1. Mercurial has (follow-up to comments) selective, hunk-based commits - see the Record Extension
2. Local changes inside versioned public files can easily be handled with the MQ Extension (I do it for site configs all the time). Your headache with MQ
Every time I want to commit a regular changeset to the local repo, I need to pop all patches, commit the modifications, and re-apply the patches. When I'm ready to push to the master repo, I must again pop the patches, push, and re-apply them.
is the result of an unpolished workflow and (some) misinterpretation. If you want to commit without the MQ patches applied, don't do it by hand: add an alias for commit that runs qpop --all followed by commit, and use only this new command. And when you push, you don't need to worry about the MQ state - you push changesets from the repository, not the working-copy state. The local repo can also be protected without an alias, by a pre-commit hook that checks the content.
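For illustration, a hedged sketch of such an alias, written as a shell alias in .hgrc (the alias name cci is made up; shell aliases require a reasonably recent Mercurial):

[alias]
# pop all MQ patches, then commit; re-apply the patches afterwards with hg qpush -a
cci = !$HG qpop --all && $HG commit $@

With that, hg cci -m "message" can never capture the MQ-managed local changes in a regular changeset.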
3. You can try the LocalBranches extension, where your local changes are stored inside local branches (merging branches on changes) - I found this way more troublesome compared to MQ
In short:
How can I use Hg to synchronize repositories between two computers using a flash drive as intermediary?
With more detail:
I often develop code on computers that aren't networked in any way, and I transfer files between these machines using a USB flash drive. Now I would like to develop some software across these machines using Hg repositories on each machine that I can frequently sync-up using the flash drive transfer mechanism.
I'm slightly familiar with Hg, as I use it in the most simple way possible for versioning only my own work on independent machines, but am uncertain as to exactly what I should do to use it to synchronize repositories between two computers using a flash drive as intermediary. Maybe, for example, I need to create a temporary repository on the flash drive (using “clone”), which I then sync with (using “push” and “pull”), in the pattern A→flash, flash→B, B→flash, flash→A? The more specific your answer regarding the sequence of actions and commands, the more useful it will be to me.
Finally, how do I get this process started? Do I need to do something so Hg knows these are all part of one code base? For example, each of my current repositories on the different computers was created independently from a time before I started using Hg, and although all the code is similar, independent changes have been made to each, and the repositories know nothing about each other. If what I need to do with this is different than what I need to do for the ongoing case once I have everything unified, spelling this process out for me as well would also help.
In case it's important, these machines can be running any of Windows, Mac, or Linux, and my versions of Mercurial are slightly different on each machine (though the Mercurial versions could be unified if needed).
What you have described above in terms of using the flash drive as an intermediate storage location should work. My process would be:
initial setup
create repo on computer A (using hg init)
clone the repo from computer A to flash drive
hg clone C:/path/to/repo/A X:/path/to/flash/drive/repo
clone the repo from flash drive to computer B
hg clone X:/path/to/flash/drive/repo C:/path/to/repo/B
working process
edit/commit to repo on computer A
push from computer A to flash drive
hg push X:/path/to/flash/drive/repo
pull from flash drive to computer B
hg pull X:/path/to/flash/drive/repo
edit/commit repo on computer B
push from computer B to flash drive (same commands as above)
pull from flash drive to computer A (same commands as above)
Finally, how do I get this process started? Do I need to do something so Hg knows these are all part of one code base?
Mercurial knows whether two arbitrary repositories have a common ancestor by looking at the SHA1 hash keys of the commits in each repo. In other words, assuming both repos have at least one common hash key in their histories, Mercurial will let you push and pull between them. In your specific case, where both repos are initially unversioned, Mercurial will need some help. The best thing to do is to get both working trees into an identical state, then perform your hg init on one machine and propagate that repository to the other (rather than running an independent hg init on each). Mercurial should handle sharing from this point on.
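Concretely, a hedged sketch of that bootstrap (the paths are placeholders):

cd /path/on/A/project          # after manually making both trees identical
hg init
hg addremove                   # track every file
hg commit -m "initial unified import"
hg clone . /media/usb/project  # seed the flash drive
# on computer B, replace the old unversioned tree with a clone:
hg clone /media/usb/project /path/on/B/project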
When working offline on different machines, it is better to use the bundle command that comes with Mercurial. So, echoing what dls wrote, but with a slightly changed process.
Initial setup as mentioned by dls.
or
Go to your Mercurial repository top directory
Create bundle: hg bundle --base null ../project.hg
Copy the project.hg file to your other computer
Create a directory there
Make it a Mercurial repository: hg init
Incorporate the bundle: hg pull <path/project.hg>
hg update
Check hg log; both repositories will show the same base revisions and tip
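Put together, the bootstrap looks like this (the paths are placeholders):

cd /path/to/project                           # on the first computer
hg bundle --base null /media/usb/project.hg   # full bundle of the repository
mkdir project && cd project                   # on the other computer
hg init
hg pull /media/usb/project.hg                 # incorporate the bundle
hg update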
Workflow using bundle
I use a slightly different workflow. I keep these repositories as distinct repositories.
I'll refer to them as repo1 and repo2.
Suppose that the current tip of repo1 is 4f45839f613c.
You make changes and commit them in repo1
Create a bundle of the changes (the bundle contains all changes since the specified base revision):
hg bundle --base 4f45839f613c changes.bundle
Take it to repo2 by copying the bundle.
You can simply pull the bundle into repo2:
hg pull changes.bundle
If the bundle contains changes that are already present in repo2, these will be ignored when pulling. As long as the bundle doesn't grow too large, this allows you to use the bundle command with the same --base revision again and again to create bundles that include further changes.
About bundles: they are (very well) compressed. The following creates a (compressed) backup of the whole repository:
hg bundle --base null backup.bundle
[Edit : Adding some links on this topic]
http://blog.experimentalworks.net/2010/09/review-remote-changes-offline-in-mercurial/
https://www.mercurial-scm.org/wiki/Bundle
[Edit: What I think is the advantage of using bundles]
Bundles can be created offline, copied, or sent via mail. Pushing to a repo on a flash drive requires the drive to be connected; bundles are easier, since they don't require the two repositories you push and pull between to be available at the same time.
Apart from that, bundles can also be of two types: changeset and incremental. Changeset bundles are complete standalone bundles. You can also use a bundle as a single-file backup of a repository.
I want to do the equivalent of svn export REMOTE_URL with a mercurial repository. What I want at the end is an unversioned snapshot of the repository at the remote URL, but without cloning all of the changesets over to my local machine.
Also, I want to be able to specify a tag in the remote repository to pick this from. If it's not obvious, I'm building a release management tool that pulls from a canonical mercurial repository to build a release file, and it's slow right now because some projects have large, multiple-version binary files committed.
Is this possible? How would one go about it?
It's usually easier (if the remote HG is using the hgweb interface) to just visit the repo in your browser and download a .tgz / .zip / .bz2 of the tip revision. You'll see the links if the remote HG supports this.
If you want the repository, you need all of the revisions that went into the current tip for it to be at all functional.
There are options to hg clone that allow you to fetch a repository up to a certain revision, but none (that I could find) that allow you to get just the tip revision. What you are essentially asking for is a snapshot of the repo.
Edit: To Get A Snapshot
hg clone http[s]://url.to.repo repo.hg
cd repo.hg
hg archive ../repo-snapshot
cd ..
rm -rf repo.hg
The snapshot is now in repo-snapshot.
Yes, this does entail cloning the repo first, which is why I suggested seeing if the remote hgweb supports on the fly downloads of any particular revision. If it does, your problem is solved with something like curl or wget instead of HG.
If not, it's good to let the original repo 'live', since you can update it again later via hg pull and then create another snapshot of a future release. This saves having to start over from scratch when cloning, especially for large repositories with lots of changes.
Also, this is Linux-centric, but you get the gist. Of course, replace http[s] with the desired protocol as needed.
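If the remote hgweb does allow archive downloads, the browser step can be scripted; a hedged sketch (host, repository name, and tag are placeholders, and the URL layout assumes hgweb's standard archive links, enabled with allow_archive):

# fetch an unversioned snapshot of tag v1.0 without cloning any history
wget https://hg.example.com/myrepo/archive/v1.0.tar.gz
tar xzf v1.0.tar.gz   # typically unpacks to myrepo-v1.0/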
Is there any reason you can't maintain a mirror (updated in the background however often you want) of the remote repository on your local machine, then have the release management tool on your local machine run hg archive out of the local clone as necessary? If your concern is user-responsiveness, and not total bandwidth/storage consumed, this offsets the "slow" part to where you won't see it.
Tim Post noted that if you do have the hgweb CGI interface available, you can configure it to serve compressed archives for download (and the interface is consistent enough that you could script that via wget), but if you don't, core Mercurial doesn't have a lot of tools to help you, and the developers have expressed opposition to trying to turn Mercurial into a general rsync-type client.
If you aren't afraid of playing with unofficial add-ons, you could have a look at the FTP Extension. That will force you to push from the server, however.