In short:
How can I use Hg to synchronize repositories between two computers using a flash drive as intermediary?
With more detail:
I often develop code on computers that aren't networked in any way, and I transfer files between these machines using a USB flash drive. Now I would like to develop some software across these machines using Hg repositories on each machine that I can frequently sync-up using the flash drive transfer mechanism.
I'm slightly familiar with Hg, as I use it in the simplest way possible for versioning only my own work on independent machines, but I am uncertain exactly what I should do to use it to synchronize repositories between two computers using a flash drive as intermediary. Maybe, for example, I need to create a temporary repository on the flash drive (using "clone"), to and from which I then sync (using "push" and "pull"), doing this as A→flash, flash→B, B→flash, flash→A? The more specific your answer is about the sequence of actions and commands, the more useful it will be to me.
Finally, how do I get this process started? Do I need to do something so Hg knows these are all part of one code base? For example, each of my current repositories on the different computers was created independently, from a time before I started using Hg, and although all the code is similar, independent changes have been made on each machine, and the repositories know nothing about each other. If this initial unification requires something different from the ongoing case once everything is unified, spelling out that process as well would also help.
In case it's important, these machines can be running any of Windows, Mac, or Linux, and my versions of Mercurial are slightly different on each machine (though the Mercurial versions could be unified if needed).
What you have described above in terms of using the flash drive as an intermediate storage location should work. My process would be:
initial setup
create repo on computer A (using hg init)
clone the repo from computer A to flash drive
hg clone C:/path/to/repo/A X:/path/to/flash/drive/repo
clone the repo from flash drive to computer B
hg clone X:/path/to/flash/drive/repo C:/path/to/repo/B
working process
edit/commit to repo on computer A
push from computer A to flash drive
hg push X:/path/to/flash/drive/repo
pull from flash drive to computer B
hg pull X:/path/to/flash/drive/repo
edit/commit repo on computer B
push from computer B to flash drive (same commands as above)
pull from flash drive to computer A (same commands as above)
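Put together, one full round trip of the working process above might look roughly like this (a sketch only; the drive letter and paths are the placeholder paths from the setup steps, and hg pull does not touch the working copy, hence the explicit hg update):
# on computer A, after committing your changes
hg push X:/path/to/flash/drive/repo
# move the drive to computer B, then from inside repo B:
hg pull X:/path/to/flash/drive/repo
hg update
# ...edit and commit on B, then push back to the drive:
hg push X:/path/to/flash/drive/repo
# move the drive to computer A, then from inside repo A:
hg pull X:/path/to/flash/drive/repo
hg update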
Finally, how do I get this process started? Do I need to do something so Hg knows these are all part of one code base?
Mercurial knows if two arbitrary repositories have a common ancestor by looking at the SHA1 hash keys of the commits in each repo. In other words, assuming both repos have at least one common hash key in their histories, Mercurial will attempt to merge them. In your specific case, where both repos are initially un-versioned, Mercurial will need some help. The best thing to do would be to get to a place where both repos are identical and then perform your hg init. Mercurial should handle sharing from this point on.
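One way to read that advice, as a sketch only (paths are the placeholder paths used above), is: copy files around until the source trees on A and B are identical, version the tree once, and then seed every other machine from that single repository so they all share the same root changeset hash:
# on computer A, in the now-identical source tree
hg init
hg add
hg commit -m "initial unified import"
# seed the flash drive, then computer B, from this one repository
hg clone C:/path/to/repo/A X:/path/to/flash/drive/repo
# on computer B, clone into a fresh directory (keep or discard the old independent repo)
hg clone X:/path/to/flash/drive/repo C:/path/to/repo/B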
When working offline on different machines, it is better to use the bundle command that comes with Mercurial. So, echoing what dls wrote, but with a slightly changed process.
Initial setup as mentioned by dls, or alternatively:
Go to your Mercurial repository top directory
Create bundle: hg bundle --base null ../project.hg
Copy the project.hg file to your other computer
Create a directory there
Make it a Mercurial repository: hg init
Incorporate the bundle: hg pull <path/project.hg>
hg update
Check hg log; both repositories will show the same base revisions and tip
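Put together, the bundle-based setup above looks roughly like this (a sketch; project.hg and the directory name are just placeholders):
# on the first computer, inside the repository's top directory
hg bundle --base null ../project.hg
# copy project.hg to the other computer, then there:
mkdir project
cd project
hg init
hg pull ../project.hg
hg update
hg log    # should show the same revisions and tip as the original repo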
Workflow using bundle
I use a slightly different workflow. I keep these repositories as distinct repositories.
I refer to them as repo1 and repo2.
Suppose that the current tip of repo1 is 4f45839f613c.
You make changes and commit them in repo1
Create a bundle of the changes:
Command (this bundle contains all changes since the specified base revision):
hg bundle --base 4f45839f613c changes.bundle
Take it to repo2 by copying the bundle.
You can simply pull the bundle to repo2 :
Command :
hg pull changes.bundle
If the bundle contains changes that are already present in repo2, these will be ignored when pulling. As long as the bundle doesn't grow too large, this allows you to use the bundle command with the same --base revision again and again to create bundles that include further changes.
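For illustration, repeating the cycle with the same base revision might look like this (using the example hash from above; changesets repo2 already knows are simply skipped on pull):
# in repo1, after more commits
hg bundle --base 4f45839f613c changes.bundle    # now contains the old and the new changesets
# copy changes.bundle over, then in repo2:
hg pull changes.bundle
hg update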
About bundles: they are (very well) compressed. The following command creates a (compressed) backup of the repository:
hg bundle --base null backup.bundle
[Edit : Adding some links on this topic]
http://blog.experimentalworks.net/2010/09/review-remote-changes-offline-in-mercurial/
https://www.mercurial-scm.org/wiki/Bundle
[Edit: What I think is the advantage of using bundles]
Bundles can be created offline and then copied or sent via mail. Pushing to a repo on a flash drive requires the drive to be connected; bundles are easier since they don't require the two repositories you push and pull between to be available at the same time.
Apart from that, bundles can also be of two types: changeset and incremental. Changeset bundles are complete, standalone bundles. You can also use a bundle as a single-file backup.
Related
I have a collection of Mercurial repositories on a network share. To enable offline work, I want a local copy of this collection on my laptop, and an easy way to synchronize the two when I'm online.
For this, I wrote a quick script that automatically synchronizes each local repository with the corresponding remote repository (push and pull), but it's missing a couple of desirable features:
automatic cloning of new repositories from the local to the remote collection (and vice versa)
the ability to organize (move/rename) a local repository and have the change applied on the remote side as well, the next time I synchronize
the ability to synchronize hg strip and other commands that rewrite repository history
the ability to synchronize against a hgwebdir collection or even Bitbucket
Are there any existing solutions that provide some (or all) of these features?
To my knowledge nothing like this exists. The safest way to move changesets back and forth between repositories is always hg push and hg pull, and neither of those commands operates on more than one source or destination repository.
For backup purposes I've done something like this before:
# push every local repository (any directory containing .hg) to an off-site backup
for thedir in $(find . -type d -name .hg) ; do
    repopath=$(dirname "$thedir")
    # -R selects the local repo to push from; the argument is the destination
    hg -R "$repopath" push "ssh://mybackupserver//path/to/backups/$(basename "$repopath")"
done
which pushes all local repos to off-site backups. In theory you could do both push and pull, and if necessary an init/clone, but you'll start running into edge cases pretty quickly.
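As a very rough sketch of that push/pull/init idea, under the same hypothetical server name and backup path as above (and ignoring the edge cases just mentioned):
for thedir in $(find . -type d -name .hg) ; do
    repopath=$(dirname "$thedir")
    remote="ssh://mybackupserver//path/to/backups/$(basename "$repopath")"
    # create the remote repo on first use; hg init aborts harmlessly if it already exists
    ssh mybackupserver "hg init /path/to/backups/$(basename "$repopath")" 2>/dev/null
    # then exchange changesets in both directions
    hg -R "$repopath" pull "$remote"
    hg -R "$repopath" push "$remote"
done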
A noob question... i think
I use Mercurial for my project on my laptop. How do I submit the project to an online server like codeplex?
I'm using TortoiseHg and I can't find the upload interface for submitting the project online...
From the command line, the command is:
hg push <url>
to push changes to a remote repository.
In TortoiseHg, this is accessed through the "Synchronize" function, which seems to show up if you right-click in a Windows Explorer window but not on any file. It's also available in the workbench; the icon is 2 arrows pointing in a circle.
For these things, I find the best way to go is to use the command-line interface. TortoiseHg is OK if you need to perform some common operations from the file browser, and it's a nice tool for visualizing some aspects of your repository, but it doesn't implement all of Mercurial's features in full detail, and it renames and bundles some operations for no apparent reason.
I don't know how things work at codeplex, but I assume it is similar to bitbucket or github, in which case here's what you'd do:
Create an empty repository on the remote end (codeplex / bitbucket / ...).
Find the remote repository's URL - for bitbucket, it is https://bitbucket.org/yourname/project, or ssh://hg@bitbucket.org/yourname/project.
From your local repository, commit all pending changes, then issue the command: hg push {remote_url}, where {remote_url} is the URL of the remote repository. This will push all committed changes from your local repository to the remote repository.
Since the remote's head revision (an empty project) is the same as the first revision in your local copy (because all hg repositories start out empty), mercurial should consider the two repositories related and accept the push.
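As a concrete sketch of those steps from the command line (the URL below is just the bitbucket example format from above; substitute whatever URL codeplex gives you):
# from inside your local repository, after committing all pending changes
hg push https://bitbucket.org/yourname/project
# optionally record the URL in .hg/hgrc so a plain "hg push" works from then on:
# [paths]
# default = https://bitbucket.org/yourname/project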
For an introductory guide to command-line mercurial, I recommend http://hginit.com/
We are a small team of developers working on a web application which is published using a web server that is only accessible through FTP.
Our workflow is the following:
A developer works on some requested feature locally
When it's done, commits it and pushes to a 'central' repository
A few times a day, one of the developers publishes the changed files to a testing website, to let key users see how the features have been implemented
Once a week, we deploy to our production site
As our web server doesn't support SSH, we can't push changesets and update on the server, so we created a custom script which transfers the changed files through FTP.
Each time we use that script a new tag is created, so we know (using hg diff) the difference between tags (a release, for us).
It was all fine until now, when we introduced branches in our workflow to let a developer work on radical changes in the code while still contributing the small daily changes which are published to production.
The problem is that hg diff doesn't support branches (or it seems that support is still in development).
So, what would be the best way to do it? Some options we have been thinking about are:
Mounting the FTP server as a volume locally (using MacFUSE or similar) and using Mercurial push/update, but that would be very slow.
Playing around with bundles to see if they can help us, but that seems quite complicated.
Example
$ hg tag qa-001 /* init to see differences - QA site */
$ hg tag prod-001 /* init to see differences - production site */
$ hg ci -m "working on a stable feature"
$ hg tag qa-002
$ hg ci -m "change on the stable feature"
$ hg tag qa-003
$ hg tag prod-002
$ hg ci -m "another change on stable"
$ hg pull ../CentralRepo /* where there is another branch with unstable files */
With the last operation, a new head is created, so now there are two heads (the stable and the unstable branch).
$ hg diff -r qa-003 -r tip
The result of hg diff shows the unstable files without doing the merge.
Many thanks for your comments
In your example, you are creating tags, not (named) branches. Tags won't help you create separate lines of development: they are just stand-alone identifiers assigned to particular revisions.
Creating branches
To start using branches, you probably want to review some tutorials, such as:
Chapter 8. Managing releases and branchy development (from Bryan O'Sullivan's book)
A Guide to Branching in Mercurial (Steve Losh)
Based on your description, you probably want to create prod and qa branches based on your current default branch, as well as any feature / topic branches you might want for radical changes.
Once you have these branches in place, it's very easy to compare them, merge between them, see what changes are pending from one to the other, and so on as your workflow demands.
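A short sketch of what that could look like, using standard named-branch commands (the branch names qa and prod are simply the ones suggested above):
# create the long-lived branches off the current default branch
hg branch qa
hg commit -m "open qa branch"
hg update default
hg branch prod
hg commit -m "open prod branch"
# compare the two lines of development
hg diff -r prod -r qa
# merge qa into prod when releasing
hg update prod
hg merge qa
hg commit -m "merge qa into prod for release"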
Bundles
Playing around with bundles to see if they can help us, but that seems quite complicated.
If you only have FTP access, then bundles probably won't help you. You could upload a bundle to the server via FTP, but you would need to be able to run hg on the server to unpack the bundle into a repository.
Update because of new insights: Upon seeing this question five years later, I realise that this stems from trying to use a version control system as a package manager. This of course leads to all sorts of unexpected issues, and we shouldn't be using it that way. If you're reading this question, I suggest searching for a package manager for your preferred language.
My original question: I'm currently in the process of moving from Subversion to Mercurial, and I have to say I don't regret that decision. However, when trying to convert my project, I ran into a problem with Mercurial which I can't seem to get fixed. I have two distinct projects: one is a framework, and the other is an application that relies on that framework. Here's what the repositories look like:
The Framework repository:
docs/
deploy/
lib/
tests/
The Application repository:
application/
config/
lib/
tests/
www/
What I'd like is for the application's lib/ directory to contain a copy of the framework's lib/ directory. I used to do this using svn:externals. Now, I am aware that Mercurial supports the concept of subrepositories, but that doesn't seem like the "correct" solution, as it doesn't actually pull in the lib/ directory the way I wanted: you'll still have to pull and push changes manually. That, plus once you clone the framework repository, you'll get all of it, not just the lib/ directory. I only need the lib/ directory, not the tests or the docs.
Now, I thought up two different solutions to this problem, but I wonder which is the best. The first solution would be to clone the framework into a different directory altogether and create a symlink in the application's lib/ directory which points to the framework's lib/ directory. Putting the symlink in .hgignore should make sure all is well, I think? That means you could edit the framework's code and commit that, and you could edit the application's code and commit that, too.
The other option is to have multiple repositories. The framework gets pulled as a whole, which means you'll get the docs/, deploy/, tests/ etc. directories, which are not needed for using the framework. I thought maybe creating a repository purely for the library might be a solution, although I sincerely doubt it, as the unit tests are very dependent upon the library itself.
Does anyone know a decent solution for this problem?
You should put separate components in their own repositories. Then, when you create an application, you can use the convert extension to create a pullable framework repository out of the normal one:
$ hg convert --filemap map.txt framework new-framework
with map.txt containing the renames/includes/excludes (the following one should only include the lib directory and move everything in there to the repository root):
include lib
rename lib .
From the application repo, you can now just pull the framework repo (use -f the first time, since the repositories will likely have nothing to do with each other).
$ cd project
$ hg pull -f ../new-framework
$ hg merge
Now, when the development goes on, you just have to re-create the converted repo every time before you pull and you're good to go. We actually have a hook on our framework repository that re-creates the converted repo on every changegroup (= every push).
This way you have both work areas (app and framework) in their own repositories, while the app repo contains the complete framework history and is able to be updated by simply pulling from the converted repo.
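As a hedged sketch of such a hook (the paths and map.txt are the hypothetical ones from the example above), placed in the framework repository's .hg/hgrc:
[hooks]
# after every push into this repo, update the lib-only converted repo
changegroup = hg convert --filemap /path/to/map.txt . /path/to/new-framework
Since hg convert is incremental when the destination already exists, re-running it like this only converts the changesets that arrived with the push.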
You should separate out the libraries into a separate repository if you need to refer to only that.
Provided you actually do need to only refer to that. What is the problem with referring to the repository containing the lib directory + other stuff, other than "it doesn't feel right"?
As for "still have to pull and push manually", when you pull your main clone, it'll pull the subrepos as well, but it won't update them to a newer revision, which is a good thing, you need to do that manually, just as you should with Subversion.
I want to do the equivalent of svn export REMOTE_URL with a mercurial repository. What I want at the end is an unversioned snapshot of the repository at the remote URL, but without cloning all of the changesets over to my local machine.
Also, I want to be able to specify a tag in the remote repository to pick this from. If it's not obvious, I'm building a release management tool that pulls from a canonical mercurial repository to build a release file, and it's slow right now because some projects have large, multiple-version binary files committed.
Is this possible? How would one go about it?
It's usually easier (if the remote Hg is using the hgweb interface) to just visit the repo in your browser and download a .tgz / .zip / .bz2 of the tip revision. You'll see the links if the remote Hg supports this.
If you want the repository, you need all of the revisions that went into the current tip for it to be at all functional.
There are options to hg clone that allow you to fetch a repository up to a certain revision, but none (that I could find) that allow you to get just the tip revision. What you are essentially asking for is a snapshot of the repo.
Edit: To Get A Snapshot
hg clone http[s]://url.to.repo repo.hg
cd repo.hg
hg archive ../repo-snapshot
cd ..
rm -rf repo.hg
The snapshot is now in repo-snapshot.
Yes, this does entail cloning the repo first, which is why I suggested seeing if the remote hgweb supports on the fly downloads of any particular revision. If it does, your problem is solved with something like curl or wget instead of HG.
If not, it's good to let the original repo 'live', since you can update it again later via hg pull and then create another snapshot of a future release. This saves having to start over from scratch when cloning, especially for large repositories with lots of changes.
Also, Linux centric, but you get the gist. Of course, replace http[s] with the desired protocol as needed.
Is there any reason you can't maintain a mirror (updated in the background however often you want) of the remote repository on your local machine, then have the release management tool on your local machine run hg archive out of the local clone as necessary? If your concern is user-responsiveness, and not total bandwidth/storage consumed, this offsets the "slow" part to where you won't see it.
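A rough sketch of that mirror-plus-archive approach (the URL, paths, and tag name are all placeholders):
# one-time setup of a local mirror of the canonical repository
hg clone https://canonical.server/project /var/mirrors/project
# periodic background refresh, e.g. from cron
hg -R /var/mirrors/project pull
# when the release tool needs an unversioned snapshot of a tagged release
hg -R /var/mirrors/project archive -r sometag /tmp/project-sometag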
Tim Post noted that if you do have the hgweb CGI interface available, you can configure it to let you pull compressed archives down and unpack them (and the interface is consistent enough that you could script that via wget), but if you don't, core Mercurial doesn't have a lot of tools to help you, and the developers have expressed opposition to trying to turn Mercurial into a general rsync-type client.
If you aren't afraid of playing with unofficial add-ons, you could have a look at the FTP Extension. That will force you to push from the server, however.