There are a few possible setups:
1) just clone from the remote repo as needed (each new clone can take 20 minutes and 500MB)
2) clone 2 local repos from the remote repo, both 500MB, 1GB total, so there are always 2 local repos to work with
3) clone 1 local repo from the remote repo, call it 'master', never touch this master, and clone other local repos from it as needed
I started off using (1), but whenever a quick bug fix came up I had to do a clone and wait 20 minutes, so method (2) seemed better, since there are 2 independent local repos available at all times.
But sometimes a repo becomes "weird": a bad merge does damage, and even after it is fixed on the remote repo, any local merge still showing up in hg outgoing will cause damage again when we push. So we just delete that local repo and clone from the remote again to start "fresh", which takes another 20 minutes. (In practice, we can switch to local repo 2, rename local repo 1 to repo_old, and kick off a fresh clone before going to sleep or going home.)
Is (3) the best option? On a Mac, the master takes 500MB and 20 minutes, but the other local clones are very fast and take much less than 500MB, because Mercurial uses hard links for local clones (how do I find out how much disk space a clone uses, excluding the hard-linked content?).
And if using (3), how do we commit and push? Suppose we clone from the remote repo to a local "master", and then make local clones "clone01", "clone02", "clone03", and so on. Do we work inside clone01, and then, when an urgent fix is needed, go to master, do an hg pull and hg update, go to clone02, also pull and update, fix the bug there, test it, hg commit, hg push to master, and then go to master and do an hg push there? And when clone01's project is done, again go to master, pull, update, go to clone01, pull, update, merge, test, commit, push, go to master, and push to the remote repo? That's a lot of steps!
A fourth option might work better in your case: Mercurial Queues kept in a local Mercurial repository.
Using MQ you can:
Clone the master repository locally.
Work on your code and keep your changes isolated in patches.
When new updates from upstream are available, remove your patches, apply the updates, and then re-apply your patches on top of the new updates.
Once you're happy with your work, fold it into your local repository and push it upstream.
You don't have to keep the patches in a local repository, but it's a nice bonus option that is worth considering.
Chapter 12 of Mercurial: The Definitive Guide explains the process in fairly good detail.
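A rough sketch of that cycle (the patch name is arbitrary):
hg qnew my-feature     # start a new patch
...edit code, then record the changes into the patch...
hg qrefresh
...upstream publishes new changesets...
hg qpop -a             # unapply all your patches
hg pull -u             # fetch the updates and update the working directory
hg qpush -a            # re-apply your patches on top
...happy with the result: turn the patches into regular changesets...
hg qfinish --applied
hg push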
I don't know that your understanding of the space considerations is correct. When cloning a local repository, Mercurial uses hard links for the .hg directory, the actual repository, which takes up no additional space. The working directory does take up space (though hopefully not the full 500MB!), but the .hg directory only looks like it takes up space depending on which tools you use to check.
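One way to check the real usage (a sketch; du counts each hard-linked file only once per invocation, on both macOS and Linux, so the figure reported for the second directory excludes content shared with the first):
du -sh master clone01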
If you do a clone -U, you create a clone without a working directory; it takes up almost no additional space and is created almost instantly.
I always keep a clone -U of the central repo in an unmodified state and then create clones off of that as needed. I push directly from those clones back to the remote repository.
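A minimal sketch of that layout (the URL and directory names are placeholders):
hg clone -U https://hg.example.com/project master    # pristine copy, no working directory
hg clone master clone01                              # cheap, hard-linked local clone
hg clone master clone02
cd clone01
...work, commit...
hg push https://hg.example.com/project               # push straight back to the remote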
Mercurial Queues look really powerful, but I've never given myself the time to read all that documentation just to be able to put my current work aside while I fix a small bug.
I use the attic extension.
It'll be like this:
...working happily, but then there is a quick bug fix...
$ hg shelve work
...quickly fix the bug...
$ hg ci
$ hg unshelve
...continue with work...
Sometimes I get an idea but have no time to really play with it. To keep myself from forgetting it:
...working happily, idea drops in...
$ hg shelve work
...start a unit test for the idea, or some other unfinished piece of code, enough to sketch it out...
$ hg shelve idea
$ hg unshelve work
...continue with work...
$ hg ls
idea
*C work
Quite often in Mercurial I need local changes to a local repository which should never, ever enter the main repository. This could be (not a complete list):
config files which need to differ on my PC
markers for my own builds, so that I can distinguish versions compiled by me from official builds
.hgsub, which needs to differ on a PC without network access
Using TortoiseHG, my work scheme looks like this at the moment:
Commit all relevant changes, leaving the changes to be kept local uncommitted
Shelve changes to be kept local
Push or Pull
Unshelve
This works until I forget to exclude the changes to be kept local, which will happen sooner or later...
Then I have to waste time restoring the previous state.
Is there a better way to do this, e.g. with certain extensions?
Thanks for your help
You can use the secret hg phase for this.
commit your local changes
run hg phase -sf .
This will mark the current changeset secret. Secret changesets are not pushed.
A caveat with secret phases:
All secret changesets must always be on top of the stack. When you pull changes, you will need to rebase secret changes back to the top. This is typically as easy as hg rebase.
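A sketch of the whole cycle (the commit message is arbitrary; hg rebase requires the rebase extension to be enabled):
hg commit -m "local-only tweaks"    # commit the changes that must stay local
hg phase -sf .                      # mark the changeset secret; hg push now skips it
...later, after new upstream changes arrive...
hg pull
hg rebase                           # move the secret changeset back to the top of the stack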
I am working with Hg and TortoiseHg on a project and pushing every couple of days to a remote repo on Bitbucket. When I tried to push changes today, I got an error saying that I was trying to create a new head. I thought this was odd since I am definitely the only person working on the project and I work from one PC.
I pulled to see what was going on with the remote repo, and after pulling the local repo tree looks like this: [screenshot of the local repository graph]
At the Bitbucket end the repo looks like this: [screenshot of the Bitbucket repository graph]
Can someone help me understand why I got two heads if I'm the only one working on the project and why Hg is not recognising that Rev.40 and Rev.36 are the same revision?
How do I fix this now? If I strip 40 locally, what will happen when I try to push changes to the remote repo? Will it strip the revision at the remote repo too?
You can try this (on a cloned repo, if you prefer to be safe), starting from a clean working directory:
hg co 40
hg backout -r 40
hg merge 39
hg push
Revision 40 is the one that exists in the remote repo, before the amendment.
So you check it out, back it out (put its inverse on top of it), then merge your ongoing work (left at revision 39); there should be no merge conflicts at all, since all the changes are incoming.
Then, when satisfied, you push.
===
why I got two heads
This part was already addressed in the comments: you realized you amended the commit after pushing it, hence the apparent duplicate.
How do I fix this now?
You do a merge in your local repo to get rid of the two heads; having only one, the remote repo won't complain about it.
If you like, you can back out the before-amendment commit to be safe (or not, if you really know how to resolve the merge conflicts so that your local changes prevail).
But in both of these cases the non-amended commit survives, so it will show up in your history; beware.
If I strip 40 locally, what will happen when I try to push changes to the remote repo?
It is going to be there unless you strip it there as well (directly on the remote repo).
Will it strip the revision at the remote repo too?
no, it won't
As commit r40 is only present locally, there is no drawback in stripping it from your local repository, IF you are sure that it doesn't contain changes you want to keep. It won't propagate to your Bitbucket repo then.
However, the two commits are not the same as far as the repository is concerned: they probably differ at least by a small time difference (if not some content, e.g. whitespace), thus they are unique; only you can tell how they came into existence.
I have a Mercurial repository my_project, hosted at Bitbucket. Today I made a number of changes and committed them to my local repository, but didn't push them out yet.
I then majorly stuffed up and fatfingered rm -rf my_project (!!!!!).
Is there some way I can retrieve the changes that I committed today, given that I hadn't pushed them out yet? I know a day's worth of commits doesn't sound like much, but it was!
All the other clones I have of this project are only up-to-date to the most recent push (which didn't include today's changes).
cheers.
Mercurial cannot save you. Mercurial's data is stored in a hidden directory in the base of your project folder, in your case probably at my_project/.hg. Your recursive delete will have trashed this folder as well.
So maybe a file recovery tool?
No. The changes are only stored in the local repository directory (the .hg directory therein) until you've pushed. They're never put anywhere else (not even /tmp).
There is a possibility that you'll be able to recover the deleted files from the disk, though; search around for instructions and tools for doing that.
I'm afraid the commit was deleted together with the working copy, and file recovery tools are your only option to recover the missing .hg folder. I see you could recover the code from the install. Great!
If you're afraid of this happening again, then you could install a crude hook like
[hooks]
post-commit = R=~/backup-repos/$(basename "$PWD");
    (hg init "$R"; hg push -f "$R") > /dev/null 2>&1 || true
That will forcibly push a copy of all your commits to a suitable repo under ~/backup-repos. The -f flag ensures that you will push a backup even if you play with extensions like rebase or mq that modify history. It will also allow pushing changesets from unrelated repos into the same backup repo — imagine two different repos named foo. So the backup repositories will end up with a gigantic pile of changesets after a while and you might want to delete them once in a while.
I tested this briefly and for everyday work I don't think you'll notice the overhead of the extra copy and you might thank yourself later :-)
I have a Mercurial repository at bitbucket.org and a clone of it on my workstation. The clone has some uncommitted (unfinished) work in it. I have to copy this clone to my laptop because I will be on a trip for one or two weeks and want to do some work.
Is there a simple and safe way to copy the repository with its uncommitted changes to another device? I know I could clone the repo from the workstation to my laptop, but this won't copy the uncommitted work.
Simply copy the entire repository's folder.
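For example (hostname and paths are placeholders), something like:
rsync -a workstation:/path/to/my_project/ ~/my_project/
This copies the working directory, the uncommitted changes in it, and the .hg directory in one go.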
Just commit that work. The idea that work needs to be finished before it can be committed is left-over CVS/SVN thinking. Commit it, then update to its parent and work on whatever else you want to work on. When the work is eventually done, you push a changegroup, not individual changesets, so no one will ever have the non-compiling state at the end of those interstitial changesets checked out.
Avoiding committing work in Mercurial (using shelve, attic, copying repos, etc.) is the only way to lose work -- avoid it.
I prefer my first answer (commit it), but if you positively can't bring yourself to commit unfinished work, then you should be using Mercurial Queues with a patch queue that lives in its own repository. This is easily done with:
hg qinit --create-repo
Then you import your uncommitted changes as a patch using:
hg qnew --force name-for-this-work
then you can:
hg qcommit -m "work in progress"
Then you can qclone that repo and get both the work in progress and the base repository on which it's overlaid. More details are available in the Mercurial book's chapter on queues.
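Put together, the whole sequence might look like this (hostname and paths are placeholders):
hg qinit --create-repo              # versioned patch queue in .hg/patches
hg qnew --force work-in-progress    # capture the uncommitted changes as a patch
hg qcommit -m "work in progress"    # commit the patch into the queue repository
...on the laptop, clone both the repo and its patch queue...
hg qclone ssh://workstation/path/to/my_project my_project
cd my_project
hg qpush -a                         # apply the work-in-progress patch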
Really, though, there's just never a good reason to have uncommitted work for more than an hour or two.
I'm trying to clone a rather large subversion repository with hgsubversion.
hg clone --startrev 8890 svn+https://my.reposit.ory/trunk trunk_hg
After about an hour, the clone operation aborts with an out of memory message:
[r20097] user: description
abort: out of memory
Is it possible to specify an end revision for the clone operation and get the remaining revisions with a pull? Or to somehow break up the clone into smaller steps?
You can specify a stop revision with -r for clone, as others have suggested. Another option (if you kept the clone where things crashed) would be to just run hg pull in the trunk_hg copy. You might have to edit or create .hg/hgrc yourself to add a [paths] entry with default = svn+https://my.reposit.ory/trunk, since I think we add that at the end of the cloning process. Maybe run hg svn rebuildmeta before your pull, just for good measure, in case hgsubversion's tracking metadata got hosed when the OOM happened.
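That entry would look something like this in trunk_hg/.hg/hgrc:
[paths]
default = svn+https://my.reposit.ory/trunk
and then, from inside trunk_hg:
hg svn rebuildmeta
hg pull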
I hope this helps!
http://www.selenic.com/mercurial/hg.1.html#clone
You could try using the -r <revid> flag to clone only up to a particular changeset. Though that may or may not work with hgsvn.
Cloning with a limited range of revisions and then pulling is the recommended method and I can confirm that it works flawlessly for svn repositories in the several GB size range.
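A sketch of that approach (the revision numbers are arbitrary, and assume hgsubversion accepts them on pull the way it does on clone):
hg clone -r 10000 svn+https://my.reposit.ory/trunk trunk_hg    # first chunk of history
cd trunk_hg
hg pull -r 15000    # pull further history in chunks
hg pull -r 20000
hg pull             # finally, everything up to the latest revision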
Here is a workaround to clone the whole svn repo:
1. start cloning
2. abort it immediately (Ctrl+C on Windows)
3. then run hg pull
4. when you run out of memory again, repeat step 3 until you have pulled all the commits
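If you want to automate the retrying, a rough sketch (assumes each aborted pull still makes progress before it runs out of memory):
cd trunk_hg
until hg pull; do
    echo "pull aborted, retrying..."
done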