I'm trying to find the fastest way to get all the files from a particular changeset; I don't care about the history.
The best thing I've found so far is to cd into the repository, and then
hg archive -r <changeset> /path/to/put/files
But I'm finding even that is quite slow on a large repository, even though it doesn't create the .hg folder.
I was thinking maybe I could use hg update instead, because that seems to run a bit faster. But I don't want to update the current directory; I want the files written to a new directory outside the current repo. Is that possible?
Or is there some other quick way?
You can try the following to speed things up a bit:
hg archive --config ui.archivemeta=false /path/to/target
This option suppresses the generation of .hg_archival.txt, which can sometimes be relatively expensive to generate.
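Combined with the revision syntax from the question, that would be something like:

hg archive --config ui.archivemeta=false -r <changeset> /path/to/target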
Also, hg update can be faster in an existing working directory, because only the files that changed relative to the current revision have to be rewritten. A fresh update after an hg clone -U or hg share -U, however, should not be any faster than an archive.
You can try a recursive hardlink (via cp -al or a similar tool) followed by hg update, but then you have to be extra careful, because modifying files in one checkout can affect the ones in the other. This is only an option if you want a read-only copy or if you know (as in, positively know, not just guess) that you won't do any in-place modification of any of the files.
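For concreteness, a minimal sketch of that approach on Linux, assuming GNU cp and a local repository directory named repo (the names are placeholders):

cp -al repo repo-copy          # recursive hardlink copy; near-instant, copies no file data
cd repo-copy
hg update -C -r <changeset>    # populate the new working directory at the desired revision

Treat the result as read-only: any file still hardlinked between the two trees is shared, so an in-place edit in one shows up in the other.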
Related
I am using Mercurial (TortoiseHG) as the version-control system for our source code. I am unable to remove a file from the repository from the command line. I see several people on the web giving solutions like:
1. hg rm
2. hg remove
These commands remove the file from the working directory. However, when I pull the repository in a separate place, the file (supposedly deleted) still shows up. I also tried pushing the repo after performing the above commands, with:
hg push
But the files are not really removed from the repository. Do I need to configure anything extra for this removal operation?
As it turns out, I have the habit of committing at the leaf level, and thus was never committing from the root repo folder. Sorry for the miscommunication.
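For reference, hg rm only schedules the deletion; it reaches other clones once it is committed and pushed. A minimal sequence, with a hypothetical file name:

hg remove obsolete.txt                 # delete the file and record the removal
hg commit -m "Remove obsolete.txt"     # commit the removal
hg push                                # publish it

After hg pull -u in the other clone, the file disappears from its working directory (its history is retained).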
Say I type hg add in Mercurial, and there are a bunch of untracked files in my working directory that are not ignored. What is the easiest way to un-add all those files without explicitly typing the name of each one?
Can I just un-add them all with one command?
Preface
Always include as much information as possible in a question. As it stands, your question has totally different answers depending on the conditions.
Case One - no local modifications to already-versioned files, only added (and not yet committed) files
hg revert --all will return your working directory to the state after the last commit, undoing all changes in it.
Case Two - local edits you want to keep, plus accidentally added files
Read about filesets in Mercurial.
Use a fileset with the hg forget command, something like hg forget "set:added()".
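A sketch of the two cases side by side (run one or the other, not both):

hg revert --all              # Case One: drop every change since the last commit
hg forget "set:added()"      # Case Two: keep local edits, only un-add the added files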
Use hg revert or hg forget on the files (both do the same for a file you ran hg add on). To avoid typing out the filenames, you can use a fileset like this:
$ hg revert "set:added()"
This will revert the files back to how they looked in the working copy's parent revision, i.e., they will become unknown again.
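For illustration, with a hypothetical file, the status flag goes from A (added) back to ? (unknown):

$ hg add newfile.txt
$ hg status
A newfile.txt
$ hg revert "set:added()"
forgetting newfile.txt
$ hg status
? newfile.txt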
hg revert -r .^ path-to-file will revert the file to its state in the commit before the current one, removing its change from the commit-set.
Then commit and submit (if using jelly fish) and you'll see the file removed from the changeset. The reason .^ works: . names the working directory's parent revision (the commit you just made), and ^ selects that revision's first parent, so .^ is the commit immediately before it.
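If jelly fish isn't part of your workflow, core Mercurial's commit --amend achieves the same; a sketch, with a hypothetical path:

hg revert -r '.^' path/to/file   # restore the file to its state before the last commit
hg commit --amend                # rewrite the last commit without that file's change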
You could always just re-clone your repository and then replace (delete existing and then copy new) the .hg directory in your working folder with the one from the fresh clone (assuming you have no pending commits).
I want to "clear" my working directory for the moment (less space requirements for SSD and drive backups)
Specifically, I want to know if I can update to revision -1 (so that mercurial clears everything that is not itself).
Can this be done using a Mercurial command? (I'll write a script if I have to, but it's advantageous to share a command with others rather than writing scripts that do the "right" thing.)
If you run hg update null, it should remove everything except the .hg directory and any files not tracked by the repository.
If there are untracked files, you can remove them as well using hg purge. Purge is an extension, but it is distributed together with Mercurial, so you just have to enable it.
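Enabling it is one stanza in your ~/.hgrc (or the repository's .hg/hgrc):

[extensions]
purge =

After that, hg purge deletes the untracked files; hg purge --all deletes ignored files too.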
If you have uncommitted changes and don't care about preserving them, hg update -C null will take care of getting rid of them; all you will have left after this are the .hg directory and untracked files.
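Putting the pieces together, a complete clear-everything sequence might look like:

hg update -C null    # empty the working directory, discarding uncommitted changes
hg purge             # remove any untracked files as well (purge extension)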
I have a Mercurial repository, my_project, hosted at Bitbucket. Today I made a number of changes and committed them to my local repository, but didn't push them out yet.
I then majorly stuffed up and fatfingered rm -rf my_project (!!!!!).
Is there some way I can retrieve the changes that I committed today, given that I hadn't pushed them out yet? I know a day's worth of commits doesn't sound like much, but it was!
All the other clones I have of this project are only up-to-date to the most recent push (which didn't include today's changes).
cheers.
Mercurial cannot save you. Mercurial's data is stored in a hidden directory at the base of your project folder, in your case probably my_project/.hg. Your recursive delete would have trashed this folder as well.
So maybe a file recovery tool?
No. The changes are only stored in the local repository directory (the .hg directory therein) until you've pushed. They're never put anywhere else (not even /tmp).
There is a possibility that you'll be able to recover the deleted files from the disk, though; search around for instructions and tools for doing that.
I'm afraid the commit was deleted together with the working copy, and file recovery tools are your only option to recover the missing .hg folder. I see you could recover the code from the install; great!
If you're afraid of this happening again, then you could install a crude hook like
[hooks]
post-commit = R=~/backup-repos/$(basename "$PWD");
    (hg init "$R"; hg push -f "$R") > /dev/null 2>&1 || true
That will forcibly push a copy of all your commits to a suitable repo under ~/backup-repos. The -f flag ensures that you will push a backup even if you play with extensions like rebase or mq that modify history. It also allows pushing changesets from unrelated repos into the same backup repo (imagine two different repos both named foo), so the backup repositories will end up with a gigantic pile of changesets after a while, and you might want to clean them out occasionally.
I tested this briefly and for everyday work I don't think you'll notice the overhead of the extra copy and you might thank yourself later :-)
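To check that it works, commit something and look at the backup (my_project being the hypothetical repo name from above):

$ hg commit -m "some work"
$ hg log -R ~/backup-repos/my_project -l 1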
So, I'm trying to checkout just the TestNG plugin from the Netbeans contrib repository. (Or is it module? I'm new to Mercurial, so I don't really know the lingo yet.)
When I run the following command...
hg clone http://hg.netbeans.org/main/contrib/
...I get the entire repository, which contains all of the contrib plug-ins. Is it possible to just pull this location?
http://hg.netbeans.org/main/contrib/file/tip/testng/
Thanks!
This concept is called "narrow cloning" and no, it's not possible at the moment in Mercurial.
It's on the radar of some of us that contribute to Mercurial but it's a hard problem to solve. For example:
How do you calculate the hash of any new commits you make if you don't have all of the files in the repo?
What happens if you try to view the history of a file in contrib/testng if that file was moved from another folder?
I'm not sure, but I think the answer in the general case is "probably not".
If the repository is local (it doesn't sound like it is in your case), you can do something like:
hg archive -R /path/to/my/repo -I /path/to/my/repo/folder/i/want export-folder-name
(The command would need to be something that exports non-VC'd files, rather than creating a partial repo, since the .hg stuff is stored once at the toplevel, rather than in pieces in each folder as SVN does.)
It doesn't work on remote repositories, though. Neither does "hg log", and the hg folks explained why:
Imagine I send a log -p command to http://www.kernel.org/hg/linux-2.6, which is approaching 100k changesets. At one diff per second (lots of seeking), this will take about 3 hours of CPU/disk time on the server, never mind metric tons of bandwidth. It would be faster and simpler for everyone just to clone the repo and do the log locally.
I suspect hg archive can't work remotely for the same reason.