The Jenkins Mercurial plugin runs an hg log command at the beginning of each build to determine which commits are new for that build. Here's an example:
hg log --template "<changeset node='{node}' author='{author|xmlescape}' rev='{rev}' date='{date}'><msg>{desc|xmlescape}</msg><added>{file_adds|stringify|xmlescape}</added><deleted>{file_dels|stringify|xmlescape}</deleted><files>{files|stringify|xmlescape}</files><parents>{parents}</parents></changeset>\n" --rev pcdmis2015:0 --follow --prune 4e2c98f139772300206e87349c4d7b63e1a17d05 --encoding UTF-8 --encodingmode replace
On my old, out-of-warranty Win7 machines, this command takes between 20 and 90 seconds to complete, depending on the machine.
But on my new Win10 virtual machines, which have been faster in every other regard so far, the same command in the same repository takes about 4.5 hours.
Why might this be? What could be happening that takes so long?
Is there any way to overcome or ameliorate this problem?
It could be different Mercurial installations (standalone vs. installed as a Python package).
It could be different Python versions or configurations (if Python is used).
It could be a damaged repository (check with hg verify).
As a last resort, hg log --debug --time --profile will show you the main time-eaters.
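For example, here is a minimal profiling run; --time and --profile are global Mercurial options, and --limit 20 is only there to keep the output short. Add the same --template, --rev and --prune arguments if you want to profile the exact command the Jenkins plugin issues.
$ hg log --debug --time --profile --limit 20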
Does your new machine have a virus scanner running? Virus scanners intercept all file access, and hg log touches a lot of files.
Related
Last night, I tried to clone the cpython hg repo, but after ~ 30 min. of waiting, I cancelled, because it didn't seem to be working. Based on process time, it seemed to be doing hardly anything. Was I simply too impatient? Or should hg clone be pretty fast?
I'd just downloaded the latest hg:
$ hg --version
Mercurial Distributed SCM (version 2.5.4+20130405)
I ran this on Mac OSX 10.8.3.
I was using a good Internet connection: Comcast Business Class over WiFi, with the wireless router under my desk.
Looks like it should take < 5 min.
$ time hg clone http://hg.python.org/cpython python-repo-2
requesting all changes
adding changesets
adding manifests
adding file changes
added 83508 changesets with 184511 changes to 9865 files (+1 heads)
updating to branch default
3677 files updated, 0 files merged, 0 files removed, 0 files unresolved
real 3m11.586s
user 1m44.192s
sys 0m6.959s
I'm pretty sure I waited longer than that last night. Maybe the repo was experiencing high traffic last night, but everything is OK today? I am using a different Internet connection today, so it could be that.
Hopefully, someone finds this one data point to be useful.
I cloned a new repository with TortoiseHg version 2.1.3 and then made some changes. When I commit, I get the message below.
My desktop drive mapping is connected to a Linux server via Samba.
I would appreciate it if someone could help.
% hg commit --repository V:\htdocs\critical\mysite2 --verbose --user MyUser --message=testing Mercuial V:\htdocs\critical\mysite2/application/controllers/package.php
smartdox/application/controllers/package.php
transaction abort!
rollback completed
abort: The process cannot access the file because it is being used by another process
[command returned code 255 Fri Jan 13 14:30:17 2012]
mysite2%
For me, changing the setting Global Settings -> TortoiseHg -> Monitor Repo Changes to localonly helped.
The long discussion in the official bug tracker: https://bitbucket.org/tortoisehg/thg/issue/889/
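If you prefer editing the configuration file instead of the dialog, the same setting should live in your mercurial.ini / .hgrc; the key name below is from memory rather than from the bug report, so verify it against your TortoiseHg version:
[tortoisehg]
monitorrepochanges = localonly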
I've seen this same problem, but I've noticed that "occasionally" I am able to commit changes. I think the 'another process' is something on the server.
When I fail to commit, hg gives an error saying (among other things) "transaction abort! rollback failed - please run hg recover".
If I run hg recover, sometimes that fails, too (in use by another process). If I wait a minute or two, then retry to recover, it often succeeds.
Once the recovery succeeds, if I wait another minute or two, then the commit often succeeds when I retry it.
My theory is that the server is indexing or virus-scanning the contents of .hg/
I don't know a guaranteed work-around, but on my small repository I can often get my changesets in if I give it a try or two. Your luck is likely to increase as the activity on your repository files decreases.
I don't really know about committing, but I know that Mercurial/TortoiseHG has issues when you push to a Linux drive which is mapped under Windows.
See these answers I wrote about it:
Mercurial remotes on the file system instead of http server
Can you 'push' to network share using Mercurial on 64bit Windows 7?
Maybe the same problems occur when the repository you're trying to commit to directly resides on a mapped Linux drive.
I'd suggest that you put the repository on a real Windows drive and see whether you can commit there.
If that works, the problems you described are probably caused by the Linux drive.
I'm on dialup in a lousy place (yes, that still happens in 2011), and I'm trying to clone a huge repository. It starts without problems, but every time the dialup disconnects (which seems unavoidable), the !#%$* hg rolls everything back and I'm left with an empty directory again.
Is there a solution other than doing it on a remote PC and then downloading the whole thing by FTP or something?
In a bash-like shell you could do something like this:
$ hg init myclone
$ cd myclone
$ for REV in `seq 10 10 100` ; do hg pull -r $REV <REMOTEREPO>; done
Starting at 10, each pull downloads the next 10 revisions, up to 100. In case of a lost connection, adjust the first argument to seq to match what you've already pulled.
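If the line drops so often that adjusting seq by hand gets tedious, you can wrap each batch in a retry loop. This is only a sketch: the URL and the numbers are placeholders, and the upper bound of seq must not exceed the remote tip. Run it inside the freshly initialised clone:
REMOTE=http://example.org/hugerepo    # placeholder URL
for REV in $(seq 50 50 83500); do     # batches of 50; adjust to your repository
    until hg pull -r $REV "$REMOTE"; do
        echo "batch up to rev $REV failed, retrying in 30s"
        sleep 30
    done
done
hg pull "$REMOTE"    # pick up anything past the last batch
hg update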
Depending on how flaky your connection is, there are two options for performing initial clones.
First, you can try so-called “streaming clones”. These minimize Time To First Byte, but do generally require a bit more data to be transferred.
Here’s how to do a streaming clone:
$ hg clone --uncompressed https://~~~~
Your second option is an hg clone --rev operation, followed by a number of incremental pulls. This behaves like cloning the repository as it existed at some point in the past and then doing occasional updates.
$ hg clone --rev 5 https://~~~~
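The follow-up pulls then widen the history in steps; the directory name and revision numbers below are only placeholders, and the URL can be omitted because the clone records it as the default path:
$ cd <CLONEDIR>
$ hg pull -r 50
$ hg pull -r 500
$ hg pull       # everything that is still missing
$ hg update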
Based on the suggestions here, I created a repo that does this:
https://github.com/nootanghimire/hg-clone-bash
It's optimized for a single repo, but I guess you can fork and work on it! :)
Does anybody know why hg status is slow (3-10 seconds) the first time it's called from the command line on a Windows client? (I'm assuming it is cached after that.)
hg status is a local operation and it should not take that long, especially with an empty repo.
This is the case both on an active repository with several changes and on a brand-new repo with no files, so the size of the repo does not seem to be a factor in the performance.
Thanks!
When you run the hg status command, Mercurial has to scan almost every directory and file in your repository so that it can display file status. It has to perform at least one expensive system call for each managed file to determine whether it has changed since the last time Mercurial checked; there's no avoiding that.
I believe the reason subsequent calls to hg st are faster is the information the OS caches about recently accessed files, which avoids disk access when a file has not been modified. Sometimes the files themselves may even remain memory-mapped by the OS or sit in the disk cache.
Edit: also, if you haven't invoked hg in a while, the OS will need to read the hg executable and its dependencies from disk, since they might no longer be cached in RAM.
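One extra suggestion of mine, beyond the explanation above: newer Mercurial releases ship an fsmonitor extension that uses Watchman to avoid rescanning unchanged files, which targets exactly this scan cost. If your Mercurial bundles it (and Watchman is installed), you enable it in your .hgrc / mercurial.ini:
[extensions]
fsmonitor =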
I got a blue screen in Windows while cloning a Mercurial repository.
After reboot, I now get this message for almost all hg commands:
c:\src\>hg commit
waiting for lock on repository c:\src\McVrsServer held by '\x00\x00\x00\x00\x00\
x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'
interrupted!
Google is no help.
Any tips?
When "waiting for lock on repository", delete the repository file: .hg/wlock (or it may be in .hg/store/lock)
When deleting the lock file, you must make sure nothing else is accessing the repository. (If the lock is a string of zeros or blank, this is almost certainly true).
When waiting for lock on working directory, delete .hg/wlock.
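Before deleting anything, it is worth checking what actually holds the lock; newer Mercurial versions have hg debuglocks for that. A minimal sequence (run the rm commands only once you are sure no other hg process is alive):
$ hg debuglocks        # shows which host/pid holds the repository lock and wlock
$ rm .hg/store/lock    # repository lock
$ rm .hg/wlock         # working-directory lock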
I had this problem with no detectable lock files. I found the solution here: http://schooner.uwaterloo.ca/twiki/bin/view/MAG/HgLockError
Here is a transcript from Tortoise Hg Workbench console
% hg debuglocks
lock: user None, process 7168, host HPv32 (114213199s)
wlock: free
[command returned code 1 Sat Jan 07 18:00:18 2017]
% hg debuglocks --force-lock
[command completed successfully Sat Jan 07 18:03:15 2017]
cmdserver: Process crashed
PaniniDev% hg debuglocks
% hg debuglocks
lock: free
wlock: free
[command completed successfully Sat Jan 07 18:03:30 2017]
After this the aborted pull ran successfully.
The lock had been set more than two years earlier, by a process on a machine that is no longer on the LAN. Shame on the hg developers for a) not documenting locks adequately and b) not timestamping them so that stale locks can be removed automatically.
Coworker had this exact problem today, after a BSoD while trying to push. He had to:
delete the file .hg/store/lock (as per the accepted answer)
delete the file .hg/store/phaseroots (as per this TortoiseHG bug report)
Then his repo worked again.
EDIT: As per @Marmoute's comment: when dealing with lock-related issues, using hg debuglocks is a safer alternative to blindly deleting the .hg/store/lock file.
I am very familiar with Mercurial's locking code (as of 1.9.1). The above advice is good, but I'd add that:
I've seen this in the wild, but rarely, and only on Windows machines.
Deleting lock files is the easiest fix, BUT you have to make sure nothing else is accessing the repository. (If the lock is a string of zeros, this is almost certainly true).
(For the curious: I haven't yet been able to catch the cause of this problem, but suspect it's either an older version of Mercurial accessing the repository or a problem in Python's socket.gethostname() call on certain versions of Windows.)
I had the same problem. Got the following message when I tried to commit:
waiting for lock on working directory of <MyProject> held by '...'
hg debuglocks showed this:
lock: free
wlock: (66722s)
So I did the following command, and that fixed the problem for me:
hg debuglocks -W
Using Win7 and TortoiseHg 4.8.7.
I had the same problem on Win 7.
The solution was to remove the following files:
.hg/store/phaseroots
.hg/wlock
As for .hg/store/lock - there was no such file.
I do not expect this to be a winning answer, but it is a fairly unusual situation.
Mentioning in case someone other than me runs into it.
Today I got the "waiting for lock on repository" on an hg push command.
When I killed the hung hg command I could see no .hg/store/lock
When I looked for .hg/store/lock while the command was hung, it existed. But the lockfile was deleted when the hg command was killed.
When I went to the target of the push, and executed hg pull, no problem.
Eventually I realized that the process ID in the "waiting for lock" message of the hg push was changing each time. It turns out that the hg push was hanging while waiting for a lock held by itself (or possibly by a subprocess; I did not investigate further).
It turns out that the two workspaces, let's call them A and B, had .hg trees shared by symlink:
A/.hg --symlinked-to--> B/.hg
This is NOT a good thing to do with Mercurial. Mercurial does not understand the concept of two workspaces sharing the same repository. I do understand, however, how somebody coming to Mercurial from another VCS might want this (Perforce does it, although it is not a DVCS; the Bazaar DVCS reportedly can do it too). I am surprised that a symlinked REP-ROOT/.hg works at all, although it apparently does, except for this push.
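As an aside (my note, not part of the setup described above): later Mercurial versions ship a share extension that supports exactly this kind of layout without the hand-made symlink. A minimal sketch, assuming the extension is available in your version; enable it in your .hgrc:
[extensions]
share =
and then:
$ hg share B A    # creates working directory A that reuses B's repository store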
If the locked repo was the original, I can't imagine it was modifying it to clone it, so it was only preventing you from changing it in the middle and messing up the clone. It should be fine after removing the lock.
The new cloned copy (if it was a local clone) could be in any sort of malformed state, though, so you should throw it out and start it over. (If it was a remote clone, I would hope it failed and already threw out the incomplete copy.)
I encountered this problem on Mac OS X 10.7.5 and Mercurial 2.6.2 when trying to push. After upgrading to Mercurial 3.2.1, I got "no changes found" instead of "waiting for lock on repository". I found out that somehow the default path had gotten set to point to the same repository, so it's not too surprising that Mercurial would get confused.
If it only happens on mapped drives, it might be bug https://bitbucket.org/tortoisehg/thg/issue/889/cant-commit-file-over-network-share. Using a UNC path instead of a drive letter seems to sidestep the issue.