I have a repo on a network drive (served by Windows server), with local repos pushing/pulling to it on the various machines I work on.
I just dealt with this problem, and solved it by cloning the repo from the network drive to a local disk, pushing, then cloning it back again. The machine from which I did this had no problem pushing further changes afterwards.
Now I just tried pushing from my laptop, and this happens:
% hg --debug push "Z:\[main repo]"
pushing to Z:\[main repo]
query 1; heads
searching for changes
all remote heads known locally
listing keys for "bookmarks"
2 changesets found
list of changesets:
2ed25c8975482734e3b9eed828573fd711d26fd8
19a424c011ffd0c887cf1d54ed0b537a6c1af714
adding changesets
add changeset 2ed25c897548
add changeset 19a424c011ff
adding manifests
adding file changes
adding GEM.py revisions
transaction abort!
rollback completed
abort: No usable temporary file name found
[command returned code 255 Thu Mar 09 18:51:11 2017]
The only info pertaining to this error message I have found so far is this, and I definitely have no files named con.* in my project. There are several named con*.py, but they have never been a problem: both the laptop and my workstation are running Windows 7, and I've been working on this project for a few years now.
I have happily pushed from this laptop for over a year, and it was never a problem. I don't really have any good idea where to even start looking. Could it be connected to the fact that my workstation had the main repo open at the same time? It was definitely not doing anything to it at the time.
Update:
I ran hg verify, and this is what it returns -- no problem as far as I can tell:
% hg --debug verify
repository uses revlog format 1
checking changesets
checking manifests
crosschecking files in changesets and manifests
checking files
73 files, 74 changesets, 226 total revisions
[command completed successfully Fri Mar 10 08:58:02 2017]
I had faced the same error as well.
I just ran TortoiseHg as administrator and that fixed it for me.
I don't have an answer yet but I would try the following:
Update to the latest mercurial version (4.1) and try again
Verify the repo integrity with hg verify
Although I understand it always worked as is, try to rename all the con*.py files. The thing with CON is that it represents a device; I think it comes from DOS times :-)
If I understand correctly, you push to Z:\[main repo], where Z: is a Windows share. Try to push to the same repo in another way, e.g. over ssh (requires some setup, yes).
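For example, with SSH access to the machine that hosts the share, the push would look roughly like this (hypothetical user name and server-side path, adjust to your setup):
hg push ssh://you@fileserver//srv/hg/main-repo
Note the double slash after the host: in Mercurial's ssh URLs a single slash means a path relative to the remote login's home directory, while a double slash means an absolute path.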
Good luck, very bizarre problem :-/
Related
I have noticed my Mercurial repository expanding in size whenever I use repo B to pull changes from repo A.
It seems that TortoiseHg creates files like hg-bundle-r3e6uf.hg10un under the .hg directory. These files are usually 1-2 MB in size each, so nothing too big, but together they add up and can be an annoyance when doing backups.
This does not seem to happen if I pull changes instantly without reviewing them, or if I use repo A to push changes to B.
These bundle files seem useless as they are not copied when cloning the repository B.
Also, the cloned repo is almost half the size without them, so it seems the data in these files was never moved into the other files either.
Is it possible to:
A) Avoid creating these bundles on pull? (Pushing is an option only when I have access to both repos.)
B) Use some command to clean up the .hg directory? (Cloning is not very elegant.)
EDIT:
When I select 'Incoming' first bundle is created:
% hg --repository C:\temp\hg\testB incoming --quiet --bundle c:\docume~1\username\locals~1\temp\thg.hlngus\CtemphgtestA_iavzew.hg C:\temp\hg\testA
1:d806c8cb0355
2:e0e3b20d5cb2
3:4e803a7ecefc
[command completed successfully Fri Aug 02 09:59:12 2013]
and then when I select 'Accept', the second bundle is created:
% hg --repository C:\temp\hg\testB pull --verbose c:\docume~1\username\locals~1\temp\thg.hlngus\CtemphgtestA_iavzew.hg
pulling from c:\docume~1\username\locals~1\temp\thg.hlngus\CtemphgtestA_iavzew.hg
searching for changes
all local heads known remotely
3 changesets found
adding changesets
adding manifests
adding file changes
added 3 changesets with 3 changes to 1 files
(run 'hg update' to get a working copy)
[command completed successfully Fri Aug 02 10:00:10 2013]
Whereas using 'Pull' directly, no extra bundles are created:
% hg --repository C:\temp\hg\testB pull --verbose C:\temp\hg\testA
pulling from C:\temp\hg\testA
searching for changes
all local heads known remotely
3 changesets found
adding changesets
adding manifests
adding file changes
added 3 changesets with 3 changes to 1 files
(run 'hg update' to get a working copy)
[command completed successfully Fri Aug 02 10:01:52 2013]
It seems this is a TortoiseHg-specific issue. The solution is either to push, or to pull directly from the command line, to avoid the extra bundles. The only (safe) way to clean up seems to be cloning the repository.
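For example, to clean up repo B from the test setup above by cloning (the leftover hg-bundle-*.hg files are not copied into the clone):
hg clone C:\temp\hg\testB C:\temp\hg\testB-clean
Once you have verified the clone, use it in place of the bloated original.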
I cloned a new repository with TortoiseHg version 2.1.3, then made some changes. When I commit, I get the message below.
My desktop drive mapping is connected to a Linux server via Samba.
I would really appreciate it if someone could help.
% hg commit --repository V:\htdocs\critical\mysite2 --verbose --user MyUser --message=testing Mercuial V:\htdocs\critical\mysite2/application/controllers/package.php
smartdox/application/controllers/package.php
transaction abort!
rollback completed
abort: The process cannot access the file because it is being used by another process
[command returned code 255 Fri Jan 13 14:30:17 2012]
mysite2%
For me changing the setting:
Global Settings -> TortoiseHg -> Monitor Repo Changes
to
localonly
helped.
The long discussion in the official bug tracker: https://bitbucket.org/tortoisehg/thg/issue/889/
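If you prefer editing the configuration file directly, I believe the dialog writes something like the following to your mercurial.ini / .hgrc (the exact key name is my assumption; the settings dialog remains the authoritative way to change it):
[tortoisehg]
monitorrepo = localonly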
I've seen this same problem, but I've noticed that "occasionally" I am able to commit changes. I think the 'another process' is something on the server.
When I fail to commit, hg gives an error saying (among other things) "transaction abort! rollback failed - please run hg recover".
If I run hg recover, sometimes that fails, too (in use by another process). If I wait a minute or two, then retry to recover, it often succeeds.
Once the recovery succeeds, if I wait another minute or two, then the commit often succeeds when I retry it.
My theory is that the server is indexing or virus-scanning the contents of .hg/
I don't know a guaranteed work-around, but on my small repository I can often get my changesets in if I give it a try or two. Your luck is likely to increase as the activity on your repository files decreases.
I don't really know about committing, but I know that Mercurial/TortoiseHg has issues when you push to a Linux drive that is mapped under Windows.
See these answers I wrote about it:
Mercurial remotes on the file system instead of http server
Can you 'push' to network share using Mercurial on 64bit Windows 7?
Maybe the same problems occur when the repository you're trying to commit to directly resides on a mapped Linux drive.
I'd suggest that you put the repository on a real Windows drive and see if you can commit there.
If yes, the problems you described are probably because of the Linux drive.
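For example, using the repository from the question and a hypothetical local path:
hg clone V:\htdocs\critical\mysite2 C:\temp\mysite2-test
cd C:\temp\mysite2-test
(edit a file)
hg commit -m "test commit on a local drive"
If the commit succeeds there, the mapped Samba drive is the likely culprit.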
I'm working on my thesis and storing changes in Mercurial. I'm now getting an error that I've got multiple heads, which I'm confused about -- I've only got one working repository, which I push occasionally to Bitbucket. Here's what happened:
$ hg commit -m "Fixing up..."
abort: No such file or directory: /Users/me/Dropbox/thesis/thesis_tex/simple.pdf
$ hg commit -m "Adding in page headers"
created new head
...one more commit, not having realized that having created a new head was a problem...
$ hg push
pushing to ssh://hg@bitbucket.org/...
searching for changes
abort: push creates new remote heads!
(did you forget to merge? use push -f to force)
$ hg heads
changeset: 26:3823a395b1ce
tag: tip
user: me
date: Fri Aug 26 09:39:45 2011 -0400
summary: Adding...
changeset: 24:c7c6517d4281
user: me
date: Thu Aug 25 16:34:42 2011 -0400
summary: Fixing up...
How can I easily get rid of the other head, without messing up my working directory? Why did a second head get created? Is there a problem with keeping mercurial repositories in dropbox folders?
How can I easily get rid of the other head, without messing up my working directory?
Merge the branches with hg merge.
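A minimal sequence would be (the commit message is just an example):
hg merge
hg commit -m "merge heads"
hg push
If both heads touched the same files, you may need to resolve conflicts first; hg resolve --list shows what is still unresolved.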
Is there a problem with keeping mercurial repositories in dropbox folders?
This seems redundant to me, since Dropbox keeps a 30-day history of your files. I'd choose either Hg+Bitbucket (or whatever Mercurial hosting) or Dropbox, but not both.
Edit: Why not use Mercurial and Dropbox? Here's why.
Ben Hughes said it well:
By keeping your Mercurial repo on Dropbox you’re version controlling your version control system files. If you somehow manage to cause a conflict with files in your .hg directory, things could get messy. Recoverable, but messy.
I disagree with @Matt Ball on the Dropbox matter.
I think redundancy is your friend. Also the purpose of Hg and the dropbox is slightly different in your case.
Dropbox backs up for you automatically (as well as tracking file changes, letting you work on any machine with internet access, and making it easy to share your work), while Hg lets you work offline and commit in logical chunks (as well as acting as a backup when you sync).
I have used Hg in my Dropbox folder for at least a year and have not experienced any problems.
If you work on more than one machine and have the Dropbox/Hg combo on all of them, I suggest that you let Dropbox do the syncing.
I just did hg pull on a repository and brought in some changesets. It said to run hg update, so I did. Unfortunately, when I did that, it failed with the following error message:
abort: integrity check failed on 00manifest.i:173!
When I run hg verify, it tells me there are a number of issues with things not in the manifest (with some slight path obscuring):
>hg verify
checking changesets
checking manifests
crosschecking files in changesets and manifests
somewhere1/file1.aspx#172: in changeset but not in manifest
somewhere2/file1.pdf#170: in changeset but not in manifest
checking files
file3.csproj#172: ee005cae8058 not in manifests
somewhere2/file1.pdf#171: 00371c8b9d95 not in manifests
somewhere3/file1.ascx#170: 5c921d9bf620 not in manifests
somewhere4/file1.ascx#172: 23acbd0efd3a not in manifests
somewhere5/file1.aspx#170: ce48ed795067 not in manifests
somewhere5/file2.aspx#171: 15d13df4206f not in manifests
1328 files, 174 changesets, 3182 total revisions
8 integrity errors encountered!
(first damaged changeset appears to be 170)
The source repository passes hg verify just fine.
Is there any way to recover from an integrity check failure or do I need to re-clone the repository completely from the source (not a huge issue in this case)? What could I have done to cause this, so I don't do it again?
Well, since the first damaged changeset is 170, you could clone your local repository up to revision 169 and then pull from the source. That means only pulling the last four changesets.
hg clone -r 169 damagedrepo fixedrepo
cd fixedrepo
hg verify
And then:
hg pull originalsource
As for manual recovery of repository corruption, this page expounds on it better than I can; see section 4.
I have found corruption once in a while before, and although the above
documentation says it is usually from user error, my instances were on
removable USB drives with empty working directories. Sometimes things
just don't get written correctly or are interfered with somehow: it's
not always user error. But I always have multiple copies I can reclone
from so I've been able to get away with basic fixing.
If the simple approach of a partial local clone plus pulling from the server doesn't fix it, you're down to two options after backing up your changes (if any) to a bundle or patches:
Manually hacking at Mercurial's files.
Doing a new full clone from the server. Usually the easier and faster of the two.
Actually, there is another way to recover the repository when it is corrupted like this:
You can do a complete rebuild of the repository by using the convert extension. Beware: this method will change all hashes. See Section 4.5 on https://www.mercurial-scm.org/wiki/RepositoryCorruption#Recovery_using_convert_extension
First enable the convert extension by adding the following to your ~/.hgrc file
[extensions]
convert=
Then convert the bad repo to create a fixed repo:
$ hg convert --config convert.hg.ignoreerrors=True REPO REPOFIX
This worked for me when I had the experience of suddenly finding that there were missing files in the manifests - "error 255".
Try removing the file 00manifest.i from the repo, then use the hg remove 00manifest.i and hg commit commands. Worked for me.
What we ended up doing was making a new copy of our 'central' repository, deleting the .hg folder in this copy, creating a new repository there (hg init), and then working with this as the central repository.
Be aware, however, that this is only an appropriate solution if you don't need your changeset history other than as a reference (which we don't). You can still use your old central repository for that purpose.
Got a blue screen in Windows while cloning a Mercurial repository.
After reboot, I now get this message for almost all hg commands:
c:\src\>hg commit
waiting for lock on repository c:\src\McVrsServer held by '\x00\x00\x00\x00\x00\
x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'
interrupted!
Google is no help.
Any tips?
When "waiting for lock on repository", delete the repository file: .hg/wlock (or it may be in .hg/store/lock)
When deleting the lock file, you must make sure nothing else is accessing the repository. (If the lock is a string of zeros or blank, this is almost certainly true).
When waiting for lock on working directory, delete .hg/wlock.
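For example, on Windows, after making sure no other hg or TortoiseHg process is touching the repo:
hg debuglocks
del .hg\store\lock
del .hg\wlock
The first command shows who (if anyone) holds each lock; the two del commands remove the repository lock and the working-directory lock respectively. On reasonably recent Mercurial versions, hg debuglocks -L / -W can clear them for you instead of deleting the files by hand.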
I had this problem with no detectable lock files. I found the solution here: http://schooner.uwaterloo.ca/twiki/bin/view/MAG/HgLockError
Here is a transcript from Tortoise Hg Workbench console
% hg debuglocks
lock: user None, process 7168, host HPv32 (114213199s)
wlock: free
[command returned code 1 Sat Jan 07 18:00:18 2017]
% hg debuglocks --force-lock
[command completed successfully Sat Jan 07 18:03:15 2017]
cmdserver: Process crashed
PaniniDev% hg debuglocks
% hg debuglocks
lock: free
wlock: free
[command completed successfully Sat Jan 07 18:03:30 2017]
After this the aborted pull ran successfully.
The lock had been set more than 2 years ago, by a process on a machine that is no longer on the LAN. Shame on the hg developers for a) not documenting locks adequately; b) not timestamping them for automatic removal when they get stale.
Coworker had this exact problem today, after a BSoD while trying to push. He had to:
delete the file .hg/store/lock (as per the accepted answer)
delete the file .hg/store/phaseroots (as per this TortoiseHG bug report)
Then his repo worked again.
EDIT: As per @Marmoute's comment - when dealing with lock-related issues, using hg debuglocks is a safer alternative to blindly deleting the .hg/store/lock file.
I am very familiar with Mercurial's locking code (as of 1.9.1). The above advice is good, but I'd add that:
I've seen this in the wild, but rarely, and only on Windows machines.
Deleting lock files is the easiest fix, BUT you have to make sure nothing else is accessing the repository. (If the lock is a string of zeros, this is almost certainly true).
(For the curious: I haven't yet been able to catch the cause of this problem, but suspect it's either an older version of Mercurial accessing the repository or a problem in Python's socket.gethostname() call on certain versions of Windows.)
I had the same problem. Got the following message when I tried to commit:
waiting for lock on working directory of <MyProject> held by '...'
hg debuglocks showed this:
lock: free
wlock: (66722s)
So I did the following command, and that fixed the problem for me:
hg debuglocks -W
Using Win7 and TortoiseHg 4.8.7.
I had the same problem on Win 7.
The solution was to remove the following files:
.hg/store/phaseroots
.hg/wlock
As for .hg/store/lock - there was no such file.
I do not expect this to be a winning answer, but it is a fairly unusual situation.
Mentioning in case someone other than me runs into it.
Today I got the "waiting for lock on repository" on an hg push command.
When I killed the hung hg command I could see no .hg/store/lock
When I looked for .hg/store/lock while the command was hung, it existed. But the lockfile was deleted when the hg command was killed.
When I went to the target of the push, and executed hg pull, no problem.
Eventually I realized that the process ID in the hg push's "waiting for lock" message was changing each time. It turned out that the hg push was hanging waiting for a lock held by itself (or possibly by a subprocess; I did not investigate further).
It turns out that the two workspaces, let's call them A and B, had .hg trees shared by symlink:
A/.hg --symlinked-to--> B/.hg
This is NOT a good thing to do with Mercurial. Mercurial does not understand the concept of two workspaces sharing the same repository. I do understand, however, how somebody coming to Mercurial from another VCS might want this (Perforce does it, although it is not a DVCS; the Bazaar DVCS reportedly can do it). I am surprised that a symlinked REP-ROOT/.hg works at all, although it seems to, except for this push.
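For what it's worth, the supported way to get two working directories backed by one repository is the bundled share extension rather than a symlink. A rough sketch, with B as the existing repo:
[extensions]
share =
hg share B A
That gives A its own working directory and dirstate while it reads and writes B's .hg/store, with proper locking.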
If the locked repo was the original, I can't imagine the clone was modifying it, so the lock was only preventing you from changing it mid-clone and messing up the clone. It should be fine after removing the lock.
The newly cloned copy (if it was a local clone) could be in any sort of malformed state, though, so you should throw it out and start over. (If it was a remote clone, I would hope it failed and already threw out the incomplete copy.)
I encountered this problem on Mac OS X 10.7.5 and Mercurial 2.6.2 when trying to push. After upgrading to Mercurial 3.2.1, I got "no changes found" instead of "waiting for lock on repository". I found out that somehow the default path had gotten set to point to the same repository, so it's not too surprising that Mercurial would get confused.
If it only happens on mapped drives, it might be bug https://bitbucket.org/tortoisehg/thg/issue/889/cant-commit-file-over-network-share. Using a UNC path instead of a drive letter seems to sidestep the issue.
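For example, instead of pushing to the mapped drive letter:
hg push Z:\[main repo]
try the underlying UNC path directly (hypothetical server and share names):
hg push \\fileserver\share\main-repo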