A coworker got this error after pulling from the repo. I searched online for a way to solve it but couldn't find anything, so I'm posting the fix I found here for anyone else with the same issue.
I asked everyone else working on the repository to check their user cache folder (C:\Users\username\AppData\Local\largefiles on Windows) to see if they had a file with that id ("XXX" from the title).
One of them did, the original author of the file.
I asked him to send it to me, then remote-connected to the server that hosts the central repo. I then copied the file both into the server's cache and into .hg\largefiles.
The user could then pull again and push and everything worked.
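For reference, the copy step on the server looked roughly like this (the repository path is hypothetical, "XXX" stands for the largefile hash from the error message, and %LOCALAPPDATA%\largefiles is the per-user cache of the account serving the repo):

copy XXX C:\Repos\central\.hg\largefiles\XXX
copy XXX %LOCALAPPDATA%\largefiles\XXX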
The largefiles (LF) extension seems to be incompatible with the keyword extension.
With both extensions enabled, the largefile is NOT put into the configured store folder on the PC at commit time, which then raises this error at push.
If you disable the keyword extension, everything works perfectly (see the snippet below).
Unfortunately, I haven't found any further explanation.
If someone could provide a stable solution, that would be great.
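In the meantime, the workaround is to disable the keyword extension in your hgrc; a minimal sketch (the "!" prefix tells Mercurial not to load the extension):

[extensions]
keyword = !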
It looks like hg pull is happy to be sparse, but hg push is not: you need all largefiles for every revision not already present on the new remote, so that the remote can be fully populated for history and clients can successfully pull at any revision. Which makes sense.
When migrating to a new hg server, I hit this issue. The "within Mercurial" solution was to download all largefiles, for all commits, to my local repo, and then push to the new server repo:
$ hg lfpull --rev 1:tip
$ hg push newbox
(Disclaimer: my Mercurial-fu is weak, I only use it for this one largefile repo)
I created a Mercurial repository on a file server's network share.
Is it possible to automatically update the remote repository to tip whenever somebody pushes their changes?
Some other people (pure consumers) may simply copy the repository's content rather than cloning it (no .hg folder involved), and I want them to get the newest version.
Since it is a share on a simple NAS, it would be good if the pushing client could trigger this update.
It seems that a hook on the changegroup event can solve this.
Add the following lines to the repository's configuration file (repo/.hg/hgrc):
[hooks]
changegroup = hg update
This solution was suggested on a slightly different question:
Cloning mercurial repo to the remote host
At least under Windows this seems to work only with local repositories. The reason is that hg tries to run cmd with the remote path as the current directory, which fails because cmd does not support UNC paths as the current directory.
Explicitly adding the repository URL fixes this, but it is no longer client-independent:
[hooks]
changegroup = hg update -R %HG_URL%
You could treat the server repository as your "local working directory" and then PULL from your own PC to that location. If you use hg pull --update then it will automatically update the working folder to the latest.
One way to do this is to log in to your NAS and run the hg command-line program there directly. Alternatively, you could mount the NAS folder on your local PC, chdir to its mapped local folder, and use your local hg client to do so.
This might seem like an odd thing to do but Mercurial doesn't care which is the "clone" and which is the "server", you can swap them interchangeably in your workflow.
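A rough sketch of that workflow from a Windows command prompt, assuming the NAS share is mapped as H: and your working clone lives at C:\work\project (both paths are hypothetical):

H:
cd \repos\project
hg pull --update C:\work\project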
I need to version a few files accessible via an SMB share. These files will be modified by several people and are used directly by a web server.
Since these are production files, I wanted to force the users to pull a local copy, edit it, commit, and push it back. Unfortunately there is no Mercurial server on that machine.
What would be the appropriate way to configure Mercurial on my side so that:
the versioning (.hg directory) is kept on the share
and that the files on the share are at the latest version?
I do not have access to this server (other than via the share). If I could have a mercurial server on that machine I would have used a hook to update the files in the production directory (I am saying this just to highlight what I want to achieve - this approach is not possible as I do not control that server)
Thanks!
UPDATE: I ended up using an intermediate server (which I have control over). A hook on changegroup triggers a script that i) runs hg update to refresh the local files and ii) copies them to the SMB share.
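A rough sketch of that setup on the intermediate server (the script name and all paths are hypothetical; the exit /b 0 keeps robocopy's non-zero "success" codes from failing the hook):

In the intermediate repo's .hg\hgrc:
[hooks]
changegroup = deploy.bat

deploy.bat, placed in the repository root:
@echo off
hg update
robocopy . \\srv\share\webfiles /MIR /XD .hg
exit /b 0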
EDIT 1: Following discussions in the comments with alex, I looked at the verbose command-line output. The \\srv\hg\test1 repo has a [hooks] section with changegroup = hg update. The output of hg push -v gives some insights:
pushing to \\srv\hg\test1
query 1; heads
(...)
updating the branch cache
running hook changegroup: hg update
'\\srv\hg\test1'
CMD.EXE was started with the above path as the current directory.
UNC paths are not supported. Defaulting to Windows directory.
abort: no repository found in 'C:\Windows' (.hg not found)!
warning: changegroup hook exited with status 255
checking for updated bookmarks
listing keys for "bookmarks"
If I understand the output above correctly:
a cmd.exe was started on the client, even though the [hooks] entry is in the receiving server's repo
it tried to update the remote repo
... but failed because UNC paths are not supported as the current directory
So alex's answer was correct - it just does not work (yet?) on MS Windows. (Alex please correct me in the comments if I am wrong)
If I understood correctly, you are looking for two things:
A repository hook that will automatically update the production repo to the latest version whenever someone pushes to it. This is simple: You're looking for the answer to this question.
If you can rely on your co-workers to always go through the pull-commit-push process, you're done. If that's not the case, you need a way to prevent people from modifying the production files in place and never committing them.
Unfortunately, I don't think you can selectively withhold write permissions to the checked-out files (but not to the repo) on an SMB share. But you could discourage direct modification by making the location of the files less obvious. Perhaps you could direct people to a second repository, configured so that everything pushed to it is immediately pushed on to the production repository. This repo need not have a checked-out version of the files at all (create it with hg clone -U, or do an hg update null afterwards), eliminating the temptation to bypass Mercurial.
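A sketch of that relay setup, in the .hg/hgrc of the second repository (the production path is hypothetical):

[hooks]
changegroup = hg push \\srv\production\repo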
What prevents you from mounting your Samba share and running hg init there? You don't need a Mercurial server (hg serve or anything more sophisticated) to perform push/pull operations.
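For example, from a Windows client (share name and paths are made up):

net use X: \\nas\share
hg init X:\project
hg clone X:\project C:\work\project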
When I create a repository and push it to the server, and we then clone the repository on a local system, the files come up with a red icon, meaning they are marked as changed.
When we compare both repositories, I found that the content of the files in the .hg folder differs.
Can anyone please tell me how to fix this problem?
Edit:
When we change the .hg folder, the red icon becomes green!
If you take one modified (changed) file, look at the diff closely, and see that the only differences are in line endings, this is the classic newline mess.
(It happens to most people when working cross-platform.)
There is a ready-to-use Mercurial extension that takes care of this problem.
It's called eol.
Learn how to use it, and more about the problem, here:
https://www.mercurial-scm.org/wiki/EolExtension
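A minimal sketch of using it: enable the extension in your hgrc and add a .hgeol file at the root of the repository (the patterns below are only examples):

[extensions]
eol =

.hgeol:
[patterns]
**.txt = native
**.bat = CRLF
**.sh = LF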
How do you push a locally created repository to the server? If there is no repo with the same name on the server, you won't be able to create the remote repo by pushing; you have to clone it to the server. Or, if there already is a repository with the same name and you push some newly created local one, there will definitely be more in .hg on the server than locally. Check whether there isn't already a repo with the same name on the server. HTH
I'm a VERY happy user of bitbucket and mercurial after years of putting up with subversion (and CVS, SourceSafe and others; anybody remember SCCS?). I've considerable project history now in my local hg repo, pushed daily to bitbucket and thence to my home machines.
Problem is, my company wants me to maintain a copy of this history in subversion. And I've hit a stone wall trying to set this up. I've installed hgsubversion, I think correctly. And I've used svnadmin to create an empty svn repository ready to hold the hg history.
But now what? The instructions say to start by pulling a clone (of nose? what's that?). I assume this means checking out a copy of the new, empty svn directory. OK, did that.
But now what? I assume the next step is to push my real local hg clone to the empty svn repo I just checked out. But nothing I've tried will do this. Pull fails as follows, reporting "unrelated repository" (as I recall, I gave it the URL of my master local hg repo to get around the "Needs a URL" popup on everything else I've tried):
found new changeset 139d02f4b233
examining 4e97a23b6815:342df9e52cec
abort: repository is unrelated
[command returned code 255 Mon Apr 25 11:29:33 2011]
The result of all this fumbling around is a directory with .hg, .svn and .hgignore entries and nothing else.
So, I feel I'm missing something basic that hundreds of others must have tried by now. Can someone please help me get started? Thanks!
PS: Currently the intent is to maintain SVN permanently as the team repo and push changes there from Hg periodically which would remain the main client for me indefinitely. In case this matters...
You can use the convert extension:
hg convert --dest-type svn mercurialrepo svnrepo
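The convert extension ships with Mercurial but needs to be enabled first, e.g. in your hgrc:

[extensions]
convert =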
And hgsubversion allows you to keep both of them in sync (bidirectionally).
Answered here at SO already:
Converting from Mercurial to Subversion
Migrating from Mercurial to Subversion
Perhaps look at the answers to this question.
Hgsubversion is for working with a repository cloned from SVN using Hg, not the other way around.
I am new to Mercurial. My friends created a repo and I am going to use it.
I installed TortoiseHG and am trying to get the latest code. I found that the Clone operation pulls all the code to my local machine, including the history (am I right?). I don't need that; I just want to get the latest code. Is there an operation for this?
In short, no.
In a bit longer: Mercurial doesn't yet support “shallow” clones where you only get part of the history. So each time you clone you pull in the entire repository with all changesets.
Additionally, unlike Subversion, there is no way to make a “narrow” clone where you only checkout a portion of a repository. For example, if a repository has directories foo/ and bar/, there is no way to get only the bar/ directory. In other words, Mercurial always operates on project-wide snapshots.
The easiest way to achieve what you want:
hg archive [destination folder]
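For example, after a regular (full) clone (the URL and folder names are made up):

hg clone https://example.com/hg/project
cd project
hg archive C:\temp\project-latest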
Once you have cloned a repository, to get the code of the "tip" (the last version of the current branch - the default branch if none is specified) you just need to update.
There is an update action in TortoiseHG. Once done, you can look at the files in the folder.
If you want another state of the repository (an old version, or an old tagged state), it's still the update command, with other parameters (see the docs or the TortoiseHG interface).
If you only want the latest code, and you don't intend to do anything related to the repository with it (like committing, or diffing against older versions), then it depends on where and how you got the code.
If your friend is using one of the hosting services, like Bitbucket, there's usually a download link which gives you just the source code.
For instance, if you go here, there's a "Get source" link up and to the right which gives you a few choices in the file format (zip or whatnot.)
If you got the files somewhere else, you need to explore the interface you got them from. Try just pasting the link you cloned from into your browser and see what you get.
Sure. Clone the repository, then delete the .hg subdirectory.
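For example (the URL is made up; on Unix-like systems use rm -rf project/.hg instead):

hg clone https://example.com/hg/project
rd /s /q project\.hg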
I might be a bit late, but it is actually possible to forget some history with Mercurial. You just need to enable the convert extension in your Mercurial.ini or .hgrc file:
[extensions]
hgext.convert=
Now you are able to use the convert extension to "clone" only the changesets starting from a specified revision:
hg convert --config convert.hg.startrev=[wheretostart] path_to_full_history_repo path_to_new_repo
Just note that this is not the same operation as hg clone; that's why the source repository must be a local repository. For example, if we have a repository in the folder MyProject and we want to forget all the changes made before revision 100, we can use the following command:
hg convert --config convert.hg.startrev=100 MyProject MyShrinkedProject
If you are going to use this shrunken repository on a "central server", remember to make sure that everybody clones it afresh before they continue working: the old and new repositories are not compatible with each other anymore.
Mercurial now supports shallow clones via the remotefilelog extension. The extension has been bundled with Mercurial probably since version 4.9; older versions need to download the extension, e.g. from GitHub.
You have to enable it on the server, e.g.:
[extensions]
remotefilelog =
[remotefilelog]
server = True
serverexpiration = 14
and on the client:
[extensions]
remotefilelog =
[remotefilelog]
cachepath = /some/path
cachelimit = 5 GB
Then you can do a shallow clone, with a much smaller footprint and faster clone speed:
hg clone --shallow ssh://user@server/repo