I am working with a Mercurial repository and switched from installing hg via one package manager (nix) to another (guix).
Now I get the error message:
% hg pull
abort: accessing `persistent-nodemap` repository without associated fast implementation.
(check `hg help config.format.use-persistent-nodemap` for details)
And I cannot interact with the repo anymore. How can I circumvent hg's attempt to use persistent-nodemap? My repo is only a few lines of code and I don't need any performance optimisation.
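One workaround I am considering (an untested sketch: it assumes hg debugupgraderepo can downgrade the format named in the abort message) is to turn the option off and rewrite the repository's requirements:

hg debugupgraderepo --config format.use-persistent-nodemap=no --run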
I'd like to pull a Mercurial web repository into my local filesystem to work on. I used the following command, but I get the error: Mercurial Repository (.hg directory not found). But my boss said Mercurial is installed on the machine.
hg pull https://username#web_repository_name
What is the proper way to get a working copy of a Mercurial repository?
You need to start by executing a clone of the repository, after which you can use pull to incrementally add new changes.
These are really the basics of Mercurial, so I would propose you read this tutorial, followed by this Mercurial guide.
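As a minimal sketch (the URL below is just a placeholder for your repository's address):

hg clone https://server/hg/myrepo   # first time only: creates the local repository and working copy
cd myrepo
hg pull -u                          # afterwards: fetch new changes and update the working copy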
Summarised Question:
Are GitHub-hosted subrepositories within a Mercurial/Kiln repository possible, and if so, are they automatically updated/cloned when the parent Mercurial repository is operated on by an hg clone or hg commit command?
Detailed Question:
Following on from my question that was answered so excellently here, some of my third-party code is in folders I downloaded a while ago from open-source efforts on GitHub. Since at that stage I was not using version control, those folders were just standard folders that have now been incorporated as subrepositories in Mercurial.
This is obviously not ideal: for one thing, new versions of the libraries may have bug fixes or new features I wish to use in the future. I may also need to locally customise some of the libraries.
I can see from reading this link that it is possible to have Mercurial "know" about those Git server URLs (and revisions), so I can then have Mercurial clone the GitHub-hosted libraries directly from their parent repos.
Am I right in saying that when I clone the parent (Mercurial) repo, those files will be pulled from GitHub, without having to separately manage this using Git?
What is also not clear is: if I were to do this, and it transpired that code needed to be customised from within that GitHub-cloned repository, would I need to use Git to manage revisions of the local files, or would Mercurial do that by proxy? E.g. if I were to hg commit -S, would Mercurial invoke Git on my behalf to handle that?
Am I right in saying that when I clone the parent (Mercurial) repo, those files will be pulled from GitHub, without having to separately manage this using Git?
Yes, cloning a Mercurial repository that contains subrepositories will trigger a clone of the subrepos too. (Strictly speaking, it happens on update.) Mercurial notices the .hgsub file and issues the needed hg clone and git clone commands for you. It uses the information in .hgsubstate to know exactly which revision to check out.
The subrepositories can be hosted anywhere. For a Git subrepository declared in .hgsub like this:
foo = [git]https://github.com/user/repo.git
Mercurial will simply issue the corresponding clone command:
git clone https://github.com/user/repo.git foo
It's then your responsibility to later go into the foo repo and use Git to fetch new commits as necessary. After you fetch/pull new commits, you can make a top-level commit to record the new state of the subrepo in the .hgsubstate file. Use hg summary to see if a subrepo is dirty in this sense.
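A sketch of that round trip, using the foo subrepo from above:

cd foo
git pull                # bring in new upstream commits in the subrepository
cd ..
hg summary              # shows the foo subrepo as dirty in this sense
hg commit -m "Update foo subrepo to latest upstream"   # records the new revision in .hgsubstate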
[...] would I need to use Git to manage revisions of the local files, or would Mercurial do that by proxy? E.g. if I were to hg commit -S, would Mercurial invoke Git on my behalf to handle that?
When you edit files and make a top-level hg commit, Mercurial will make sure to commit the subrepo first (if you use hg commit -S or if ui.commitsubrepos=True). If you make a top-level push, then Mercurial will always push the subrepos first so that you always have a consistent set of changes on your server.
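For reference, the two variants mentioned above (the commit message is just an example):

hg commit -S -m "commit, including any dirty subrepos"

or, to make a plain hg commit behave the same way, in an hgrc file such as ~/.hgrc or the repository's .hg/hgrc:

[ui]
commitsubrepos = True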
I've just started with Mercurial. I have a 'central' repository on Bitbucket which I cloned onto one machine, where I made changes, committed and pushed. I then cloned from Bitbucket to another machine, committed and pushed, which was fine. I then came back to the first machine, made changes, committed and attempted to push, but got the error abort: repository is unrelated. What am I doing wrong? Should I have pulled first? How can I resolve the error and push? Any help is appreciated!
Darren.
A Mercurial repository gets its identity when you make the first commit in it. When you create a new repository on Bitbucket, you create an empty repository with no identity.
When you clone this repository to machine A and make a commit and push it back, then you brand the repository. If you have cloned the repository on the second machine before pushing from the first, then you can end up in the situation you describe.
Please run hg paths on the machine where you cannot push. Then make a separate clone of the repository it says it will push to. Now examine the first changeset in each repository with
hg log -r 0
If the initial changesets are different, then you have two unrelated repositories, as we call it in Mercurial. You can then export the changes you cannot push as patches and import them into the other.
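A sketch of that export/import step; the revision numbers and file name here are only examples:

hg export -r 1 -r 2 > my-changes.patch    # in the repository you cannot push from
hg import my-changes.patch                # in a fresh clone of the repository you push to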
If you're pretty sure the push path is correct, it may be worth it to just export your changes to patches from the problem repo, clone again from Bitbucket and then import the patches into the new repo. This will either just work or reveal a bad/corrupted commit.
I would like to share knowledge about Mercurial internals.
Repositories are unrelated when they have no revisions in common.
You can find the corresponding piece in mercurial/treediscovery.py:
base = list(base)
if base == [nullid]:
    if force:
        repo.ui.warn(_("warning: repository is unrelated\n"))
    else:
        raise util.Abort(_("repository is unrelated"))
base is a list of the roots of the parts common to both the local and remote repositories.
You can always see how the repositories differ with:
$ hg in $REMOTE
$ hg out $REMOTE
You can always check the roots of both (after cloning both locally):
$ hg -R $ONE log -r "roots(all())"
$ hg -R $TWO log -r "roots(all())"
If the output of the above commands doesn't share any IDs, those repositories are unrelated. Due to the properties of the hash, it is practically impossible for the roots to be equal by accident. You also cannot trick the roots check by carefully crafting repositories, because building two repositories like these (with common parts but different roots):
0 <--- SHA-256-XXX <--- SHA-256-YYY <--- SHA-256-ZZZ
0 <--- SHA-256-YYY <--- SHA-256-ZZZ
is impossible, because that would mean reversing SHA-256, as each subsequent hash depends on the previous values.
With this information, I believe any developer will be able to troubleshoot the repository is unrelated error.
See also Mercurial repository identification
Thanks for your attention, and good hacking!
You get this message when you try to push to a repository other than the one that you cloned. Double-check the address of the push, or the default path, if you're just using hg push by itself.
To check the default path, you can use hg showconfig | grep ^paths\.default (or just hg showconfig and look for the line that starts with paths.default=).
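If the default path turns out to be wrong, it can be corrected in the repository's .hg/hgrc (the URL below is only a placeholder):

[paths]
default = https://bitbucket.org/youruser/yourrepo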
After a change in my local repo, I commit with hg commit -m <message>, then I push to my server with hg push, then on my server I update the working directory with hg update.
Is there a better way to do this?
The first two steps you have described:
hg commit -m <message>
hg push
are required because, in Mercurial (and most other DVCSs as well), commits are kept completely separate from the server. You could write a post-commit hook to perform the push after each commit, but this is not advised, because it prevents you from correcting simple mistakes during the commit and before the push.
Because you're trying to perform an update on 'the server' I'm assuming you are executing a version of the code in your repository on the server. I'm assuming this because typically the server would simply act as a master repository for you and your developers to access (and also to be subject to backups, etc..), and would not need the explicit hg update.
Assuming you are executing code on the server, you can try and replace the push and the update with this command:
hg pull <path to development repo> -u
which will perform a pull from your local repo and then an automatic update. Depending on your server configuration, it might be difficult to get the path to your local repo.
For the first part of the question (i.e. automatically pushing when you do a commit), you can use the trick described in this answer: mercurial automatic push on every commit.
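A minimal sketch of such an auto-push hook (assuming the default push path is configured and reachable), placed in the local repository's .hg/hgrc:

[hooks]
# push to the default path after every successful commit
commit.autopush = hg push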
If you want to automatically update the working directory, you can do this with a hook. Add this to the hgrc of the repository on your server (.hg/hgrc in the server's repository directory):
[hooks]
changegroup = hg update >&2
This will automatically update the working directory every time a push is made to this server. This hook is described in the Mercurial FAQ.
If you use these 2 solutions, the next time you do hg commit -m "message", the commit will be automatically pushed to the remote server and the working directory on the server will be updated.
There is an extension called autosync you might find useful:
This extension provides the autosync command which automatically and continuously commits working copy changes, fetches (pull, merge, commit) changes from another repository and pushes local changes back to the other repository. Think of configuration files or to-do lists as examples for things to synchronize. On a higher level the autosync command not only synchronizes repositories but working copies. A central repository (usually without a working copy) must be used as synchronization hub.
Before explaining my problem, let me describe the Mercurial setup.
We have the following repos:
RELEASE
DEVELOPMENT
BUGFIX
All the above repos are hosted on a central server using IIS and hgwebdir.cgi.
Now, coming to the problem:
I clone a local repo from DEVELOPMENT repo.
I make changes to the clone and commit (Not push).
I make a bundle from the clone and pass the bundle to QA who has cloned the RELEASE repo.
Now I try to apply the bundle to the RELEASE repo clone using hg unbundle
I get an error: abort: error: ftp error: no host given
What am I doing wrong? Can you give solution to the above problem keeping a Windows setup in mind?
It really sounds like you have a syntax error in your unbundle command. The normal usage is just:
hg unbundle c:\path\to\the.bundle
There's no FTP involved unless you're trying to use an ftp:// URL, which isn't supported. Is it possible you have a directory named ftp and the parser is mistaking it for a component of an FTP URL?
Also, most folks wouldn't use bundles in the scenario you're describing. They'd just do:
hg push URL-or-file-path-to-QA
and push directly to QA's own repo (not to RELEASE).
People generally use bundles only when a network connection isn't possible or practical.
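If a bundle really is needed (say, there is no network path between the machines), the round trip looks roughly like this; the paths and server URL are placeholders, and the bundle is built against the repository the recipient cloned (RELEASE here) so it contains everything their clone is missing:

REM on the developer's machine
hg bundle c:\temp\myproj_changes.hg https://server/hg/RELEASE

REM on the QA machine, inside their clone of RELEASE
hg unbundle c:\temp\myproj_changes.hg
hg update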
I experienced the same problem; I don't think hg likes UNC paths.
I mapped \\server\DevSourceCode\Mercurial to R: and it worked fine; see below:
R:\Repositories\myproj>hg unbundle \\server\DevSourceCode\Mercurial\ChangeBundles\myproj_changes.hg
abort: error: ftp error: no host given
R:\Repositories\myproj>hg unbundle R:\ChangeBundles\myproj_changes.hg
adding changesets
adding manifests
adding file changes
added 0 changesets with 0 changes to 139 files
(run 'hg update' to get a working copy)