Assume I recover a Mercurial repository from a broken file system (e.g. a bad hard drive), and I want to be sure that it was not affected.
How can I force a self-check in Mercurial? That is, Mercurial walks through the whole history and checks that all checksums fit their respective dataset, and that the repository as a whole is consistent.
Is it sufficient to perform a local "hg clone" to enforce that check?
Is there something like "git fsck" for Mercurial?
The command for a pure check is:
hg verify
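If the repository is not your current working directory, hg verify also accepts the standard -R flag. A run might look roughly like this (the path and the numbers are examples; the exact output wording varies between Mercurial versions):

hg verify -R /data/recovered-repo
checking changesets
checking manifests
crosschecking files in changesets and manifests
checking files
checked 1423 changesets with 3761 changes to 418 files

A healthy repository ends with the summary line and reports no integrity errors.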
In case the repository is corrupt, the Mercurial wiki provides recovery instructions:
https://www.mercurial-scm.org/wiki/RepositoryCorruption
Of course, this only checks the commits, not the working directory. That is, it neither checks local changes that were not yet committed, nor ignored files such as build results. None of those can be verified by Mercurial; they would either have to be verified by different means, or simply be reset using a fresh Mercurial checkout and a fresh build.
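For the working directory, you can at least enumerate what verify does not cover before deciding to reset (a sketch; the paths are examples):

hg status            # uncommitted local changes, if any survived
hg status --ignored  # ignored files such as build results
hg clone /data/recovered-repo /data/fresh-checkout   # known-good fresh checkout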
Related
Quite often in Mercurial, I happen to need local changes to a local repository, which should never ever enter the main repository. This could be (not complete list)
config files which need to differ on my PC
I want to mark my compilations so that I can distinguish versions compiled by me from official compilations
hgsub needs to differ on a PC without network access
Using TortoiseHG, my work scheme looks like this at the moment:
Commit all relevant changes, leaving the changes to be kept local uncommitted
Shelve changes to be kept local
Push or Pull
Unshelve
This works until I forget to exclude the changes to be kept local, which will happen sooner or later...
Then I have to waste time restoring the previous state.
Is there any way to do this, e.g. by certain extensions?
Thanks for your help
You can use the secret hg phase for this.
commit your local changes
run hg phase -sf .
This will mark the current changeset secret. Secret changesets are never pushed.
A caveat with secret phases:
All secret changesets must always be on top of the stack. When you pull changes, you will need to rebase secret changes back to the top. This is typically as easy as hg rebase.
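The full cycle might then look like this (a sketch; it assumes the rebase extension is enabled in your hgrc, and the commit message is an example):

hg commit -m "local-only config"
hg phase -sf .   # mark the new changeset secret
hg pull          # upstream changes arrive
hg rebase        # move the secret changeset back to the top
hg push          # secret changesets are skipped automatically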
I need to control the version of a few files accessible via an SMB share. These files will be modified by several people. The files themselves are directly used by a web server.
Since these are production files I wanted to force the users to pull a local copy, edit them, commit and push them back. Unfortunately there is no Mercurial server on that machine.
What would be the appropriate way to configure Mercurial on my side so that:
the versioning (.hg directory) is kept on the share
and that the files on the share are at the latest version?
I do not have access to this server (other than via the share). If I could run a Mercurial server on that machine, I would have used a hook to update the files in the production directory (I am saying this just to highlight what I want to achieve; this approach is not possible as I do not control that server).
Thanks!
UPDATE: I ended up using an intermediate server (which I have control over). A hook on changegroup triggers a script which i) runs hg update to get fresh local files and ii) copies them to the SMB share.
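A sketch of that setup (all paths are hypothetical; the hook lives in the intermediate repo's .hg/hgrc):

[hooks]
changegroup = /usr/local/bin/deploy.sh

# /usr/local/bin/deploy.sh
#!/bin/sh
hg update                                   # i) fresh local files
rsync -a --exclude .hg ./ /mnt/share/www/   # ii) copy them to the SMB share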
EDIT 1: Following discussions in the comments with alex, I have looked at the verbose version of the command-line output. The \\srv\hg\test1 repo has a [hooks] section with changegroup = hg update. The output from an hg push -v gives some insights:
pushing to \\srv\hg\test1
query 1; heads
(...)
updating the branch cache
running hook changegroup: hg update
'\\srv\hg\test1'
CMD.EXE was started with the above path as the current directory.
UNC paths are not supported. Defaulting to Windows directory.
abort: no repository found in 'C:\Windows' (.hg not found)!
warning: changegroup hook exited with status 255
checking for updated bookmarks
listing keys for "bookmarks"
If I understand the output above correctly:
a cmd.exe was triggered on the client, even though the [hooks] section was on the receiving server
it tried to update the remote repo
... but failed because UNC paths are not supported
So alex's answer was correct - it just does not work (yet?) on MS Windows. (Alex please correct me in the comments if I am wrong)
If I understood correctly, you are looking for two things:
A repository hook that will automatically update the production repo to the latest version whenever someone pushes to it. This is simple: You're looking for the answer to this question.
If you can rely on your co-workers to always go through the pull-commit-push process, you're done. If that's not the case, you need a way to prevent people from modifying the production files in place and never committing them.
Unfortunately, I don't think you can selectively withhold write permissions to the checked-out files (but not to the repo) on an SMB share. But you could discourage direct modification by making the location of the files less obvious. Perhaps you could direct people to a second repository, configured so that everything pushed to it is immediately pushed on to the production repository. This repo need not have a checked-out version of the files at all (create it with hg clone -U, or do an hg update null afterwards to empty the working directory), eliminating the temptation to bypass Mercurial.
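A sketch of that relay configuration (names and paths are hypothetical): in the second repository's .hg/hgrc, forward everything that arrives straight to the production repo:

[paths]
production = /srv/production-repo

[hooks]
changegroup = hg push production

Note that the UNC/cmd.exe caveat from the question's edit may still apply if the hook runs on Windows.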
What prevents you from mounting your Samba share and running hg init there? You don't need a Mercurial server (hg serve or anything more sophisticated) to perform push/pull operations.
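For example (the mount point is an example; on Windows, substitute the share's drive letter or UNC path):

hg init /mnt/share/webfiles               # one-time: create the repo on the share
hg clone /mnt/share/webfiles ~/webfiles   # each user works on a local copy
# ...edit, hg commit...
hg push                                   # defaults back to the clone source, i.e. the share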
I have a remote hg repository hosted on googlecode. Thus I don't have admin access to run e.g. lfconvert on it (as far as I know), and of course lfconvert can only be used on local repositories.
So, is there any way to convert a googlecode hg repository to a largefiles repository?
(One idea is to convert a local clone of the repo to a largefiles repo and then push the changes to the "central" googlecode repo, but I fear trying that without knowing whether it is a valid approach.)
Using your idea to do a local conversion and push, you can take advantage of the 'reset' feature for your repositories:
Do a local clone.
Convert to largefiles: `hg lfconvert normal_repo largefiles_repo`. Do NOT delete the original clone until you are sure everything works.
Reset the hosted repository (See https://code.google.com/p/support/wiki/MercurialFAQ#Mercurial_FAQ).
Push the largefiles repository.
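Concretely, steps 1, 2 and 4 might look like this (the URL and the size threshold are examples; lfconvert is part of the largefiles extension, which must be enabled in your hgrc):

hg clone https://code.google.com/p/yourproject yourproject-orig
hg lfconvert --size 10 yourproject-orig yourproject-lf   # files over 10 MB become largefiles
# ...reset the hosted repository via the admin page (step 3), then:
cd yourproject-lf
hg push https://code.google.com/p/yourproject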
Pushing the largefiles repository without resetting seems problematic, because the largefiles repository is essentially a fork of the original one, starting at the point the first largefile was committed.
If the push fails*, you can push the original clone and you'll be back where you started without any data loss. (One of the many advantages of DVCS. :-))
The big downside of course is that everybody who has ever cloned your project will now be working from a different fork of the repository. This is always a danger when you do anything involving changing history and is the motivation for Mercurial phases. If you want to be 'kinder', you can start a second project for the largefiles version and place a link at the original project site describing the move.
[*] I can't figure out from Google Code's documentation whether the largefiles extension is supported. There is a reviewed feature request, but I couldn't find any mention of the request actually being implemented. The push failing would probably be a good indication that largefiles isn't supported though...
I am developing a web database that is already in use for about a dozen separate installations, most of which I also manage. Each installation has a fair bit of local configuration and customization. Having just switched to mercurial from svn, I would like to take advantage of its distributed nature to keep track of local modifications. I have set up each installed server as its own repo (and configured apache not to serve the .hg directories).
My difficulty is that the development tree also contains local configuration, and I want to avoid placing every bit of it in an unversioned config file. So, how do I set things up to avoid propagating local configuration to the master repo and to the installed copies?
Example: I have a long config.ini file that should be versioned and distributed. The "clean" version contains placeholders for the database connection parameters, and I don't want the development server's passwords to end up in the repositories for the installed copies. But now and then I'll make changes (e.g., new defaults) that I do need to propagate. There are several files in a similar situation.
The best I could work out so far involves installing mq and turning the local modifications into a patch (two patches, actually, with logically separate changesets). Every time I want to commit a regular changeset to the local repo, I need to pop all patches, commit the modifications, and re-apply the patches. When I'm ready to push to the master repo, I must again pop the patches, push, and re-apply them. This is all convoluted and error-prone.
The only other alternative I can see is to forget about push and only propagate changesets as patches, which seems like an even worse solution. Can someone suggest a better set-up? I can't imagine that this is such an unusual configuration, but I haven't found anything about it.
Edit: After following up on the suggestions here, I'm coming to the conclusion that named branches plus rebase provide a simple and workable solution. I've added a description in the form of my own answer. Please take a look.
From your comments, it looks like you are already familiar with the best practice for dealing with this: version a configuration template, and keep the actual configuration unversioned.
But since you aren't happy with that solution, here is another one you can try:
Mercurial 2.1 introduced the concept of phases. The phase is changeset metadata that marks a changeset as "secret", "draft" or "public". Normally this metadata is used and manipulated automatically by Mercurial and its extensions without the user needing to be aware of it.
However, if you made a changeset 1234 which you never want to push to other repositories, you can enforce this by manually marking it as secret like this:
hg phase --force --secret -r 1234
If you then try to push to another repository, it will be ignored with this warning:
pushing to http://example.com/some/other/repository
searching for changes
no changes found (ignored 1 secret changesets)
This solution allows you to
version the local configuration changes
prevent those changes from being pushed accidentally
merge your local changes with other changes which you pull in
The big downside is of course that you cannot push changes which you made on top of this secret changeset (because that would push the secret changeset along). You'll have to rebase any such changes before you can push them.
If the problem with a versioned template and an unversioned local copy is that changes to the template don't make it into the local copies, how about modifying your app to use an unversioned localconfig.ini and fall back to a versioned config.ini for missing parameters? This way new default parameters can be added to config.ini and be propagated into your app.
Having followed up on the suggestions here, I came to the conclusion that named branches plus rebase provide a simple and reliable solution. I've been using the following method for some time now and it works very well. Basically, the history around the local changes is separated into named branches which can be easily rearranged with rebase.
I use a branch local for configuration information. When all my repos support Phases, I'll mark the local branch secret; but the method works without it. local depends on default, but default does not depend on local so it can be pushed independently (with hg push -r default). Here's how it works:
Suppose the main line of development is in the default branch. (You could have more branches; this is for concreteness). There is a master (stable) repo that does not contain passwords etc.:
---o--o--o (default)
In each deployed (non-development) clone, I create a branch local and commit all local state to it.
...o--o--o (default)
\
L--L (local)
Updates from upstream will always be in default. Whenever I pull updates, I merge them into local (n is a sequence of new updates):
...o--o--o--n--n (default)
\ \
L--L--N (local)
The local branch tracks the evolution of default, and I can still return to old configurations if something goes wrong.
On the development server, I start with the same set-up: a local branch with config settings as above. This will never be pushed. But at the tip of local I create a third branch, dev. This is where new development happens.
...o--o (default)
\
L--L (local)
\
d--d--d (dev)
When I am ready to publish some features to the main repository, I first rebase the entire dev branch onto the tip of default:
hg rebase --source "min(branch('dev'))" --dest default --detach
The previous tree becomes:
...o--o--d--d--d (default)
\
L--L (local)
The rebased changesets now belong to branch default. (With feature branches, add --keepbranches to the rebase command to retain the branch name). The new features no longer have any ancestors in local, and I can publish them with push -r default without dragging along the local revisions. (Never merge from local into default; only the other way around). If you forget to say -r default when pushing, no problem: Your push gets rejected since it would add a new head.
On the development server, I merge the rebased revs into local as if I'd just pulled them:
...o--o--d--d--d (default)
\ \
L--L-----N (local)
I can now create a new dev branch on top of local, and continue development.
This has the benefits that I can develop on a version-controlled, configured setup; that I don't need to mess with patches; that previous configuration stages remain in the history (if my webserver stops working after an update, I can update back to a configured version); and that I only rebase once, when I'm ready to publish changes. The rebasing and subsequent merge might lead to conflicts if a revision conflicts with local configuration changes; but if that's going to happen, it's better if they occur when merge facilities can help resolve them.
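Putting the publish step together, the sequence is roughly this (a sketch; it assumes the rebase extension is enabled, and the commit message is an example):

hg rebase --source "min(branch('dev'))" --dest default --detach
hg push -r default
hg update local
hg merge default
hg commit -m "merge published work back into local"
hg branch --force dev    # reopen dev for the next round of work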
1 Mercurial has (follow-up to the comments) selective, hunk-based commits - see the Record Extension
2 Local changes inside versioned public files can easily be handled with the MQ Extension (I do it for site configs all the time). Your headache with MQ
Every time I want to commit a regular changeset to the local repo, I
need to pop all patches, commit the modifications, and re-apply the
patches. When I'm ready to push to the master repo, I must again pop
the patches, push, and re-apply them.
is a result of an unpolished workflow and (some) misinterpretation. If you want to commit without MQ patches, don't do it by hand. Add an alias for commit which runs qpop --all + commit, and use only this new command. And when you push, you don't need to worry about the MQ state - you push changesets from the repo, not the working-copy state. The local repo can also be protected without an alias by a pre-commit hook that checks content.
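For example, a shell alias along these lines in your hgrc (a sketch; the alias name cci is made up, and it assumes the MQ extension is enabled):

[alias]
# note: qpop exits non-zero when no patches are applied; adjust to taste
cci = !hg qpop --all && hg commit "$@" && hg qpush --all

Then hg cci -m "message" pops all patches, commits, and re-applies the patches in one step, so you can never forget the qpop.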
3 You can try the LocalBranches extension, where your local changes are stored inside local branches (merging branches on changes) - I found this way more troublesome compared to MQ
I want to do the equivalent of svn export REMOTE_URL with a mercurial repository. What I want at the end is an unversioned snapshot of the repository at the remote URL, but without cloning all of the changesets over to my local machine.
Also, I want to be able to specify a tag in the remote repository to pick this from. If it's not obvious, I'm building a release management tool that pulls from a canonical mercurial repository to build a release file, and it's slow right now because some projects have large, multiple-version binary files committed.
Is this possible? How would one go about it?
It's usually easier (if the remote HG is using the hgweb interface) to just visit the repo in your browser and download a .tgz / .zip / .bz2 of the tip revision. You'll see the links if the remote HG supports this.
If you want the repository, you need all of the revisions that went into the current tip for it to be at all functional.
There are options to hg clone that allow you to fetch a repository up to a certain revision, but none (that I could find) that allow you to get just the tip revision. What you are essentially asking for is a snapshot of the repo.
Edit: To Get A Snapshot
hg clone http[s]://url.to.repo repo.hg
cd repo.hg
hg archive ../repo-snapshot
cd ..
rm -rf repo.hg
The snapshot is now in repo-snapshot.
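Since the question asks for a specific tag, note that hg archive also takes a revision, so the archive step can become (the tag name is an example):

hg archive -r v1.2 ../repo-v1.2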
Yes, this does entail cloning the repo first, which is why I suggested seeing if the remote hgweb supports on the fly downloads of any particular revision. If it does, your problem is solved with something like curl or wget instead of HG.
If not, it's good to let the original repo 'live', since you can update it again later via hg pull and then create another snapshot of a future release. This saves having to start over from scratch when cloning, especially for large repositories with lots of changes.
Also, this is Linux-centric, but you get the gist. Of course, replace http[s] with the desired protocol as needed.
Is there any reason you can't maintain a mirror (updated in the background however often you want) of the remote repository on your local machine, then have the release management tool on your local machine run hg archive out of the local clone as necessary? If your concern is user-responsiveness, and not total bandwidth/storage consumed, this offsets the "slow" part to where you won't see it.
Tim Post noted that if you do have the hgweb CGI interface available, you can configure it to offer compressed archives for download (and the interface is consistent enough that you could script that via wget), but if you don't, core Mercurial doesn't have a lot of tools to help you, and the developers have expressed an opposition to trying to turn Mercurial into a general rsync-type client.
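For that hgweb case, the script can be as small as this (the host and tag are examples; the archive URL pattern is hgweb's usual convention, but check it against your server):

wget https://hg.example.org/repo/archive/v1.2.tar.gz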
If you aren't afraid of playing with unofficial add-ons, you could have a look at the FTP Extension. That will force you to push from the server, however.