I'm not sure if this is a problem with Mercurial or TortoiseHG (both version 3.6.1) but I've got an issue where revisions pulled from our RhodeCode server have recently started being given "draft" phase on one particular development machine.
My own machine pulls revisions and correctly marks them "public", but my colleague's always arrive as "draft". We are unaware of any recent changes to his configuration, and we've compared the mercurial.ini and hgrc files between our machines.
I'm still fairly clueless when it comes to Mercurial (despite having used it for the last 4 years), so I'm struggling to understand exactly what's happening.
Is there any particular setting (in either Mercurial or TortoiseHg) that would mean a revision pulled by me gets the local phase "public", but the same revision pulled by my colleague gets "draft"?
There is a setting to check:
Sometimes it may be desirable to push and pull changesets in the draft phase to share unfinished work. This can be done by setting a repository to disable publishing in its configuration file:
[phases]
publish = False
Ref: https://www.selenic.com/mercurial/hg.1.html#phases
This must be related to RhodeCode's phases control. For versions < 3.7, please check with your super admin whether the global "set repositories as publishing" option has been unset; in 3.7+ each repository has its own phases control under Settings -> VCS.
Related
How can I hinder mercurial from putting changesets to phase “public” on push operations? I want them to stay “draft”.
I rebase and histedit a lot, and the repository I push to is for me only. And having to change the phase all the time is a nuisance.
What the documentation does not clearly reveal is:
The phase change on push is not a purely local decision. After uploading the changesets, the client asks the server for the current phases of those commits, and the server usually replies that they are now "public".
Thus, the .hgrc-snippet
[phases]
publish = False
has to be put on the server, which inhibits the usual phase change there. The server will then report the phases back unchanged from how they were pushed.
Bitbucket has an option for this under Settings → Repository details → Phases.
The most direct way to keep the phase at draft is to configure the remote server as "non-publishing", as you have already discovered.
But there is a second way, which may be useful to some if the destination server cannot be set to "non-publishing" for any reason: Use pull instead of push. Pulling is read-only, so if you can set up your workflow (e.g. through a local alias) so that the remote pulls changes from your local repo, they'll remain in phase draft.
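As a sketch of that pull-based workflow (the host name and paths here are assumptions, not from the question), a shell alias in your local hgrc could ask the remote machine, over ssh, to pull from your repository:

```ini
[alias]
# hypothetical: "hg reverse-push" tells the remote repo to pull from us;
# pulling is read-only on our side, so our changesets stay in phase draft
reverse-push = !ssh buildhost "hg -R /srv/repo pull ssh://mydesktop//home/me/repo"
```

Since the remote initiates only a pull, it never reports the changesets back as "public".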
https://www.mercurial-scm.org/wiki/Phases
A repository is "publishing" by default. To make a repository non-publishing, add these lines to its hgrc configuration:
[phases]
publish = False
Short answer: you can't.
If you want to rewrite history both locally and on the push target, you have to enable (on both sides), understand, and use the Evolve extension (still experimental).
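For reference, a minimal hgrc snippet to enable it (assuming the evolve package is installed, and it must go on both client and server) could look like:

```ini
[extensions]
# enables changeset evolution: obsolescence markers, hg evolve, safe history
# rewriting that propagates between repositories
evolve =
```

With this in place on both sides, rewritten changesets are exchanged as obsolescence markers instead of being duplicated or re-published.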
I have a mercurial repository at c:\Dropbox\code. I've created a clone of this repo locally using:
hg clone -U c:\Dropbox\code c:\GoogleDrive\codeBackup
This bare repo serves as a backup only. I regularly push changes to codeBackup. Furthermore, both directories are backed up in the cloud (Dropbox and Google Drive respectively).
If my repo in code becomes corrupt, would the codeBackup repo automatically be corrupt too, since the clone operation used hard links to the original repo? That would make my double-cloud-backup strategy useless.
P.S.: I understand that the fallback option is to use the cloud service to restore a previous known-good state.
UPDATE : After digging around, I'll add these for reference
Discussion on repo corruption in mercurial
The problem is, if an 'hg clone' was done (without the --pull option), then the destination and the source repo share files inside .hg/store by using hardlinks, if the filesystem provides the hardlinking feature (NTFS does).
Mercurial is designed to break such hardlinks inside .hg if a commit or push is done to one of the clones. The prerequisite is that the Windows API Mercurial uses gives a correct answer when asked "how many hardlinks does this file have?".
We found out that this answer is almost always wrong (it always reports 1, even if the true count is greater) when the hg process runs on one Windows computer and the repository files are on a network share on a different Windows computer.
To avoid hardlinks (use --pull):
hg clone -U --pull c:\Dropbox\code c:\GoogleDrive\codeBackup
To check for hardlinks:
fsutil hardlink list <file> : shows all hardlinks for <file> (Windows)
find . -links +1 : shows all files with more than one hardlink
ls -l : shows the hardlink count next to each file
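The link count those commands report can also be inspected programmatically. A small Python sketch (run in a temporary directory; file names are illustrative) shows the count going to 2 once a second name points at the same file:

```python
import os
import tempfile

# create a file, then add a second hard link (a second directory entry
# for the same inode), and inspect the st_nlink count
d = tempfile.mkdtemp()
src = os.path.join(d, "a.txt")
with open(src, "w") as f:
    f.write("data")

os.link(src, os.path.join(d, "b.txt"))  # second name, same underlying file
print(os.stat(src).st_nlink)  # prints 2 on filesystems that support hardlinks
```

This is essentially what `fsutil hardlink list` and `find . -links +1` are surfacing: both names resolve to the same on-disk data.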
The biggest problem here, regarding repository corruption, is that you're using Dropbox and Google Drive to synchronize repositories across machines.
Don't do that!
This will surely lead to repository corruption unless you can guarantee that:
Your machines will never lose internet connection
You will never have new changes unsynchronized on more than one machine at a time (including times where you have had internet problems)
That Dropbox will always run (variant of never lose internet connection)
You're not just plain unlucky regarding timing
To verify that Dropbox can easily lead to repository corruption, do the following:
Navigate to a folder inside your Dropbox or Google Drive folder and create a Mercurial repository here. Do this on one machine, let's call this machine A.
Add 3 text files to it, with some content (not empty), and commit those 3 text files.
Wait for Dropbox/Google Drive to synchronize all those files onto your second computer, let's call this machine B
Either disconnect the internet on one of the machines, or stop Dropbox/Google Drive on it (doesn't matter which one)
On Machine A, change file 1 and 2, by adding or modifying content in them. On Machine B, change file 2 and 3, making sure to add/modify in some different content from what you did on machine A. Commit all the changes on both machines.
Reconnect to the internet or restart Dropbox/Google Drive, depending on what you did in step 4
Wait for synchronization to complete (Dropbox will show a green checkmark on its tray icon; I'm unsure what Google Drive displays)
Run hg verify in the repositories on both machine A and B
Notice that they are now both corrupt:
D:\Dropbox\Temp\repotest>hg verify
checking changesets
checking manifests
crosschecking files in changesets and manifests
checking files
3.txt#?: rev 1 points to unexpected changeset 1
(expected 0)
3.txt#?: 89ab3388d4d1 not in manifests
3 files, 2 changesets, 6 total revisions
1 warnings encountered!
2 integrity errors encountered!
Instead, get a free Bitbucket or Kiln account and push and pull through it to synchronize across multiple computers.
The only way your code repository can become corrupt (assuming it was not corrupt when you initially cloned it to codeBackup) is when you write something to it, be it committing, rewriting history, etc. Whenever something gets written to a hard-linked file, Mercurial first breaks the hard link, creates an independent copy of the file, and then modifies only that newly created copy.
So to answer your questions: under normal usage scenarios repository corruption will not propagate to your codeBackup repository.
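That break-before-write behaviour can be sketched in a few lines of Python (this imitates the idea; it is not Mercurial's actual code): write the new content to a temporary file and rename it over the original, which leaves the other link pointing at the old data:

```python
import os
import tempfile

d = tempfile.mkdtemp()
a = os.path.join(d, "a.txt")
b = os.path.join(d, "b.txt")
with open(a, "w") as f:
    f.write("v1")
os.link(a, b)                    # a and b now share one inode (nlink == 2)

# "break" the hardlink before writing, instead of modifying a in place:
tmp = a + ".tmp"
with open(tmp, "w") as f:
    f.write("v2")
os.replace(tmp, a)               # a now names a brand-new, independent file

print(os.stat(b).st_nlink)       # prints 1: b no longer shares storage
with open(b) as f:
    print(f.read())              # prints v1: the "backup" copy is untouched
```

Had the code opened `a` and written into it directly, the new bytes would have appeared under `b` as well, which is exactly the corruption-propagation scenario the question worries about.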
I need to control the version of a few files accessible via an SMB share. These files will be modified by several people. The files themselves are directly used by a web server.
Since these are production files I wanted to force the users to pull a local copy, edit them, commit and push them back. Unfortunately there is no Mercurial server on that machine.
What would be the appropriate way to configure Mercurial on my side so that:
the versioning (.hg directory) is kept on the share
and that the files on the share are at the latest version?
I do not have access to this server (other than via the share). If I could have a mercurial server on that machine I would have used a hook to update the files in the production directory (I am saying this just to highlight what I want to achieve - this approach is not possible as I do not control that server)
Thanks!
UPDATE: I ended up using an intermediate server (which I have control over). A hook on changegroup triggers a script which i) hg update to have fresh local files ii) copies them to the SMB share
EDIT 1: Following discussions in the comments with alex, I have looked at the verbose output of the command. The \\srv\hg\test1 repo has a [hooks] section with changegroup = hg update. The output of hg push -v gives some insight:
pushing to \\srv\hg\test1
query 1; heads
(...)
updating the branch cache
running hook changegroup: hg update
'\\srv\hg\test1'
CMD.EXE was started with the above path as the current directory.
UNC paths are not supported. Defaulting to Windows directory.
abort: no repository found in 'C:\Windows' (.hg not found)!
warning: changegroup hook exited with status 255
checking for updated bookmarks
listing keys for "bookmarks"
If I understand correctly the output above:
a cmd.exe was started on the client, even though the [hooks] section is on the receiving repository
it tried to update the remote repo
... but failed because UNC are not supported
So alex's answer was correct; it just does not work (yet?) on MS Windows. (Alex, please correct me in the comments if I am wrong.)
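One possible workaround (untested here; the UNC path is the one from the output above) is to stop the hook from depending on CMD.EXE's current directory by naming the repository explicitly:

```ini
[hooks]
# hypothetical workaround: pass the repo path with -R so hg does not
# rely on the current directory, which CMD.EXE cannot set to a UNC path
changegroup = hg update -R \\srv\hg\test1
```

hg itself can generally open UNC paths; the failure in the output came from CMD.EXE falling back to C:\Windows as the working directory.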
If I understood correctly, you are looking for two things:
A repository hook that will automatically update the production repo to the latest version whenever someone pushes to it. This is simple: You're looking for the answer to this question.
If you can rely on your co-workers to always go through the pull-commit-push process, you're done. If that's not the case, you need a way to prevent people from modifying the production files in place and never committing them.
Unfortunately, I don't think you can selectively withhold write permissions to the checked-out files (but not to the repo) on an SMB share. But you could discourage direct modification by making the location of the files less obvious. Perhaps you could direct people to a second repository, configured so that everything pushed to it is immediately pushed on to the production repository. This repo need not have a checked-out version of the files at all (create it with hg clone -U, or do an hg update -r 0 afterwards), eliminating the temptation to bypass mercurial.
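A sketch of that relay repository's hgrc (the production path is an assumption for illustration):

```ini
[hooks]
# hypothetical: forward everything pushed to this repo straight on to
# the production repository
changegroup = hg push \\srv\production\repo
```

Since the relay repo was created with hg clone -U, there is no working copy here for anyone to edit directly.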
What prevents you from mounting your Samba share and running hg init there? You don't need a Mercurial server (hg serve or more sophisticated setups) to perform push/pull operations.
I have a remote hg repository hosted on googlecode. Thus I don't have admin access to run e.g. lfconvert on it (as far as I know), and of course lfconvert can only be used on local repositories.
So, is there any way to convert a Google Code hg repository to a largefiles repository?
(One idea is to convert a local clone of the repo to a largefiles repo and then push the changes to the "central" Google Code repo, but I fear trying that without knowing whether it is a valid approach.)
Using your idea to do a local conversion and push, you can take advantage of the 'reset' feature for your repositories:
Do a local clone.
Convert to largefiles: hg lfconvert normal_repo largefiles_repo. Do NOT delete the original clone until you are sure everything works.
Reset the hosted repository (See https://code.google.com/p/support/wiki/MercurialFAQ#Mercurial_FAQ).
Push the largefiles repository.
Pushing the largefiles repository without resetting seems problematic, because the largefiles repository is essentially a fork of the original one, starting at the point the first largefile was committed.
If the push fails*, you can push the original clone and you'll be back where you started without any data loss. (One of the many advantages of DVCS. :-))
The big downside, of course, is that everybody who has ever cloned your project will now be working from a different fork of the repository. This is always a danger when you change history, and it is the motivation for Mercurial phases. If you want to be kinder, you can start a second project for the largefiles version and place a link at the original project site describing the move.
[*] I can't figure out from Google Code's documentation whether the largefiles extension is supported. There is a reviewed feature request, but I couldn't find any mention of the request actually being implemented. The push failing would probably be a good indication that largefiles isn't supported though...
I’m new to Mercurial, about 2 months now. We are using it on a new project and tried to create a new repo, a clone of the trunk, to be used as a release “branch”.
We use a central repo; everyone pulls/pushes to/from it over https using hgwebdir.cgi. The server runs hg 1.5.4 and the “clients” various versions, 1.5.2 -> 1.6.3.
Everything was OK and the clone was good (hg verify after clone); the only problem is that this repo very soon got corrupted (errors like "empty or missing" and "in manifests not found").
The main repo is ok, only this release get broken very soon.
The names of the repos are (folder names and published names, all reside in the same root folder):
A.B – for the trunk
A.B.Release – for the release repo
(I read something in the docs which sounded like this might be an issue - see
One other very strange thing is that checkins made only to trunk (A.B) are seen as available on the release branch, and they are displayed as errors on verify ("in manifests not found"). I don’t understand how these got there.
Any clues?
It's not an answer, but I'll state that what you're doing is definitely supposed to work. Making sure the wire-protocol has full backwards compatibility is very important to the Mercurial folks.
The "cross-talk" between your two repos is very concerning and shouldn't happen unless someone erroneously used the share extension.
What if you try creating the A.B.Release clone by using clone --pull rather than clone by itself?