Setting up a Mercurial mirror

Can anybody tell me how to set up a mirror of a Mercurial repository? I have a Mercurial repo on my laptop, but I want to auto-mirror the repo on a NAS drive as a form of backup. Ideally, the solution would check a known location for a repo, create one if it doesn't exist, and from then on mirror any changes.
Another thing to bear in mind is that the NAS may not always be available, so I would need to accommodate this in some way.

I did something similar with git, but all the functionality you need is in Mercurial too.
I manually created a clone on some server (in my case a VPS somewhere on the net, in case my house burns down with NAS and laptops in it).
With git you can create a "bare" repository, i.e. one without a working copy checked out.
Then I regularly push to it.
This can be automated using 'hooks'; more info here.
The trick is to get the heavy lifting off the commit hook (pun intended) so that the syncing is not part of your workflow. Run your push script using the 'at' command, a couple of minutes in the future; then it runs asynchronously in the background. I wouldn't get fancy here; just try to handle failures gracefully.
You now have a setup which will keep the backup synced within a couple of minutes.
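A minimal sketch of such a push script, assuming the NAS is mounted at /mnt/nas (the paths and the script name are illustrative, not from the answer above):

#!/bin/sh
# push-to-mirror.sh -- hypothetical backup script; adjust REPO and NAS to taste
REPO=~/work/myrepo
NAS=/mnt/nas/backup/myrepo

cd "$REPO" || exit 1
if [ -d "$NAS/.hg" ]; then
    # mirror already exists: push any new changesets; hg push exits
    # non-zero when there is nothing to push, so don't treat that as fatal
    hg push "$NAS" || true
elif [ -d "$(dirname "$NAS")" ]; then
    # NAS is mounted but there is no mirror yet: create one,
    # -U means no working copy is checked out (like a bare git repo)
    hg clone -U . "$NAS"
else
    # NAS not mounted/available right now: skip silently, try again next time
    exit 0
fi

Scheduling this via 'at' from a commit hook (or simply from cron) keeps it out of your workflow, and the directory checks let an absent NAS pass silently instead of failing.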

Mercurial gives you the freedom to do that however you would like. If you wanted, you could just set up a process to copy the repo from your local machine to the NAS at a regular interval. Everything about the repo is stored in its directory, and everything in the directory is just a file.
However, it sounds to me like you want to set up something more akin to a centralized version control system like Subversion. I do something like this with one of my projects (actually, I moved it from SVN to Mercurial, but that's a different answer).
I have a repository on xp-dev.com and my local repository on my computer. I do all of the work I want to do in my local repository, issuing hg com very frequently. When I am done for the day/night I do a hg push ssh://hg2.xp-dev.com/myrepo to send all of my local changes to the remote server.
So really, all you want to do is an hg push to put your local repo on your NAS, and then remember to do it again on a regular basis.
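If remembering is the weak link, the regular interval can be handled by cron; a possible crontab entry (paths and schedule are illustrative):

# push to the NAS mirror at the top of every hour
0 * * * * cd ~/myrepo && hg push /mnt/nas/myrepo

hg push exits non-zero when there is nothing new to push, which is harmless here; append '|| true' if you want to keep cron from mailing you about it.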

Related

Mercurial - how to make a local central repository

I would like to have a central repository in a directory on my local computer without setting up a server.
Context: I am working with my boss on a local server inside our LAN. We both use a VNC connection onto the server and do our work there for simplicity. I would like to set it up so that I can have a copy of my scripts for development, and then, when I get to a release, push it to a different directory that my boss can run them from (or, even better, pull from into his own set and run from there).
I read that you can create an hg server by running 'hg serve', but I do not want to open it up to the LAN.
I tried running 'hg push /home/source' and it gave me an error.
I then ran 'hg init' while in that directory and tried again. It looked like it worked, but then it didn't show any files in the directory. I ran status and it showed nothing, then ran log and it showed the commits.
... without setting up a server
One way I've used to share a "central" Mercurial repository without having to deal with any "server" issues is to have the "central" repository in a folder on Dropbox.
For example, suppose:
your repository is named "repo" and that your "private" copy is in ~/repo
your Dropbox directory on your computer is ~/Dropbox/
Then:
cd ~/Dropbox
hg clone ~/repo
Now suppose you make some changes in ~/repo. You can then "push" them from ~/repo to ~/Dropbox/repo, or (more easily, as explained below) "pull" them into ~/Dropbox/repo when you're ready.
To make updating the "central" repository convenient, you might like to create a script such as:
#!/bin/bash
# refresh the "central" repository in Dropbox from its source
cd ~/Dropbox/repo
hg tip       # show the tip before updating
hg pull -u   # pull from the default source and update the working copy
hg tip       # show the tip after updating
Notice that in the script there is no need to specify the source from which to pull; the .hg/hgrc file that was created when you made the clone keeps track of that. (Thank you, hg.)
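For reference, the .hg/hgrc that hg clone writes will contain something along these lines (the path is whatever you cloned from; /home/you/repo is just a stand-in):

[paths]
default = /home/you/repo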
If your colleague has direct access to a folder on your computer, then you could still adopt the strategy described above, without using Dropbox.
Needless to say, there are many variations.
Needless to say also, if more than one person attempts to commit changes to the shared folder, chaos can easily ensue.

Strip a Mercurial branch on the server side

We have a SOLUTION folder (a Mercurial repository) in which we have a PROJECT folder, which is also a Mercurial repository.
So, two repositories: one is the root (solution) folder, and the other is a subfolder of the root folder (the project). (Yes, strange, but that's how it is...)
Everything worked, but one day someone somehow included the SOLUTION branch in the PROJECT repository... So all the history from the Solution branch was included in parallel with the Project branch in the PROJECT repository...
Now there is a bit of a mess in the PROJECT repository, and it needs to be cleaned up...
Locally it worked by applying hg strip XXS (where XXS was the revision number of the very first node from the freshly added Solution branch in the Project repository).
But it seems there is no strip equivalent on the server?!
Every time we pull incoming changes into the Project repository, the "Solution" branch will be re-imported...
Is there a way to manage it on the server side?
Of course the same solution would also work on the server; you just need login access to the server itself to execute the same local history operation there. But with the default setup (a publishing server), a push will never remove changesets which are present on the remote: when you history-edit your local repository, the changes will not all propagate; only additions to the graph will, never deletions.
If such changes to the remote server are expected to be pushed, and this is a regular thing, you might want to look into the use of phases and how to set up a non-publishing server, i.e. a server with mutable history: Phases#Publishing_Repository.
Mind that such a workflow also means that every single one of the people with push privileges has to change their default phase to 'draft' instead of 'public' - at least for that project.
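On the server side, making the repository non-publishing boils down to one setting in the served repo's hgrc (this is standard Mercurial configuration):

[phases]
publish = False

With this in place, changesets pushed to the server stay in the draft phase instead of becoming public, which is what allows history edits to propagate.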
Kill the server repo and start a fresh one; then, from local:
hg push -r XXR
where XXR is the last rev you want to keep.
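Spelled out, the sequence might look like this (server path and host are illustrative):

# on the server: replace the repo with an empty one
cd /srv/hg
rm -rf myrepo
hg init myrepo

# from the cleaned local clone: push only the history you want to keep
hg push -r XXR ssh://server//srv/hg/myrepo

hg push -r pushes the named revision together with all of its ancestors, so everything up to and including XXR ends up on the server and nothing after it.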

Push to a configured hg repository from the web interface

I have a small group of developers and we all develop on our own machines. When we have code that is ready for testing, we merge and push to a RhodeCode installation. The hgrc file for my central RhodeCode repo is set up like this:
[paths]
test_env = /www/mysite/test
prod_env = /www/mysite/prod
[hooks]
changegroup = hg push test_env
so when a person checks code into RhodeCode, the changes are automatically pushed to the test environment. (There's an hg update hook in the test repo's hgrc file, so the code updates there.) This is perfect.
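For completeness, the hook on the test repo's side is presumably the matching one-liner:

[hooks]
changegroup = hg update

so that every push into the test repo also updates its working copy.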
BUT... I would like our RhodeCode admins to be able to push to prod without needing shell access on the server. Is there a way to allow someone to run a "hg push prod_env" from the RhodeCode interface? I figure since RhodeCode has full control over hg, it should be possible, but does this ability exist somewhere in RhodeCode? Would it be a huge task to add it?
If not, how would you go about allowing an authenticated user to push a repository to production without shell access? I have been googling, but I can't seem to find anything. I know I could write a PHP script with a passthru("hg push test_env"), but that seems like a permissions nightmare, as Apache runs as "nobody" and RhodeCode owns the repo.
Thoughts?
Obviously, you cannot push when there is nothing new to push. But you can try to add or edit some file from the RhodeCode interface (which allows you to do this) in prod_env. This should cause a local commit and a push without accessing a shell.
For those looking at this question, here's how I solved it:
I wrote a password-protected page in PHP with a button that executes this code:
shell_exec('hg pull -R ../wp-content/themes/2014');
I then put hg update in the hgrc file for the prod website, and made the web user an authorized user of the repository.
It works pretty well - I have slight security concerns because of the resulting file ownership, but assuming the PHP follows proper practice, there aren't any problems.

Mercurial repository on FTP

I wonder if it's possible to create a Mercurial repository in some FTP folder with read/write (RW) access and serve it to clients from there. Has anyone done a thing like that?
Thank you in advance.
Just for the sake of completeness, because I had the same problem and feel that there is another, much simpler solution:
Mercurial cloning on local folders "just works", so if you mount the FTP as a local folder or drive, you can just push/pull/clone to that (and have your repository end up on the FTP server).
On Windows, you can use e.g. FTPUse or NetDrive to have your FTP folder mounted as a local drive; the former is free but a CLI tool which removes the virtual drives when the program is closed, while the latter has a GUI but is only free for personal use and doesn't work (yet) on Win8. I don't have a Linux machine at hand right now, but you should be able to achieve the same using ftpfs.
Once you've done that (and your FTP server is now mapped, e.g., to f:), you can simply use that virtual drive (or any subfolder) as a remote target for your Mercurial operations. Works like a charm for me.
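On Linux, the same idea might look like this with curlftpfs, one FUSE-based ftpfs implementation (host, credentials, and paths are all illustrative):

# mount the FTP folder as a local directory
mkdir -p ~/ftp-mount
curlftpfs ftp://user:password@ftp.example.com ~/ftp-mount

# then plain local-path Mercurial operations work against it
hg clone ~/repo ~/ftp-mount/repo   # one-time setup
hg push ~/ftp-mount/repo           # later syncs, run from inside ~/repo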
All things are possible. But that would be hard.
The bit where the network transport matters is when cloning a repository, and the standard ways of doing that depend on either serving over HTTP, or having SSH access to the repository host. There's no FTP-based transport for cloning as far as I can see.
If that's the only sharing mechanism you have available, then you could probably work something out using Mercurial bundles. The procedure would be something like the following:
Commit your edits to a local repository
Make a bundle using hg bundle --all my-bundle.hg
FTP my-bundle.hg to the server
The other users of the repository can then use FTP to retrieve the my-bundle.hg file to their local machine, go to their local copy of the repository, and then hg pull my-bundle.hg to pull in any revisions which are in the bundle but not in the local repository. When they want to share their changes, they make a fresh bundle as above and upload that back to the server via FTP. The --all option puts all of the changesets into the bundle file - you can be cleverer and only export 'recent' changes, but that gets a little more complicated and risks losing changesets: using --all is brutal but fail-safe.
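A concrete round trip could look like this (filenames are illustrative, and the FTP transfer itself is left to whatever client you use):

# publisher: bundle every changeset and upload the file
hg bundle --all my-bundle.hg
# ... upload my-bundle.hg to the server via FTP ...

# consumer: download my-bundle.hg, then from inside the local clone:
hg pull my-bundle.hg   # pulls only the changesets not already present
hg update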
There's obviously a fair amount of scope for confusion here, and race conditions (timestamped filenames might help), and hair-pulling-out, and your users would doubtless appreciate some scripts to make this easier, but if all you've got available is an FTP server, you don't have very many options.
Good luck.
This question on SuperUser might be interesting. The core idea seems to revolve around running a background process that synchronizes a local folder with a remote FTP folder, which might be of use to you.
But I don't know what happens when more than one user tries to synchronize at the same time, since this approach bypasses all the protection Mercurial has regarding locking the tree and such.

How to best set up Mercurial on a ClearCase static view? (Set up "checkout" hooks?)

I'd like to set up a Mercurial repository in a ClearCase static view directory. My plan is to clone from that directory, do all my real work in a Mercurial repo, and then push my changes back to the shared Hg/ClearCase dir.
I'd like to hear general suggestions on how this might work best, but I foresee one specific problem: ClearCase locks files as read-only until they are checked out. The way I'd like it to work is to set up a Mercurial hook that checks out each file before the push is completed and rolls back the push if the checkout doesn't work.
Should I be looking at the pretxncommit hook? Or the pull hook? Also, I'm not quite clear on how to write the actual hooks either. I know the ClearCase command, but I'm not sure how to construct the hook to pass in the filename for each file in the changeset.
Suggestions?
The question I answered just two days ago, How to bridge git to ClearCase?, can give you an illustration of the process.
I like to keep the ClearCase checkout/checkin step separate from the DVCS work:
I unlock files as I need them within the DVCS repo (made directly within the snapshot view), and then update the snapshot view, which tells me the "hijacked" files (which I can then easily check out and check in through the cleartool update GUI).
But if you have cloned your DVCS repo somewhere else, and push it back to a local repo which is not the ClearCase snapshot view, what you could do is simply copy back the view.dat hidden file of your snapshot view to the root directory of the DVCS repo.
That simple file is enough to turn the local repo back into a ClearCase snapshot view!
Then you make all the files read-only (except those modified after a certain date, i.e. the time when you started working), to avoid ClearCase considering all the files hijacked.
The rest is similar to the first approach: update, checkout/checkin.
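A rough command-line sketch of that second approach (view path, repo path, and the marker file are all illustrative; the find options assume a POSIX find):

# copy the snapshot view's identity file into the DVCS clone
cp /views/myview/view.dat /path/to/dvcs-clone/

# make files read-only unless they changed since work started
# (start-marker is a file touched when you began working)
cd /path/to/dvcs-clone
find . -path ./.hg -prune -o -type f ! -newer start-marker -exec chmod a-w {} +

After that, a cleartool update should flag only your genuinely modified files as hijacked, ready for checkout/checkin.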