I'm using Mercurial as my SCM, mainly because I like the ability to commit changes to a project even while offline. I'm going to be moving PCs soon and so I decided to look into finding some free Mercurial repo hosting so I don't lose my data. I signed up for a Bitbucket account and I noticed that they offer space for only a single private repository with their free accounts. Wouldn't that mean that some of my stuff might end up publicly available? As in, anyone can download and use it in their projects?
Yes, BitBucket only offers one private repository for free.
Edit: See the comments below. BitBucket now offers as many private repos as you want; the restriction on the free plans is a maximum of 5 users accessing those private repos.
However, if you're just worried about transferring the projects to your new machine, I think BitBucket is overkill. Will you be in possession of both machines at once, even for a short period of time?
If so, I would just run hg serve in each project directory (one at a time) on the old machine and hg clone http://ip.of.old.machine:8000/ projectname to clone the changes onto the new machine.
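A rough sketch of that transfer (the address and project name are placeholders for your own):

# on the old machine, inside the project directory
hg serve --port 8000

# on the new machine, pull everything over the network
hg clone http://192.168.1.10:8000/ projectname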
EDIT: If you're looking for a way to back up without sharing the repos publicly you could get a Dropbox account and clone a copy of each repo to the Dropbox folder on your local machine. Whenever you push changes they'll get synced up to Dropbox automatically.
If your computer catches fire and you replace it, you just install Dropbox and then clone from the repos in the Dropbox folder to your preferred location.
I'm not sure how well this would work if you want to use the Dropbox copy of the repo on multiple platforms (from a Windows box and a Linux box, for example).
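A minimal sketch of that setup, assuming a Dropbox folder at ~/Dropbox and a project at ~/projects/myproject (both paths are just examples):

# one-time: put a clone of the repo inside the Dropbox folder
hg clone ~/projects/myproject ~/Dropbox/hg-backups/myproject

# day to day: push into that clone and let Dropbox sync it for you
cd ~/projects/myproject
hg push ~/Dropbox/hg-backups/myproject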
Run hg bundle --all in each of your repositories, put the bundles somewhere safe (like a USB stick), and hg unbundle them on the new machine.
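For example (paths and names here are placeholders):

# on the old machine, for each repository
cd ~/projects/myproject
hg bundle --all /media/usbstick/myproject.hg

# on the new machine
hg init myproject
cd myproject
hg unbundle /media/usbstick/myproject.hg
hg update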
Yes, unless you put them all under a single repository. Otherwise, you need to pay them for more private repos.
This must have changed since - you can now have unlimited public and private repos :-)
As an alternative, JavaForge allows hosting private projects.
We have a dedicated issue tracking (Redmine) machine, which has a Mercurial repository (call it "Redmine repository"). Redmine is set up to use that repository, and as far as I understand, Redmine never makes any changes to that repository. All developers (eventually) push their changes to that repository.
We also have a dedicated production machine, which can execute the code, but is not used to make any changes to the code.
We have two choices:
1. Set up another Mercurial repository on the production machine (call it the "production repository"). When a new production release is approved, pull the changes from the Redmine repository into the production repository, and then update the local working directory to the appropriate revision from the production repository.
2. Reuse the existing Redmine repository on the production machine, treating it as a local repository for the Mercurial installation there (the Redmine repository is on a shared drive that can easily be mounted on the production machine). Whenever a new production release is approved, update the local working directory to the appropriate revision straight from the Redmine repository.
With option #2, we get rid of an extra "pull" step (from Redmine repository to production repository), which slightly simplifies the process. But I'm not sure if it's ok that a single repository is used by two Mercurial installations as if it's local.
Any comments on this choice (or any other aspect of this setup) is appreciated!
It sounds like a bad idea. Mercurial does a really good job of keeping reads and writes to its repository atomic, but it has a harder time doing that when the repository is on a shared drive -- even if it's only one local repository using it -- because network shares (especially on Windows) don't always make things atomic that they say they do.
Ideally your repositories (both the working dir and the .hg store) are local, and you use push/pull to move changesets to and from the network share. If that's not possible, then having a single local application using the repo on the remote file system is the best option.
If you positively want to try having two clones using the same underlying repository check out the ShareExtension, which ships with Mercurial but is for advanced users only.
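Roughly, that would look like the following (the paths are hypothetical, and the usual caveats about repositories on network shares still apply):

# enable the bundled share extension (e.g. in ~/.hgrc)
[extensions]
share =

# then create a second checkout that shares the same underlying repository
hg share /mnt/shared/redmine-repo /srv/production-checkout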
Instead of trying to piggy-back, why not just put a hook like this in your redmine repository:
[hooks]
changegroup = hg push //production/clone
That will automatically push changesets that arrive in redmine to production.
Basically I am new to using mercurial in a small team environment. I am looking for a way (3rd party if necessary to publish my change-sets/revisions to a public staging and public live server(s).
Currently, on our local Ubuntu server (using wildcard DNS), I have set up a directory of folders, where each folder contains a project. Inside each project folder I create another folder called "repo", which stores the clean local version of my website. Then I clone from that local folder into a custom one, do my work, and push it back into the aforementioned "repo" folder.
Next that "repo" folder connects to a 3rd party site bitbucket. That's so I can work off site.
What I want to figure out is whether there is something open source (or similar) that would let me, in a web interface, see my revisions and select one to publish to either of the two server locations. I know Beanstalk can do it, but I really like Bitbucket and it's cost-effective. I have about 15-25 different repositories.
Is my process too much? How can I make this process as efficient as possible?
Are you using hgweb? It's not clear from your description that you are, and you certainly should be.
That aside, cloning from a central-ish repo to a local working clone, modifying, committing, and pushing back to the central-ish repo sounds pretty normal.
Why use a web interface to push from your central-ish private repo to your public site? Why not just go from your working clone to the public site?
For example on my local machine (not a web server at all) in the repository for my blog I have this in the .hg/hgrc file:
[paths]
default = ssh://ry4an.org/projects/unblog
publish = https://ry4an.org/hg/unblog
If I do hg pull, I get changes from the private repo on my ry4an.org server in my home directory. Then I edit, commit, and hg push, which again goes to my private repo on my remote server. When I want to actually publish the blog entry I do hg push publish, which pushes the changesets from my local working repository to the public one (at http://ry4an.org/hg/unblog, which is the live content for http://ry4an.org/unblog).
In theory I could use a web interface of some sort to move changesets from ssh://ry4an.org/projects/unblog to https://ry4an.org/hg/unblog, but sending them from my local working clone gives me better tools (hg incoming, hg outgoing, hg log, etc.)
Am I fundamentally misunderstanding your goals or was that helpful?
I want to do the equivalent of svn export REMOTE_URL with a mercurial repository. What I want at the end is an unversioned snapshot of the repository at the remote URL, but without cloning all of the changesets over to my local machine.
Also, I want to be able to specify a tag in the remote repository to pick this from. If it's not obvious, I'm building a release management tool that pulls from a canonical mercurial repository to build a release file, and it's slow right now because some projects have large, multiple-version binary files committed.
Is this possible? How would one go about it?
It's usually easier (if the remote HG is using the hgweb interface) to just visit the repo in your browser and download a .tgz / .zip / .bz2 of the tip revision. You'll see the links if the remote HG supports this.
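If archive downloads are enabled on that hgweb instance, you can fetch such a snapshot from its standard archive URLs without cloning anything; the host, repo and tag names below are placeholders:

# snapshot of the tip revision
curl -O http://hg.example.com/myrepo/archive/tip.tar.gz

# snapshot of a specific tag or revision
curl -O http://hg.example.com/myrepo/archive/v1.2.tar.gz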
If you want the repository, you need all of the revisions that went into the current tip for it to be at all functional.
There are options to hg clone that allow you to fetch a repository up to a certain revision, but none (that I could find) that allow you to get just the tip revision. What you are essentially asking for is a snapshot of the repo.
Edit: To Get A Snapshot
hg clone http[s]://url.to.repo repo.hg    # full clone (brings down all history)
cd repo.hg
hg archive ../repo-snapshot               # export an unversioned snapshot of the current revision
cd ..
rm -rf repo.hg                            # discard the clone, keeping only the snapshot
The snapshot is now in repo-snapshot.
Yes, this does entail cloning the repo first, which is why I suggested seeing if the remote hgweb supports on the fly downloads of any particular revision. If it does, your problem is solved with something like curl or wget instead of HG.
If not, it's good to let the original repo 'live', since you can update it again later via hg pull and then create another snapshot of a future release. This saves having to start over from scratch when cloning, especially for large repositories with lots of changes.
Also, this is Linux-centric, but you get the gist. Of course, replace http[s] with the desired protocol as needed.
Is there any reason you can't maintain a mirror (updated in the background however often you want) of the remote repository on your local machine, then have the release management tool on your local machine run hg archive out of the local clone as necessary? If your concern is user-responsiveness, and not total bandwidth/storage consumed, this offsets the "slow" part to where you won't see it.
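As a sketch, assuming the mirror lives at /srv/mirrors/myproject and the release is tagged v1.2 (both made up):

# keep the local mirror current (run periodically in the background)
hg pull -R /srv/mirrors/myproject

# export an unversioned snapshot of a given tag for the release tool
hg archive -R /srv/mirrors/myproject -r v1.2 /tmp/myproject-v1.2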
Tim Post noted that if you do have the hgweb CGI interface available, you can configure it to serve compressed archives that you can download and unpack (and the interface is consistent enough that you could script that via wget), but if you don't, core Mercurial doesn't have a lot of tools to help you, and the developers have expressed opposition to trying to turn Mercurial into a general rsync-type client.
If you aren't afraid of playing with unofficial add-ons, you could have a look at the FTP Extension. That will force you to push from the server, however.
Can anybody tell me how to set up a mirror of a mercurial repository? I have a mercurial repo on my laptop, but want to auto mirror the repo on a NAS drive as a form of backup. Ideally, it would be cool if the solution checks a known location for a repo, and if one doesn't exist, create it, and from then on mirror any changes.
Another thing to bear in mind is that the NAS may not always be available, so I would need to accommodate this in some way.
I did something similar with git, but all the functionality should be in mercurial too.
I created manually a clone on some server (in my case a VPS somewhere on the net in case my house burns down with NAS and laptops in it).
With git you can create a "bare" repository, i.e. one without a working copy checked out.
Then I regularly push to it.
This can be automated using 'hooks'; see the Mercurial documentation on hooks for more info.
The trick is to get the handling off the commit hook (pun intended) so that the syncing is not in your workflow. Run your push script using the 'at' command, scheduled a couple of minutes out; then it runs asynchronously in the background. I would not get fancy here; just try to handle failures gracefully.
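A minimal sketch of that idea, assuming the NAS clone is mounted at /mnt/nas/hg-backups/myproject (an invented path) and the at daemon is available:

# in the laptop repository's .hg/hgrc
[hooks]
# after each commit, schedule an asynchronous push a couple of minutes from now;
# '|| true' swallows failures (NAS offline, or nothing new to push)
commit = echo "hg push /mnt/nas/hg-backups/myproject || true" | at now + 2 minutes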
You now have a setup which will keep the backup synched within a couple of minutes.
Mercurial gives you the freedom to do that however you would like. If you wanted, you could just set up a process to copy the repo from your local machine to the NAS at a regular interval. Everything about the repo is stored in its directory, and everything in the directory is just a file.
However, it sounds to me like you want to setup something more akin to a version control system like Subversion. I do something like this with one of my projects (actually, I moved it from SVN to Mercurial, but that's a different answer).
I have a repository on xp-dev.com and my local repository on my computer. I do all of the work I want to do on my local repository, issuing hg commit very frequently. When I am done for the day/night I do hg push ssh://hg2.xp-dev.com/myrepo to send all of my local changes to the remote server.
So, really all you want to do is an hg push to put your local repo on your NAS and then remember to do it again on a regular basis.
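A rough sketch, assuming the NAS is mounted at /mnt/nas and your project lives in $HOME/projects/myproject (both invented for the example):

# one-time: create the backup clone on the NAS
hg clone "$HOME/projects/myproject" /mnt/nas/hg-backups/myproject

# then push on a schedule, e.g. an hourly crontab entry;
# '|| true' swallows failures when the NAS is offline or there is nothing to push
0 * * * * hg -R "$HOME/projects/myproject" push /mnt/nas/hg-backups/myproject || true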
I wonder if it's possible to create a Mercurial repository in some FTP folder with read/write access and serve it to clients from there. Has anyone done something like that?
Thank you in advance.
Just for the sake of completeness, because I had the same problem and feel that there is another, much simpler solution:
Mercurial cloning on local folders "just works", so if you mounted the ftp as a local folder or drive, you could just push/pull/clone to that (and have your repository end up on the ftp).
On Windows, you can e.g. use FTPUse or NetDrive to mount your FTP folder as a local drive; the former is free but a CLI tool that removes the virtual drive when the program is closed, while the latter has a GUI but is only free for personal use and doesn't work (yet) on Win8. I don't have a Linux machine at hand right now, but you should be able to achieve the same using ftpfs.
Once you've done that (and your ftp server is now mapped e.g. to f:), you can simply use that virtual drive (or any subfolder) as a remote target for your Mercurial operations. Works like a charm for me.
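For instance, with the FTP folder mounted locally (via one of the tools above, or ftpfs on Linux), the Mercurial side is just ordinary local-path operations; the mount point and repo paths below are made up:

# one-time: put a clone of the repository on the mounted FTP share
hg clone ~/projects/myproject ~/ftp-mount/hg/myproject

# afterwards, push new changesets straight into the FTP-hosted clone
hg -R ~/projects/myproject push ~/ftp-mount/hg/myproject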
All things are possible. But that would be hard.
The bit where the network transport matters is when cloning a repository, and the standard ways of doing that depend on either serving over HTTP, or having SSH access to the repository host. There's no FTP-based transport for cloning as far as I can see.
If that's the only sharing mechanism you have available, then you could probably work something out using Mercurial bundles. The procedure would be something like the following:
Commit your edits to a local repository
Make a bundle using hg bundle --all my-bundle.hg
FTP my-bundle.hg to the server
The other users of the repository can then use FTP to retrieve the my-bundle.hg file to their local machine, go to their local copy of the repository, and then hg pull my-bundle.hg to pull in any revisions which are in the bundle but not in the local repository. When they want to share their changes, they make a fresh bundle as above, and push that back to the server. The --all option puts all of the changesets into the bundle file -- you can be cleverer and only export 'recent' changes, but that gets a little more complicated and risks losing changesets: using --all is brutal but fail-safe.
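A sketch of the round trip, using curl for the FTP transfer (the server, path and credentials are placeholders; any FTP client would do):

# publisher side: bundle everything and upload it
hg bundle --all my-bundle.hg
curl -T my-bundle.hg ftp://ftp.example.com/repo/ --user username:password

# consumer side: fetch the bundle and pull from it into an existing local clone
curl -O ftp://ftp.example.com/repo/my-bundle.hg --user username:password
hg pull my-bundle.hg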
There's obviously a fair amount of scope for confusion here, and race conditions (timestamped filenames might help), and hair-pulling-out, and your users would doubtless appreciate some scripts to make this easier, but if all you've got available is an FTP server, you don't have very many options.
Good luck.
This question on SuperUser might be interesting. The core idea seems to revolve around running a background process that synchronizes a local folder with a remote FTP folder, which might be of use to you.
I don't know what happens when more than one user tries to synchronize at the same time, though, since this approach bypasses all the protection Mercurial has regarding locking the tree and such.