It is possible to retrieve the local branches from a local repository with hg branches. Is it also possible to do this programmatically with a remote repository?
Unfortunately, there is no way to determine the branches in a remote Mercurial repository without pulling it down. You can avoid saving data to disk by using hg incoming to get the information you want, but that command works by pulling the entire repository data anyway, which is likely not what you want. Your best bet is probably to simply perform a clone and then query your now-local repository.
If that's truly unacceptable, you have two additional options: you can screen-scrape the Bitbucket page for your repository using a tool like BeautifulSoup or lxml, or you can wait until Bitbucket releases their API, which will likely provide this functionality.
Use the Bitbucket API:
curl http://api.bitbucket.org/1.0/repositories/:username/:repo_slug/branches/
Read more here: http://api.bitbucket.org/1.0/doc/repositories/
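The endpoint returns JSON, so you can pretty-print it or post-process it on the command line. A minimal sketch (myuser and myrepo are placeholders for your own username and repo slug; assumes a Python interpreter is available):

curl -s http://api.bitbucket.org/1.0/repositories/myuser/myrepo/branches/ | python -m json.tool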
I have a remote hg repository hosted on Google Code. Thus I don't have admin access to run e.g. lfconvert on it (as far as I know), and of course lfconvert can only be used on local repositories.
So, is there any way to convert a Google Code hg repository to a largefiles repository?
(One idea is to convert a local clone of the repo to a largefiles repo and then push the changes to the "central" Google Code repo, but I'm afraid to try that without knowing whether it is a valid approach.)
Using your idea to do a local conversion and push, you can take advantage of the 'reset' feature for your repositories:
Do a local clone.
Convert to largefiles: `hg lfconvert normal_repo largefiles_repo`. Do NOT delete the original clone until you are sure everything works.
Reset the hosted repository (See https://code.google.com/p/support/wiki/MercurialFAQ#Mercurial_FAQ).
Push the largefiles repository.
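Putting the steps together, a minimal shell sketch (the project URL and directory names are placeholders; assumes the largefiles extension is enabled in your hgrc):

hg clone https://code.google.com/p/yourproject normal_repo
hg lfconvert normal_repo largefiles_repo
# ... reset the hosted repository via the project admin interface here ...
cd largefiles_repo
hg push https://code.google.com/p/yourproject

Keep normal_repo around until the push succeeds and the hosted history looks right.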
Pushing the largefiles repository without resetting seems problematic, because the largefiles repository is essentially a fork of the original one, starting at the point where the first largefile was committed.
If the push fails*, you can push the original clone and you'll be back where you started without any data loss. (One of the many advantages of DVCS. :-))
The big downside, of course, is that everybody who has ever cloned your project will now be working from a different fork of the repository. This is always a danger when you do anything involving changing history, and it is the motivation for Mercurial phases. If you want to be kinder, you can start a second project for the largefiles version and place a link at the original project site describing the move.
[*] I can't figure out from Google Code's documentation whether the largefiles extension is supported. There is a reviewed feature request, but I couldn't find any mention of the request actually being implemented. The push failing would probably be a good indication that largefiles isn't supported though...
I have a workflow where I need to allow users to be able to pull new changes from the Apache hosted mercurial repository but prevent them from doing a fresh clone.
Any ideas on how this can be done?
Thanks
Using hgweb.wsgi to serve the repository via an Apache vhost (workarounds accepted).
A clone is just an init followed by a pull, so you can't stop cloning without also breaking pull.
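You can see the equivalence on the command line; a minimal sketch with a placeholder URL:

hg init local-copy
cd local-copy
hg pull https://hg.example.com/repo    # fetches everything a clone would
hg update                              # populate the working directory

Any mechanism that lets pull through will let this two-step clone through as well.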
The easiest way would be to just publish bundles via regular HTTP and allow users to download and apply those. See hg help bundle:
Generate a compressed changegroup file collecting changesets not known to
be in another repository.
The bundle file can then be transferred using conventional means and
applied to another repository with the unbundle or pull command. This is
useful when direct push and pull are not available or when exporting an
entire repository is undesirable.
Applying bundles preserves all changeset contents including permissions,
copy/rename information, and revision history.
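In practice it could look like the following sketch, with placeholder paths and URL:

# on the server: collect every changeset into one bundle file
hg -R /srv/hg/project bundle --all /var/www/html/project.hg

# on the client: download the bundle and apply it
wget https://example.com/project.hg
hg init project
cd project
hg unbundle ../project.hg

Subsequent bundles can be limited to new changesets (see the --base option of hg bundle) so clients only download what they are missing.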
I'm new to Perforce and Mercurial, so bear with me. I would like to use Mercurial to interface with Perforce in the following way:
I check out a local Perforce workspace using the P4V client. I then clone a Mercurial repo of that workspace and use this cloned repo for all my work. When I need updated files, I first update the local Perforce workspace and then have the Mercurial repo pull from it. When I'm ready to commit, I push my changes to the local Perforce workspace. Then I use the P4V client to commit my changes in the Perforce workspace to the Perforce depot. Essentially, the local Perforce workspace is a proxy for the Perforce depot.
The reason behind this setup (versus the common scenario of directly pulling from and pushing to the Perforce depot) is that there is some configuration I need to do via the P4V client (such as mapping/renaming files and directories).
I've looked at the convert and perfarce extensions, but I'm not quite sure they do what I want. They seem to do a one-time conversion and thereafter talk directly to the Perforce depot. Any help would be appreciated.
Convert does an incremental conversion: on each run it converts only the new changes, but it is unidirectional (Perforce -> Mercurial).
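An incremental run is just a matter of re-invoking convert with the same source and destination; a minimal sketch, assuming the convert extension is enabled in your hgrc and your Perforce environment (P4PORT, P4USER, etc.) is already set up, with a placeholder depot path:

# safe to re-run: only changelists not yet converted are imported
hg convert --source-type p4 //depot/project project-hg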
I've not looked at the perfarce extension, but my understanding is that it's built for a bi-directional, continuous process, so you might want to look at it again.
Alternatively, the non-extension options on the Working with Subversion page in the Mercurial wiki detail a process for using Mercurial alongside/atop Subversion without them interacting in any way except through the working directory. That's probably very similar to what you're looking to do.
The Perfarce extension should do what you want. I'm also experimenting with a similar setup, and I can pull & push to Perforce quite happily.
I must admit I am having issues with local config files and how they operate in this environment, but there are a couple of other answers here on SO that appear to address this.
I would recommend you give Perfarce a go first, before reverting to anything more manual.
Is there a tool out there (preferably web-based) which would automatically detect commits to a BitBucket repository and, at that time, copy all files in the repository to a web server via FTP?
I basically want a quick and painless way (if one exists) to set up continuous integration between my BitBucket repository and my website.
No build/compilation step would be necessary, since these are only front-end (HTML/CSS/JavaScript) files.
The changegroup hook is the way to do this. See Hooks for info about what to do with it.
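On a server you control, the hook goes in the repository's .hg/hgrc; a minimal sketch, where /path/to/deploy.sh stands in for whatever FTP upload you choose (for example an lftp "mirror -R" of an updated working copy):

[hooks]
# runs once per push, after all new changesets have arrived
changegroup = hg update -C && /path/to/deploy.sh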
I've used changegroup hooks on my own hg repositories, but not in BitBucket; it's possible that the BitBucket servers are restricted in what you can do, I'm not sure. I do know that a wget/curl attempt to rebuild a manual on my server whenever its contents were updated in a repository on SourceForge failed for me, because they've locked their servers down too tightly (sending an email from the hook would work, but not HTTP access). I would expect BitBucket to be set up better; a quick search for "bitbucket changegroup hook" doesn't seem to indicate that there are any problems with it. Try it and see!
I want to do the equivalent of svn export REMOTE_URL with a mercurial repository. What I want at the end is an unversioned snapshot of the repository at the remote URL, but without cloning all of the changesets over to my local machine.
Also, I want to be able to specify a tag in the remote repository to pick this from. If it's not obvious, I'm building a release management tool that pulls from a canonical mercurial repository to build a release file, and it's slow right now because some projects have large, multiple-version binary files committed.
Is this possible? How would one go about it?
It's usually easier (if the remote hg is using the hgweb interface) to just visit the repo in your browser and download a .tgz / .zip / .bz2 of the tip revision. You'll see the links if the remote hg supports this.
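If archive downloads are enabled in the hgweb configuration, the links follow a predictable pattern, so you can script them; a sketch with a placeholder URL:

wget https://hg.example.com/repo/archive/tip.tar.gz
wget https://hg.example.com/repo/archive/v1.0.tar.gz    # snapshot at the v1.0 tag

Since the question also asks about tags: any tag, branch, or revision hash can replace tip in that URL.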
If you want the repository, you need all of the revisions that went into the current tip for it to be at all functional.
There are options to hg clone that allow you to fetch a repository up to a certain revision, but none (that I could find) that allow you to get just the tip revision. What you are essentially asking for is a snapshot of the repo.
Edit: To Get A Snapshot
hg clone http[s]://url.to.repo repo.hg
cd repo.hg
hg archive ../repo-snapshot    # pass -r <tag> to snapshot a specific tag instead of the tip
cd ..
rm -rf repo.hg
The snapshot is now in repo-snapshot.
Yes, this does entail cloning the repo first, which is why I suggested seeing if the remote hgweb supports on-the-fly downloads of any particular revision. If it does, your problem is solved with something like curl or wget instead of hg.
If not, it's good to let the original repo 'live', since you can update it again later via hg pull and then create another snapshot of a future release. This saves having to start over from scratch when cloning, especially for large repositories with lots of changes.
Also, this is Linux-centric, but you get the gist. Of course, replace http[s] with the desired protocol as needed.
Is there any reason you can't maintain a mirror (updated in the background however often you want) of the remote repository on your local machine, then have the release management tool on your local machine run hg archive out of the local clone as necessary? If your concern is user-responsiveness, and not total bandwidth/storage consumed, this offsets the "slow" part to where you won't see it.
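Sketched out, with placeholder paths and tag, the tool's job becomes two quick local commands (the pull could equally run from cron in the background):

hg -R /srv/mirror/project pull                               # cheap: only new changesets come over
hg -R /srv/mirror/project archive -r v1.0 /tmp/project-v1.0  # unversioned snapshot at a tag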
Tim Post noted that if you do have the hgweb CGI interface available, you can use it to pull down compressed archives and unpack them (and the interface is consistent enough that you could script that via wget), but if you don't, core Mercurial doesn't have a lot of tools to help you, and the developers have expressed opposition to turning Mercurial into a general rsync-type client.
If you aren't afraid of playing with unofficial add-ons, you could have a look at the FTP Extension. That will force you to push from the server, however.