I have been using TortoiseSVN for some time and I really like it. I was told TortoiseHg was worth checking out and "much better than SVN" so I am trying to get started with it.
I created a Google Code repository and managed to upload one folder to it. However, when uploading another folder, I keep getting the error "abort: repository is unrelated".
TortoiseHg and TortoiseSVN are not really comparable critters, as they are just GUI front-ends for two completely different version control systems. SVN is an older centralized stalwart, and Hg is a newer distributed system. Whether you should use SVN or Hg really depends on your use case, specifically:
if your developers will ever be cut off from the central repository. Using Google Code would suggest 'no', since all your developers need is a network connection, but developers who work off-network would still be a problem. Being a DVCS, Mercurial would allow off-network developers to work anywhere, only needing to be on the network to push to or pull from the server.
if your developers care about file locking. SVN has built-in file locking and Mercurial does not.
if your developers struggle with merging. Mercurial has a more developed merge capability and handles conflicts more intelligently.
If you end up with three no's, then SVN may be the one for you.
I'm new to Perforce and Mercurial, so bear with me. I would like to use Mercurial to interface with Perforce in the following way:
I check out a local Perforce workspace using the P4V client. I then clone a Mercurial repo of that workspace and use this cloned repo for all my work. When I need updated files, I first update the local Perforce workspace and then have the Mercurial repo pull from it. When I'm ready to commit, I push my changes to the local Perforce workspace, and then use the P4V client to commit my changes from the Perforce workspace to the Perforce depot. Essentially, the local Perforce workspace is a proxy for the Perforce depot.
The reason behind this set-up (versus the common scenario of directly pulling from and pushing to the Perforce depot) is that there is some configuration I need to do via the P4V client (such as mapping/renaming files and directories).
I've looked at the convert and perfarce extensions, but I'm not quite sure they do what I want. They seem to do a one-time conversion and thereafter talk directly to the Perforce depot. Any help would be appreciated.
The convert extension actually does an incremental conversion: rerun against the same target, it will convert only the new changes. It's unidirectional only, though (Perforce -> Mercurial).
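For example, a minimal sketch of that incremental flow (this assumes the p4 command-line client is configured for your server; the depot path and target directory here are made up):

# enable the bundled extension once in ~/.hgrc:
#   [extensions]
#   convert =
hg convert //depot/myproject project-hg
# running the exact same command again later converts only the new changelists
hg convert //depot/myproject project-hg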
I've not looked at the perfarce extension, but it's my understanding that it's built for a bi-directional, continuous process -- you might want to look at it again.
Alternatively, the non-extension options on the Working with Subversion page in the Mercurial wiki detail a process for using Mercurial alongside/atop Subversion without the two interacting in any way except through the working directory. That's probably very similar to what you're looking to do.
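For what it's worth, a rough sketch of that side-by-side approach transplanted to a Perforce workspace might look like this (the ignore pattern is a guess; adjust it to whatever Perforce files actually live in your workspace):

cd /path/to/p4-workspace
hg init .
printf 'syntax: glob\n.p4config\n' > .hgignore   # hypothetical Perforce metadata file
hg addremove
hg commit -m "baseline from Perforce workspace"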
The Perfarce extension should do what you want. I'm also experimenting with a similar setup, and I can pull & push to Perforce quite happily.
I must admit I am having issues with local config files and how they operate in this environment, but there are a couple of other answers here on SO that appear to address this.
I would recommend you give Perfarce a go first, before reverting to anything more manual.
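If it helps, here is roughly what a perfarce session looks like, as far as I understand it (the extension must be enabled in your hgrc; the server, port, and client-spec name below are made up, and the URL form is from memory, so check the extension's docs):

hg clone p4://perforce.example.com:1666/my-client-spec project-hg
cd project-hg
hg pull    # fetch new changelists from Perforce
hg push    # submit outstanding changesets back to Perforce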
Is there a tool out there (preferably web-based) which would automatically detect commits to a BitBucket repository, and at that time, copy all files in the repository to a web-server via FTP?
I basically want a quick and painless way (if one exists) to set up continuous integration between my BitBucket repository and my website.
No build/compilation step would be necessary, since these are only front-end (HTML/CSS/Javascript) files.
The changegroup hook is the way to do this. See Hooks for info about what to do with it.
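As a rough sketch of what that looks like on a server you control (the script path and FTP details are placeholders, and this assumes lftp is installed on the machine running the hook):

# in the repository's .hg/hgrc:
[hooks]
changegroup = /home/user/bin/deploy.sh

# /home/user/bin/deploy.sh (hypothetical):
#!/bin/sh
rm -rf /tmp/site-export
hg archive /tmp/site-export      # unversioned snapshot of the new tip
lftp -u user,password -e "mirror -R /tmp/site-export /public_html; quit" ftp.example.com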
I've used changegroup hooks on my own hg repositories, but not on BitBucket; it's possible that the BitBucket servers restrict what you can do, I'm not sure. I do know that a wget/curl attempt to rebuild a manual on my server whenever its contents were updated in a repository on SourceForge failed for me, because they've locked up their servers too tightly (sending an email from the hook would work, but not HTTP access). I would expect BitBucket to be set up better; a quick search for "bitbucket changegroup hook" doesn't seem to indicate that there are any problems with it. Try it and see!
I want to do the equivalent of svn export REMOTE_URL with a mercurial repository. What I want at the end is an unversioned snapshot of the repository at the remote URL, but without cloning all of the changesets over to my local machine.
Also, I want to be able to specify a tag in the remote repository to take this snapshot from. If it's not obvious, I'm building a release management tool that pulls from a canonical Mercurial repository to build a release file, and it's slow right now because some projects have large, multiple-version binary files committed.
Is this possible? How would one go about it?
It's usually easier (if the remote HG is using the hgweb interface) to just visit the repo in your browser and download a .tgz / .zip / .bz2 of the tip revision. You'll see the links if the remote HG supports this.
If you want the repository, you need all of the revisions that went into the current tip for it to be at all functional.
There are options to hg clone that allow you to fetch a repository up to a certain revision, but none (that I could find) that allow you to get just the tip revision. What you are essentially asking for is a snapshot of the repo.
Edit: To Get A Snapshot
hg clone http[s]://url.to.repo repo.hg
cd repo.hg
hg archive ../repo-snapshot
cd ..
rm -rf repo.hg
The snapshot is now in repo-snapshot.
Yes, this does entail cloning the repo first, which is why I suggested checking whether the remote hgweb supports on-the-fly downloads of any particular revision. If it does, your problem is solved with something like curl or wget instead of HG.
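For instance, against a stock hgweb the archive URLs are predictable, so something like this ought to work (the host, repo name, and tag are placeholders):

# fetch an unversioned snapshot of the "1.0" tag straight from hgweb
wget https://hg.example.com/myrepo/archive/1.0.tar.gz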
If not, it's good to let the original repo 'live', since you can update it again later via hg pull and then create another snapshot for a future release. This saves having to start the clone over from scratch, especially for large repositories with lots of changes.
Also, this is Linux-centric, but you get the gist. Of course, replace http[s] with the desired protocol as needed.
Is there any reason you can't maintain a mirror (updated in the background however often you want) of the remote repository on your local machine, then have the release management tool on your local machine run hg archive out of the local clone as necessary? If your concern is user-responsiveness, and not total bandwidth/storage consumed, this offsets the "slow" part to where you won't see it.
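A sketch of what that might look like, run from cron or by the release tool itself (the paths and tag name are invented):

hg pull -R /srv/mirrors/project                                # keep the local mirror current
hg archive -R /srv/mirrors/project -r v1.2 /tmp/project-v1.2   # unversioned snapshot of a tag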
Tim Post noted that if you do have the hgweb CGI interface available, you can configure it to serve compressed archives for download (and the interface is consistent enough that you could script that via wget), but if you don't, core Mercurial doesn't have a lot of tools to help you, and the developers have expressed opposition to trying to turn Mercurial into a general rsync-type client.
If you aren't afraid of playing with unofficial add-ons, you could have a look at the FTP Extension. That will force you to push from the server, however.
I wonder if it's possible to create a Mercurial repository in some FTP folder with read/write access and serve it to clients from there. Has anyone done something like that?
Thank you in advance.
Just for the sake of completeness, because I had the same problem and feel that there is another, much simpler solution:
Mercurial cloning on local folders "just works", so if you mounted the FTP share as a local folder or drive, you could just push/pull/clone to it (and have your repository end up on the FTP server).
On Windows, you can e.g. use FTPUse or NetDrive to mount your FTP folder as a local drive. The former is free, but it's a CLI tool and removes the virtual drive when the program is closed; the latter has a GUI, but is only free for personal use and doesn't work (yet) on Win8. I don't have a Linux machine at hand right now, but you should be able to achieve the same using ftpfs.
Once you've done that (and your FTP server is now mapped, e.g., to f:), you can simply use that virtual drive (or any subfolder) as a remote target for your Mercurial operations. Works like a charm for me.
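On Linux, the equivalent would be something like this (assuming curlftpfs is installed; the server details and paths are made up):

mkdir -p ~/mnt/ftp
curlftpfs ftp://user:password@ftp.example.com ~/mnt/ftp
hg clone ~/work/myproject ~/mnt/ftp/myproject       # the repository now lives on the FTP server
hg push -R ~/work/myproject ~/mnt/ftp/myproject     # later pushes work the same way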
All things are possible. But that would be hard.
The bit where the network transport matters is when cloning a repository, and the standard ways of doing that depend on either serving over HTTP, or having SSH access to the repository host. There's no FTP-based transport for cloning as far as I can see.
If that's the only sharing mechanism you have available, then you could probably work something out using Mercurial bundles. The procedure would be something like the following:
Commit your edits to a local repository
Make a bundle using hg bundle --all my-bundle.hg
FTP my-bundle.hg to the server
The other users of the repository can then use FTP to retrieve the my-bundle.hg file to their local machine, go to their local copy of the repository, and run hg pull my-bundle.hg to pull in any revisions which are in the bundle but not in the local repository. When they want to share their changes, they make a fresh bundle as above and push that back to the server. The --all option puts all of the changesets into the bundle file -- you can be cleverer and only export 'recent' changes, but that gets a little more complicated and risks losing changesets: using --all is brutal but fail-safe.
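Putting it together, the round trip would look something like this (the FTP transfer itself is done with whatever client you like):

# publisher
hg commit -m "my changes"
hg bundle --all my-bundle.hg
# ...upload my-bundle.hg to the FTP server...

# consumer
# ...download my-bundle.hg from the FTP server...
hg pull my-bundle.hg
hg update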
There's obviously a fair amount of scope for confusion here, and race conditions (timestamped filenames might help), and hair-pulling-out, and your users would doubtless appreciate some scripts to make this easier, but if all you've got available is an FTP server, you don't have very many options.
Good luck.
This question on SuperUser might be interesting. The core idea seems to revolve around running a background process that synchronizes a local folder with a remote FTP folder, which might be of use to you.
But I don't know what happens when more than one user tries to synchronize at the same time, since this approach bypasses all the protection that Mercurial has regarding locking the tree and such.
We come from a subversion background where we have a QA manager who gives commit rights to the central repository once he has verified that all QC activities have been done.
A couple of colleagues and I are starting to use Mercurial, and we want to have a shared repository that would contain our QC-ed changes. Each developer hg clones the repository and pushes his changes back to the shared repository. I've read the Hg Init tutorial and skimmed through the red bean book, but could not find how to control who is allowed to push changes to the shared repository.
How would our existing model of QA-manager controlled commits translate to a mercurial 'central' repository?
HenriW's comment asking how you are serving up the repositories is exactly the right question. How you set up authentication depends entirely on how you're serving your repo (HTTP via Apache, HTTP via hg serve, ssh, etc.). The transport mechanism provides the authentication, and Mercurial then uses it with the directives from Mr. Cat's link (useless in and of themselves) to handle access control.
Since you didn't mention how you're serving the repo, it was probably something easy to set up (you'd have remembered to mention the hassle of an Apache or ssh setup :). So I'll guess those two:
If you're using hg serve then you don't have authentication set up. You need to put Apache, lighttpd, or nginx in front of hgweb or hgwebdir to provide authentication. Until you do, the allow_* and deny_* options are strictly everyone or no one.
If you're using ssh then you're already getting your authentication from ssh (and probably your OS), so you can use the allow_* and deny_* directives (and file-system access controls if you'd like).
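For example, once the transport is authenticating users, the shared repository's hgrc can restrict pushes with those directives like this (the user name is invented):

# in the shared repository's .hg/hgrc
[web]
# everyone may pull; only the QA manager may push
allow_push = qa-manager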
serverfault.com has a relevant question and links to the Publishing Repositories page on the Mercurial wiki. The first shows how to configure per-repository access when using hgweb on the server. I get the feeling that you're using ssh, which the wiki page labels as "private", and am therefore inclined to believe you would have to fall back to file-system access control, i.e. make all the files in the repository belong to the group "committers", give group members write access, and give everyone else read-only access.
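In that ssh case, the file-system fallback could be as simple as this (the group name and path are invented):

chgrp -R committers /srv/hg/project                  # repository owned by the committers group
chmod -R g+rwX,o=rX /srv/hg/project                  # group may write, everyone else read-only
find /srv/hg/project -type d -exec chmod g+s {} +    # new files inherit the group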