I'm starting to use Mercurial on my web server (in this case MediaTemple's Grid). I've used SVN previously, though I'm not an expert in version control systems. I just need a little help clearing up some confusion about getting it set up optimally.
I have a 'data' folder which is outside the web server root and which the browser cannot access. It was recommended to me to set up my Mercurial repositories here and then clone from them to my local computer. I would also have a 'domains' folder that is basically the web server root; inside it are the actual domain folders from which my websites are served to the browser. These would need to be updated from the 'data' repositories too.
But with this in mind, after setting it up, it seems inefficient... I'm cloning to my local machine (that makes sense), adding, committing, pushing. That's fine... But then I'm updating in my data repository folder and then updating again in my domains folder to actually update my websites.
Surely, I don't actually need this 'data' folder for repositories? Wouldn't my actual live 'domains' folders be the main repositories themselves, so that I'm cloning locally and updating from those? Please help me clear up some of this confusion (if you can).
It's strictly a matter of personal preference. Some folks make their live websites also the "master" repo, and some make it a clone of a repository located elsewhere. What you're doing right is serving your sites from a directory in the repo; that's a good choice.
Some considerations as to whether you might want separate 'data' clones independent from the web root clones are:
do you want to have multiple heads in the same branch which might confuse the person updating the main repo?
do you want a repo to which people you don't trust with editing the live website can push so that a trusted admin (you?) does the push/pull from data to webroot?
One thing to note is that in the 'data' repo you can do hg update -r null, which gets rid of the working copy (but keeps the repo!), so that the disk space used is almost zero (assuming it's a clone of the webroot, they'll share the same underlying files via file-system hardlinks).
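For example, a minimal sketch (the paths are placeholders for your Grid layout):

hg clone ~/domains/example.com ~/data/example.com
cd ~/data/example.com
hg update -r null

The local clone shares its history with the webroot through hardlinks, and the null update empties the working copy while keeping the full history in .hg.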
I do have a repos (data) folder outside the website root, containing various repositories, and served through hgwebdir on a separate domain (hg.mywebsite.com).
However, my website's repository I do keep in the httpdocs directory of the main domain. I test in my local environment, and pushing my changes to the server then also publishes them.
To achieve this I have this in my hgweb.config:
[paths]
private/mywebsite = ../../../httpdocs
And this in that repository’s hgrc:
[hooks]
changegroup.update = hg update
This hook will update the working directory to the tip whenever changes are pushed. Of course, I have also added a rule to the Apache configuration to ignore the .hg directory, and, on the subdomain hg runs on, a rule to require authorisation for access to the private/ paths.
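For reference, the Apache rule can be as simple as the following sketch (Apache 2.4 syntax; adapt to your setup):

<DirectoryMatch "/\.hg(/|$)">
    Require all denied
</DirectoryMatch>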
An alternative would be to instead host the repository together with the others, and then ‘hg archive’ into the httpdocs directory. A little more secure, a little slower, and as for convenience I would say it’s 50-50.
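A sketch of the archive route (paths are placeholders): export a clean snapshot without the .hg directory, then sync it into the docroot.

rm -rf /tmp/site-export
hg archive -R /path/to/repos/mywebsite /tmp/site-export
rsync -a --delete /tmp/site-export/ /path/to/httpdocs/

Note that --delete removes anything in httpdocs that isn't in the snapshot, so skip it if the docroot holds unversioned data.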
p.s. also adding a hook to forbid creation of remote branches may be a good idea, if people who might do push -f can access your repositories.
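Such a hook might look like the following sketch in the served repository's hgrc. The exact shell one-liner is my assumption, so test it before relying on it; note this simple version rejects any named branch, not just newly created ones.

[hooks]
# reject the push if any incoming changeset is on a branch other than default
pretxnchangegroup.nobranch = hg log -r $HG_NODE: --template '{branch}\n' | grep -vx default && exit 1 || exit 0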
I would like to have a central repository in a directory on my local computer without setting up a server.
Context: my boss and I are working on a local server inside our LAN. We both use a VNC connection to the server and do our work there for simplicity's sake. I would like to set things up so that I have a copy of my scripts for development, and when I get to a release, I push it to a different directory that my boss can then run from (or, even better, pull from into his own copy and run from there).
I read that you can create an hg server by running 'hg serve', but I do not want to open it up to the LAN, because I don't want it to be accessible to others.
I tried running 'hg push /home/source' and it gave me an error.
I then ran 'hg init' while in that directory and tried again. It looked like it worked, but then no files showed up in the directory: 'hg status' showed nothing, yet 'hg log' showed the commits.
... without setting up a server
One way I've used to share a "central" Mercurial repository without having to deal with any "server" issues is to have the "central" repository in a folder on Dropbox.
For example, suppose:
your repository is named "repo" and that your "private" copy is in ~/repo
your Dropbox directory on your computer is ~/Dropbox/
Then:
cd ~/Dropbox
hg clone ~/repo
Now suppose you make some changes in ~/repo. You can then "push" them from ~/repo to ~/Dropbox/repo, or (more easily, as explained below) "pull" them into ~/Dropbox/repo when you're ready.
To make updating the "central" repository convenient, you might like to create a script such as:
#!/bin/bash
# refresh the "central" clone in Dropbox from its default source
cd ~/Dropbox/repo
hg tip       # show the current tip before pulling
hg pull -u   # pull new changesets and update the working copy
hg tip       # show the new tip to confirm what arrived
Notice that in the script there is no need to specify the source from which to pull; the hgrc file that was created when you made the clone keeps track of that. (Thank you, hg.)
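Concretely, the clone's .hg/hgrc will contain something like this (with /home/you standing in for your home directory):

[paths]
default = /home/you/repo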
If your colleague has direct access to a folder on your computer, then you could still adopt the strategy described above, without using Dropbox.
Needless to say, there are many variations.
Needless to say also, if more than one person attempts to commit changes to the shared folder, chaos can easily ensue.
Getting ready to launch a website/project that was in beta testing. I want to switch it over to version control (Mercurial since I'm familiar with it).
Problem is, I am not sure how to go about it, since the code on the website is already up and in use, and I don't know how to deal with the directories I do not need to manage (vendor and web/Upload).
What's the best way to go about this?
Would I put the entire site into a folder, init a Merc repo, use hgignore to not track vendor and web/Upload, commit, then clone it to the live server?
Thanks! Just confused on what to do since the site is live and has user uploads.
I'm assuming you want to turn the website directory on your web server into a Mercurial repository. If that's the case, you would create a new repository somewhere on that computer, then move the .hg directory in the new repository into the website directory you want to be the root of the repository. You should then be able to run
hg add * --exclude vendor --exclude web/Upload
hg commit -m "Adding site to version control."
to get all the non-user files into version control.
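Since you mentioned .hgignore: adding one at the repository root keeps those directories from ever being added in the future. A minimal sketch using regexp syntax (anchored at the repo root):

syntax: regexp
^vendor/
^web/Upload/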
I recommend, however, that you write a script or investigate tools that will deploy your website out of a repository outside your web root. You don't want your .hg directory exposed to the world. Until you get a deploy script/tool working, make sure you tell your webserver to prohibit/reject all requests to your .hg directory.
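As a stopgap for Apache, a commonly used .htaccess rule is the following sketch, which answers any request touching .hg with a 404:

RedirectMatch 404 /\.hg(/|$)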
I'm a new Mercurial user. I set up the acl extension by adding this to my hgrc file:
[hooks]
pretxnchangegroup.acl = python:hgext.acl.hook
[acl]
sources = serve pull push
[acl.deny]
** = mercurial
So with the code above I deny the user "mercurial" access to all files. I tested the acl extension successfully: it works perfectly when I try to push to the central repository where I put this configuration. As expected, I receive a message that access is denied for the user "mercurial".
Now the problem is that when I start pulling from the central repository there is no restriction at all: I can pull anything. What I want is to deny pull access for some files, like I can for push. Is there any way I can do this?
Mercurial, unlike Subversion, doesn't allow access controls on individual files, and for good reason. The DVCS model puts the entire repo on every developer's machine, so even if you restricted files on push and pull, a user could still just hg cat the file to get its contents.
Instead of trying to do this on the client side, I would break up your repos based on who needs what and set permissions on the individual repos. See my answer on the Kiln Stack Exchange, Should I use more than one repository?. You can set permissions via HTTP(S) or SSH, or, if you happen to be using Kiln, through our permissions interface.
As came out in the comments with tghw, it sounds like what you really want is partial cloning by file path, so that a person can clone or pull only certain files or directories, but that's not possible in Mercurial (or Git). That's because every revision is identified by a unique hash that includes, among other things, the hash of all the file changes. If you don't have all the files, you don't have all the changes, and you can't verify the hash.
If you really need to hide read access for some files from some people you'll need to split them up into separate repositories.
I have a solution: convert your repo to Git:
https://git.wiki.kernel.org/index.php/Interfaces,_frontends,_and_tools#Mercurial
I wonder whether it's possible to create a Mercurial repository in an FTP folder with read/write access and serve it to clients from there. Has anyone done something like that?
Thank you in advance.
Just for the sake of completeness, because I had the same problem and feel that there is another, much simpler solution:
Mercurial cloning on local folders "just works", so if you mount the FTP share as a local folder or drive, you can just push/pull/clone to it (and have your repository end up on the FTP server).
On Windows, you can use e.g. FTPUse or NetDrive to mount your FTP folder as a local drive. The former is free but is a CLI tool that removes the virtual drives when the program is closed; the latter has a GUI but is only free for personal use and doesn't work (yet) on Win8. I don't have a Linux machine at hand right now, but you should be able to achieve the same thing using ftpfs.
Once you've done that (and your FTP server is now mapped, e.g., to F:), you can simply use that virtual drive (or any subfolder) as a remote target for your Mercurial operations. Works like a charm for me.
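For example (a sketch, assuming the share is mounted as F: and you keep repositories in F:\repos):

hg clone C:\work\myproject F:\repos\myproject
hg push F:\repos\myproject

The first command seeds the repository on the share; after that, push and pull against the F: path behave like any local repository.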
All things are possible. But that would be hard.
The bit where the network transport matters is when cloning a repository, and the standard ways of doing that depend on either serving over HTTP, or having SSH access to the repository host. There's no FTP-based transport for cloning as far as I can see.
If that's the only sharing mechanism you have available, then you could probably work something out using Mercurial bundles. The procedure would be something like the following:
Commit your edits to a local repository
Make a bundle using hg bundle --all my-bundle.hg
FTP my-bundle.hg to the server
The other users of the repository can then use FTP to retrieve the my-bundle.hg file to their local machine, go to their local copy of the repository, and run hg pull my-bundle.hg to pull in any revisions that are in the bundle but not in their local repository. When they want to share their changes, they make a fresh bundle as above and push that back to the server. The --all option puts all of the changesets into the bundle file; you could be cleverer and only export 'recent' changes, but that gets a little more complicated and risks losing changesets. Using --all is brutal but fail-safe.
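Put together, the exchange might look like this sketch (file and path names are placeholders):

cd ~/project
hg bundle --all my-bundle.hg
# upload my-bundle.hg to the FTP server
# ...another user downloads my-bundle.hg, then:
cd ~/project
hg pull my-bundle.hg
hg update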
There's obviously a fair amount of scope for confusion here, and race conditions (timestamped filenames might help), and hair-pulling-out, and your users would doubtless appreciate some scripts to make this easier, but if all you've got available is an FTP server, you don't have very many options.
Good luck.
This question on SuperUser might be interesting. The core idea seems to revolve around running a background process that synchronizes a local folder with a remote FTP folder, which might be of use to you.
But I don't know what happens when more than one user tries to synchronize at the same time, since this approach bypasses all the protection Mercurial has around locking the tree and such.
We come from a subversion background where we have a QA manager who gives commit rights to the central repository once he has verified that all QC activities have been done.
A couple of colleagues and I are starting to use Mercurial, and we want to have a shared repository that would contain our QC-ed changes. Each developer clones the repository and pushes his changes back to the shared repository. I've read the Hg Init tutorial and skimmed through the Red Bean book, but could not find how to control who is allowed to push changes to the shared repository.
How would our existing model of QA-manager controlled commits translate to a mercurial 'central' repository?
HenriW's comment asking how you are serving up the repositories is exactly the right question. How you set up authentication depends entirely on how you're serving your repo (HTTP via Apache, HTTP via hg serve, ssh, etc.). The transport mechanism provides the authentication, and Mercurial then uses it, together with the commands from Mr. Cat's link (useless in and of themselves), to handle access control.
Since you didn't mention how you're serving the repo, it was probably something easy to set up (you'd have remembered to mention the hassle of an Apache or ssh setup :). So I'll guess those two:
If you're using hg serve then you don't have authentication set up. You need to put Apache, lighttpd, or nginx in front of hgweb or hgwebdir to provide authentication. Until you do, the allow_* and deny_* options are strictly all-or-nothing: everyone or no one.
If you're using ssh then you're already getting your authentication from ssh (and probably your OS), so you can use the allow_* and deny_* directives (and file-system access controls if you'd like).
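For the hgweb route, the per-repository directives go in that repo's hgrc; a sketch (qa-manager is a placeholder username):

[web]
allow_read = *
allow_push = qa-manager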
serverfault.com has a relevant question that links to the Publishing Repositories page on the Mercurial wiki. The first shows how to configure per-repository access when using hgweb on the server. I get the feeling you're using ssh, which the wiki page labels "private", and am therefore inclined to believe you'd have to fall back on file-system access control, i.e. make all the files in the repository belong to the group "committers", give group members write access, and give everyone else read-only access.
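The file-system route is plain Unix permissions; a sketch, assuming a "committers" group and the shared repo at /srv/hg/project:

chgrp -R committers /srv/hg/project
chmod -R g+w /srv/hg/project
# the setgid bit keeps new files owned by the group
find /srv/hg/project -type d -exec chmod g+s {} \;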