Our company policy is not to back up hidden folders.
Is it possible to change the .hg folder name to something visible?
There's no way to rename that directory using standard Mercurial configuration options. If you're on Unix (and I'm guessing you are, if .hg sounds hidden), you could use a pre-backup script (or cron job) to snapshot it with cp -al into something with a different name. Using -l gets you hard links, so it won't actually take up extra disk space.
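A minimal sketch of such a snapshot step (the repository path is hypothetical):

cd /srv/repo
rm -rf hg-backup          # drop the previous snapshot
cp -al .hg hg-backup      # hard-link copy of .hg under a visible name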
However, most people back up their .hg repositories with a push to a different mercurial server, which can be easily scripted too.
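For example (the backup host and paths are hypothetical; the remote path must already contain an initialized repository):

hg -R /srv/repo push ssh://backup.example.com//srv/backups/repo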
Can you create a tar archive of the repository via cron before your company's backup cycle runs?
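For example, a crontab entry along these lines (the time and paths are hypothetical) would pack .hg into a visible archive every night at 2 a.m., ahead of a later backup run:

0 2 * * * tar -czf /srv/repo/hg-backup.tar.gz -C /srv/repo .hg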
You can always try to fool the backup system by creating a link to the .hg folder with a "backupable" name.
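For example (the path is hypothetical; note that whether the backup tool follows symlinks depends on the tool, so a hard-linked copy as described above may be more reliable):

cd /srv/repo
ln -s .hg hg-visible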
I would like to have a central repository in a directory on my local computer without setting up a server.
Context: I am working with my boss on a local server inside our LAN. We are both using a VNC connection to the server and doing our work there for simplicity. I would like to set it up so that I have a copy of my scripts for development, and when I get to a release I push it to a different directory that my boss can run them from (or, even better, pull from into his own copy and run from there).
I read that you can create an hg server by running 'hg serve', but I do not want to open it up to the LAN, because I don't want it to be accessible to anyone else.
I tried running 'hg push /home/source' and it gave me an error.
I then ran 'hg init' while in that directory and tried again. It looked like it worked, but the directory didn't show any files. 'hg status' showed nothing, but 'hg log' showed the commits.
... without setting up a server
One way I've used to share a "central" Mercurial repository without having to deal with any "server" issues is to have the "central" repository in a folder on Dropbox.
For example, suppose:
your repository is named "repo" and that your "private" copy is in ~/repo
your Dropbox directory on your computer is ~/Dropbox/
Then:
cd ~/Dropbox
hg clone ~/repo
Now suppose you make some changes in ~/repo. You can then "push" them from ~/repo to ~/Dropbox/repo, or (more easily, as explained below) "pull" them into ~/Dropbox/repo when you're ready.
To make updating the "central" repository convenient, you might like to create a script such as:
#!/bin/bash
# Pull the latest changes from ~/repo into the Dropbox clone and
# update its working directory; show the tip before and after.
cd ~/Dropbox/repo
hg tip
hg pull -u
hg tip
Notice that in the script there is no need to specify the source from which to pull; the hgrc file created when you made the clone keeps track of that. (Thank you, hg.)
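That file lives at ~/Dropbox/repo/.hg/hgrc, and after the clone it will contain something like this (with your actual home directory, of course):

[paths]
default = /home/you/repo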
If your colleague has direct access to a folder on your computer, then you could still adopt the strategy described above, without using Dropbox.
Needless to say, there are many variations.
Needless to say also, if more than one person attempts to commit changes to the shared folder, chaos can easily ensue.
I currently have a project versioned using Mercurial. On my computer, there is a .hg folder in the root of my local repository.
I want to change from Mercurial to Git, so I'm wondering if removing the .hg folder is enough to remove Mercurial versioning from this folder?
If not, what can I do? (I don't want to move the existing sources on my computer).
Yes, all the bits that make it a Mercurial repository are in the .hg folder so you can delete that to remove the Mercurial versioning.
Note though that doing this will obviously lose all your source control history as well.
It looks like there are some options to convert the repository if you want to keep that history; here's the first hit on Google:
http://arr.gr/blog/2011/10/bitbucket-converting-hg-repositories-to-git/
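For instance, a sketch of a conversion using the hg-fast-export tool (this assumes you have cloned https://github.com/frej/fast-export to ~/fast-export; check its README for current usage):

mkdir new-git-repo
cd new-git-repo
git init
~/fast-export/hg-fast-export.sh -r /path/to/hg-repo
git checkout HEAD        # populate the working tree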
Yes, that should work.
Mercurial stores changesets and all other repository metadata in the .hg folder, but you will lose your project's entire history if you just delete the .hg folder and switch to Git.
Getting ready to launch a website/project that was in beta testing. I want to switch it over to version control (Mercurial since I'm familiar with it).
The problem is that I'm not sure how to go about it, since the code on the website is already live and in use, and I'm not sure how to deal with the directories I do not need to manage (vendor and web/Upload).
What's the best way to go about this?
Would I put the entire site into a folder, init a Merc repo, use hgignore to not track vendor and web/Upload, commit, then clone it to the live server?
Thanks! Just confused on what to do since the site is live and has user uploads.
I'm assuming you want to turn the website directory on your web server into a Mercurial repository. If that's the case, you would create a new repository somewhere on that computer, then move the .hg directory in the new repository into the website directory you want to be the root of the repository. You should then be able to run
hg add --exclude vendor --exclude web/Upload
hg commit -m "Adding site to version control."
to get all the non-user files into version control.
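Alternatively, an .hgignore file at the repository root would make the exclusions permanent, so that plain hg add and hg status skip those directories from then on. A sketch:

syntax: regexp
^vendor/
^web/Upload/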
I recommend, however, that you write a script or investigate tools that will deploy your website out of a repository outside your web root. You don't want your .hg directory exposed to the world. Until you get a deploy script/tool working, make sure you tell your webserver to prohibit/reject all requests to your .hg directory.
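For Apache 2.4, for example, a rule along these lines (a sketch; adapt it to your configuration style) denies all access to any .hg directory:

<DirectoryMatch "/\.hg">
    Require all denied
</DirectoryMatch>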
Can I shelve some code I've been working on, at work, with TortoiseHG .. go home .. pull/merge/update ... and then unshelve and continue working at home?
Does TortoiseHG offer this?
At work, I created a new shelf and added all my 'touched' files to it. But when I got home I couldn't find/see the shelf.
The shelf is just a file in the local copy of the repository, so if you are working from another computer you won't see it.
Note: TortoiseHg's implementation is just to create a diff in the file .hg\shelve, so potentially you could email the file home and place it in the .hg folder (being careful not to destroy an existing shelf of course!)
MQ, with push/pull of the versioned mq patch queue, may be a more natural way to move work in progress between machines.
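A sketch of that workflow (hostnames and paths are hypothetical, and it assumes the mq extension is enabled on both machines):

# At work: capture uncommitted work as a patch and version the queue
hg qnew wip.patch
hg init --mq                 # make .hg/patches its own repo (once)
hg commit --mq -A -m "work in progress"

# At home: fetch the patch queue and apply the patch
hg init --mq                 # once, if not already done
hg pull --mq -u ssh://work-machine//path/to/repo/.hg/patches
hg qpush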
In Windows, you can automatically sync shelves using a cloud storage sync service like DropBox or Google Drive. Move the shelves directory (in .hg) to your cloud storage folder and replace it with a directory junction. You can create a directory junction by running this in the .hg directory:
mklink /J shelves "C:\Users\<username>\Google Drive\shelves"
Of course, replace the target with whatever location you are actually using. Repeat this on every computer you use Mercurial on.
You can put the files you are working on in DropBox (or similar) shared folder.
This way you will always have a synchronized copy of your files on several computers.
Maybe this is not the cleanest solution but it works.
I'm starting to use Mercurial on my web server (in this case MediaTemple's Grid). I've used SVN previously, though I'm not an expert in version control systems. I just need a little help clearing up some confusion about getting it set up optimally.
I have a 'data' folder which is outside the web server root and which the browser cannot access. It was recommended to me before to have my Mercurial repositories set up here; then I would clone from here to my local computer. I also have a 'domains' folder that is basically the web server root, and inside it are my actual domains where my websites are actually served to the browser; these would need to be updated from the 'data' repositories too.
But with this in mind, after setting it up, it seems inefficient... I'm cloning to my local machine (that makes sense), adding, committing, pushing. That's fine... But then I'm updating in my data repository folder and then updating again in my domains folder to actually update my websites.
Surely, I don't actually need this 'data' folder for repositories? Wouldn't my actual live 'domains' folders be the main repositories themselves? So I'm cloning locally and updating from these? Please help me clear some confusions with all this (if you can).
It's strictly a matter of personal preference. Some folks make their live website also the "master" repo, and some make it a clone of a repo located elsewhere. What you're doing right is serving your sites from a directory in the repo; that's a good choice.
Some considerations as to whether you might want separate 'data' clones independent from the web root clones are:
do you want to have multiple heads in the same branch which might confuse the person updating the main repo?
do you want a repo to which people you don't trust with editing the live website can push so that a trusted admin (you?) does the push/pull from data to webroot?
One thing to note is that in the 'data' repo you can do hg update -r null, which gets rid of the working copy (but keeps the repo!), so the disk space used is almost zero (assuming it's a clone of the webroot, they'll share the same underlying files at the filesystem hard-link level).
I do have a repos (data) folder outside the website root, containing various repositories, and served through hgwebdir on a separate domain (hg.mywebsite.com).
However, my website’s repository I do store in the httpdocs directory of the main domain. I test on my local environment and then pushing my changes to the server will also publish them.
To achieve this I have this in my hgweb.config:
[paths]
private/mywebsite = ../../../httpdocs
And this in that repository’s hgrc:
[hooks]
changegroup.update = hg update
This hook will update the working directory to the tip whenever changes are pushed. Of course I have also added a rule to the Apache configuration to ignore the .hg directory, and on the subdomain hg runs on, a rule to require authorisation for accesses to the private/ paths.
An alternative would be to instead host the repository together with the others, and then ‘hg archive’ into the httpdocs directory. A little more secure, a little slower, and as for convenience I would say it’s 50-50.
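A sketch of that alternative (the paths are hypothetical; note that hg archive also writes a .hg_archival.txt metadata file, which you may want to keep out of the served tree):

rm -rf /tmp/site-export
hg archive -R /path/to/repos/mywebsite /tmp/site-export
rsync -a --delete /tmp/site-export/ /path/to/httpdocs/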
P.S. Adding a hook to forbid the creation of new remote heads/branches may also be a good idea, if people who might do push -f can access your repositories.
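One way to sketch that (assuming a single named branch; a multi-branch repo would need a per-branch count) is a pretxnchangegroup hook in the repository's hgrc that aborts any push which would leave more than one head:

[hooks]
pretxnchangegroup.forbid_extra_heads = test `hg heads -q | wc -l` -le 1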