Mercurial pre-commit hook to modify settings

I'm trying to set up my web-app (browser extension) so that I can seamlessly move between development, test and production. For the most part I've avoided hardcoding any URLs into the app, but there are a few places where this isn't possible:
As a browser extension that injects JS, I can't use relative paths (or location.host) when inserting an iframe on a page, since they would default to the domain the script is injected into (but I in fact need this to be localhost for testing and then www.mydomain.com once I push live)
I also sometimes like to test against both the test and live database, but don't want to have to always toggle that flag and risk committing the test db settings.
What I'd really like to do is a search/replace before I commit (on a couple of files: x.php, y.js) to swap out localhost with www.mydomain.com.
Can anyone tell me how to do that with a precommit hook (or any other technique)?
EDIT:
While I have this posted under mercurial hooks -- I'm 100% open to any other method of automating the changes...
I'd be OK with doing the change on the live server after pulling changes as well ..
I guess for the DB changes, I could just remove the config file from my repo and keep a preconfig version on my live server -- but I don't love this because pulling from my repo to a new server / path would pull the entire app without any db settings... (just dumping my ideas here in case it helps someone with a solution)
Thanks!

I'd put something in a file that is not under version control (i.e. add it to your .hgignore). Make something fail obnoxiously if the file is not there (or pick a sane default), and you can now easily switch the hostname in that file (without risking committing it).
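As a sketch of that pattern in Python (the file name settings.local.json and the keys are illustrative; the file would be listed in .hgignore so it can never be committed by accident):

```python
import json
import sys

def load_settings(path="settings.local.json"):
    """Load environment-specific settings from a file kept out of version control.

    Each environment (dev, test, prod) keeps its own copy of the file,
    which is listed in .hgignore and therefore never committed.
    """
    try:
        with open(path) as f:
            return json.load(f)
    except FileNotFoundError:
        # Fail obnoxiously rather than silently running against the wrong host.
        sys.exit("FATAL: %s is missing -- create it for this environment" % path)
```

On a dev box the file might contain {"host": "localhost"}, and on the live server {"host": "www.mydomain.com"}; the app code reads the host from here instead of hardcoding it.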

Related

Sharing files between Mercurial repositories

There are one or two files, like .hgignore, which I generally want to be the same in each of a bunch of projects.
However, the nature of these files means that I can't simply move them to a common shared project and just make the other projects depend on that project. They have to be inside each project. Symbolic links are not an option either because some of our developers use Windows.
How can I share these files between repositories and have changes propagated across (on my local machine, at least)? I'm using Eclipse.
For your specific case of hgignore you can put an entry like this in each project's .hg/hgrc file:
[ui]
ignore.common = ~/hgignore-common
If you know your common library will always be in the parent directory, as is often the case with a subrepo setup, you could do:
[ui]
ignore.common = ../hgignore-common
or if you know it will always be in a sibling directory of project checkouts you could do:
[ui]
ignore.common = ../company-wide-defaults/hgignore-common
Unfortunately there's no absolute way to reference a file that's valid everywhere, but you can at least get to a point where, on your machine, all your checkouts reference a common ignore file.
Hardlinking instead of copying the relevant files sort of works with Eclipse - although you have to refresh each of the other projects to get it to pick up the change. However, you can configure Eclipse to watch the filesystem and automatically refresh whenever it needs to - I recommend this.
It does not work by default with Emacs, because Emacs breaks hard links (which is normally the right thing to do, but is not what you want in this case). However, you can disable this behaviour for multiply-linked files with (setq backup-by-copying-when-linked t).

Disable file history for a particular set of files in Mercurial

I understand that in Mercurial you can never remove the history for a file unless you do something like this. Is there any way to prevent history for certain files from ever being created? If any other repository system is capable of this, please note that as well.
Why would I want that? Well, in our build system, new binaries are constantly being committed which the non-programmers can use to run the program without compiling every time (the compilation is done by the build system). Each time new binaries are committed, the old ones are useless as far as we are concerned. It is unnecessarily taking up space. If the new binary messes up by any chance, we can always revert back to older source and rebuild (assuming there is a way to disable history for specific files).
As you found out, you cannot do what you want directly in Mercurial.
I suggest you put the binaries somewhere else -- a Subversion subrepo would be a good choice. That way you will only download the latest version of each file on the client, but you will have all versions on your server (where it should be easy to add more disk space).
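A subrepo mapping along these lines might work (a sketch; the path and URL are made up):

```
# .hgsub in the outer repository -- hypothetical paths
bin = [svn]https://svn.example.com/project/binaries/trunk
```

With this, clones of the outer repository check out only the latest revision of the binaries from Subversion, while the full binary history stays on the SVN server.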

Same tag name from different location leads to problems

I'm using Mercurial and Fabric to deploy my site. I'd never used Fabric before so I copied an example fab file online and then changed the vars to match my own site.
Here are the lines of code:
def prod():
    env.hosts = ['kevinburke.webfactional.com']
    env.user = 'kevinburke'

def deploy():
    require('hosts', provided_by=[prod])
    local("hg tag --local --force production")
    local("hg push --remotecmd /home/kburke/bin/hg")  # this works fine
    run("cd /my/web/directory; hg update -C production")
and this is invoked from the command line as
fab prod deploy
When I was the only person deploying the site, this worked with no problems. But recently I added two committers who are running the same fabfile, and when they try deploying the site, the remote version of the site doesn't update to the latest version - it updates only to the latest version that I tagged as production, not the one that they tagged.
I expect that it would use their "production" tag to update the file. Why does this happen? How can I get the program to behave as I expect in the future?
Thanks, Kevin
You can't publish local tags. This means that either your first step was already performed in the /my/web/directory repo, or that there is already a revision called production there (you can check with hg tags, hg branches and hg bookmarks).
You have several ways to fix your workflow (in order of preference):
Use common-prefixed tags to distinguish between different production revisions, like production-23 or production-42, which you can parse on the production box.
Create a production branch, where every revision to deliver is merged into this branch. I recommend this one if you already have experience with branches.
Use the bookmarks extension, and create a production bookmark to keep track of your deployed version. This looks like the solution you currently want to establish. To use bookmarks, you need to enable them both on the server and on all clients, and use hg push -B production to push the current state of your bookmark to the server. One disadvantage of this process is that you will never see whether anyone else has pushed another bookmark to the server, since transmitting the bookmark silently overwrites it on the server.
Use regular tags to keep track of the production version. On the one hand it seems wrong to use tags for this kind of tracking, since tags are meant to be static. On the other hand you would have a record of which revisions were live at which point in time. But the first solution does the tracking job much better.
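For the first option, picking the newest production-N tag on the production box could be as simple as this sketch (it assumes the tag names are fed in from the first column of hg tags):

```python
import re

def latest_production_tag(tags):
    """Return the highest-numbered production-N tag, or None if there is none.

    `tags` is a list of tag names, e.g. parsed from `hg tags` output.
    """
    numbered = []
    for name in tags:
        m = re.fullmatch(r"production-(\d+)", name)
        if m:
            numbered.append((int(m.group(1)), name))
    return max(numbered)[1] if numbered else None
```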
It may be basic, but did you check that it actually did anything? It might be that nothing happened, and what was deployed was your "production" tag from when you had run it.
Since hg tag --local means the tag is only for your local repo and is not versioned, I cannot think of any other reason. Others won't even be able to know about the tag.

How to prevent Mercurial commits/pushes of certain files?

At a point in our development process we send all *.resx files to a translator. The translator usually takes a week to send back the files. During this time no one is allowed to add, remove or update any resx file.
How can I configure mercurial to enforce that policy?
Our setup: Each dev works with a local clone of our central repository.
Nice to have:
I'll turn the "policy" on and off every few weeks. So ideally, I'd like something that is easy to configure in one place and that affects all devs.
I'd rather enforce that policy at the local repository level than at the central repository level, because if we prevent the "push" on the central repository it will be harder for the dev to undo the already locally committed changesets.
Thanks
UPDATE:
More info on the translation process:
Merging is not an issue here. The translator does not change the files that we send him. We send him a bunch of language-neutral .resx files (form1.resx) and he returns a bunch of language-specific ones (form1.FR.resx).
Why prevent adding new resx? Adding a resx occurs when we add a new UI to our application. If we do that after the translation package has been sent, the translator won't know about the new UI and we'll end up with a new UI with no translation.
Why prevent updating resx? If the dev changes a label value from "open" to "close", he has made a very important semantic change. If he does that after the translation package has been sent, we won't get the right translation back.
You cannot stop people from committing changes to .resx files unless you have control over their desktop machines (using a pretxncommit hook), and even then it's easily bypassed. It's much more normal to put the check on the central server at push time using a pretxnchangegroup hook, but you're right that they'd then have to fix up any changesets and re-push, which is advanced usage. In either case you'd use the AclExtension to enforce the actual restriction.
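As a sketch, the server-side .hg/hgrc for such a freeze might look like this (the AclExtension ships with Mercurial; the deny pattern below is illustrative):

```
[extensions]
acl =

[hooks]
pretxnchangegroup.acl = python:hgext.acl.hook

[acl]
# only check changesets arriving via push or serve
sources = serve push

[acl.deny]
# reject any incoming changeset that touches a .resx file, for every user
**.resx = *
```

Turning the policy on and off is then a matter of adding or removing the [acl.deny] entry on the central server, which matches the "easy to configure in one place" requirement.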
Here are two alternate ways to go about this that might work out better for you:
Clone your repository at the start of the translation process, warn developers to leave .resx alone for a while, apply the translators' work when they're done, and then merge those changes back into the main development repository with a merge command that always gives the incoming changes priority: X. Then use a simple hg log command to find any .resx changes that just got overwritten and tell the developers to re-add them. Chide them at this time.
alternately
Make the .resx files a Subrepository of the larger outer repository. Then turn off write access to that resx repository during the forbidden period. Developers will be able to commit in the outer repository but not the inner one, but clones will still get both exactly as they always did.
For what it's worth, everyone else handles this problem with simple merging, .resx is (XML) text, and it merges just fine.
When working with a DVCS it's not always easy to exactly mirror your svn experience, but there's usually a better option anyway.
You could add *.resx to the hgignore file

Should I have to merge and commit every time I update my Mercurial branch on the production server?

I'm using Mercurial in a recent project. On the web server where I'm deploying the project I have a slightly different config file with production settings. The problem is that when I pull and update, I often have to merge and commit as well.
Is this the correct workflow? It seems strange that in order to keep updating I have to commit the merge changesets. I figured a merge would integrate them into my production branch and continue to do so each time I update. Is this a distributed version control paradigm I'm just not used to yet?
One option is to keep the server-specific deployment settings out of the version control repository entirely.
This means uploading them and changing them by hand on the server, but eliminates the need to constantly merge. It also keeps things like database passwords out of version control, which is probably a good thing.
For example, when I work on a Django application I check in a settings.py file that contains:
All the settings that won't vary between servers (site name, installed Django apps, etc).
"Server-specific" settings (database location, etc) for local development.
The line from deploy import * at the end.
The from deploy import * line pulls in all items in the deploy.py file if one exists. On a test/staging/production server I'll create this file and put the server-specific settings inside. Because the import happens at the end of settings.py these will overwrite any of the local-development-specific settings in the main settings file.
Doing it this way means that everything needed to run and develop locally is checked into version control, but no server-specific and/or sensitive information (like passwords) are checked in (and so never needs to be merged). It requires a little bit of extra work to set up (adding the import line and creating the deploy.py file on the server initially).
This particular scheme is for a Django project, but maybe a similar idea would work for you.
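The end of such a settings.py might look like this (a sketch; the setting names are illustrative):

```python
# settings.py -- values shared by all environments, plus local-dev defaults
SITE_NAME = "My Site"            # same on every server
DATABASE_HOST = "localhost"      # local-development default

# Server-specific overrides: on a test/staging/production box a deploy.py
# sits next to this file and redefines whatever differs there.
try:
    from deploy import *  # noqa: F401,F403
except ImportError:
    pass  # no deploy.py -> we're on a local development machine
```

On the production server, deploy.py would contain only the handful of settings that differ there (database host, passwords, DEBUG flag), and it is never checked into the repository.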
This was sort of handled in this question, but I think your question is better in that it seeks a little more clarity.
In short: Yes, it's normal. Here's a bit of an explanation:
You start out with this in the main repository (where the boxes are changesets):
main: --[E]--[F]--[G]
then you clone to the production server and add a changeset, H, that does the deployment customization. So the deployment repo looks like this:
production: --[E]--[F]--[G]--[H]
and then more work happens on the main repo, adding changesets, I and J, making the main repo look like:
main: --[E]--[F]--[G]--[I]--[J]
which when pulled to production looks like:
production: --[E]--[F]--[G]--[I]--[J]
                          \
                           \-[H]
with two heads, which you merge to get:
production: --[E]--[F]--[G]--[I]--[J]
                          \         \
                           \-[H]-----[K]
where K is just J plus the changes you originally did in H.
Now more work happens in main, giving:
main: --[E]--[F]--[G]--[I]--[J]--[L]--[M]
which you pull in production giving:
production: --[E]--[F]--[G]--[I]--[J]--[L]--[M]
                          \         \
                           \-[H]-----[K]
and then you merge and get:
production: --[E]--[F]--[G]--[I]--[J]--[L]--[M]
                          \         \         \
                           \-[H]-----[K]-------[N]
So every time you bring changes in from main, you're doing one merge, and creating one new changeset (this time N).
I think that's okay, and it is "normal".
You can, however, avoid it by using some of the answers in the question I linked to above and there's a new trick you can use to keep modifying the original H's parents (and content) so that it always moves to the end of whatever is the new tip.
The trick is the Rebase Extension and it would yield linear history on production (though you'd still be doing what's essentially a merge to get it). I'm not a fan of it because I don't like changing changesets after they're committed, but since H is never leaving the production box it's okay.
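Enabling it is just a config switch on the production box (a sketch):

```
# .hg/hgrc on the production box
[extensions]
rebase =
```

With the extension enabled, hg pull --rebase pulls the new changesets and moves the local deployment changeset H onto the new tip in one step, instead of creating a merge changeset each time.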
The other answers were Mercurial Queues, and making the production changes live in the dev repo and be triggered by something that differs in the production environment (like the Host: header).