We've just recently switched over from SVN to Mercurial, but now we are running into problems with our workflow. Example:
I have my local clone of the repository which I work in. I'm making some highly experimental changes to our code base, changes I don't want to commit before I'm sure they work the way they're supposed to; I don't want to commit them even locally. Now, at the same time, my co-worker has made some significant improvements and bug fixes which I need. He pushes his commits to our main repository. The question is: how can I merge his changes into my workspace without being forced to commit all of my own changes first, since I need his changes to test my own code?
A more day-to-day problem we have with the exact same workflow involves a couple of configuration files that are in the repository. Each developer makes a few small environment-specific changes to the configuration files, but does not commit the changes. These few uncommitted files prevent us from merging anything into our workspace, just like in the example above. Ideally, the configuration files probably shouldn't be in the repository at all; unfortunately, that's just how it has to be here, for reasons that shall remain unnamed.
If you don't want to clone, you can do it the following way.
hg diff > mylocalchanges.txt
hg revert -a
# Do your merge here; once you are done, import back your local mods
hg import --no-commit mylocalchanges.txt
There are two operations, as you've discovered, that make changes from one person available to someone else (or to many, on either side).
There's pulling, which takes changes from some other clone of the repository and puts them into your clone.
There's pushing, which takes changes from your repository and puts them into another clone.
In your case, your coworker has pushed his changes into what I assume is your central master of the repository.
After he has done this, you can pull the latest changes down into your repository, and merge them into your branch. This will incorporate any bugfixes or changes your coworker did into your experimental code.
This gives you the freedom of staying current with your coworkers' development on the project, without having to release your experimental code until it is ready (or even at all).
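In practice, assuming the central repository is already the default path of your clone, that boils down to something like:
hg pull                  # fetch your coworker's changesets from the central repository
hg merge                 # combine them with your line of development
hg commit -m "Merged in latest changes from main"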
So, as long as you stay away from the Push command, you're safe.
Of course, this also assumes nobody is pulling directly from your clone of the repository, if they do that, then of course they will get your experimental changes, but it doesn't sound like you've set it up this way (and it is highly unlikely as well.)
As for the configuration files, the typical way to handle this is to commit only a master template into the repository, under a different name (e.g. with an extra .template extension or similar), and then add the name of the real configuration file to the ignore filter.
Each developer then has to make his or her own copy of the template, rename it, and change it in any way they want, without the risk of committing database connection strings, passwords, or local paths, to the repository.
If necessary, provide a script that will help the developer make the real configuration file if it is long and complex.
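As a sketch, with placeholder file names, the repository would contain app.config.template, while .hgignore keeps the real file out of version control:

# .hgignore
syntax: glob
app.config

Each developer then creates the real file once:

cp app.config.template app.config    # edit with local values; the file stays untracked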
Regarding your experimental changes, you should commit them. Often.
Simply commit them in a clone you never push from. You only pull, to merge whatever updates you need from other repos.
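For example, assuming your regular clone lives at ~/work/main, the experimental clone never needs to be pushed anywhere:

hg clone ~/work/main ~/work/experiment
cd ~/work/experiment
# hack and commit as often as you like, then pull in your coworkers' work:
hg pull ~/work/main
hg merge
hg commit -m "Merged in updates from main"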
As for config files, don't commit them.
Commit template files, and a script able to generate complete config files from the templates (a sketch follows below).
That way, developers will only modify "private" (i.e. not committed) config files with their own private values.
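A minimal sketch of such a script, assuming a template with @PLACEHOLDER@ markers and a private, untracked local.env file holding each developer's values:

#!/bin/sh
# generate-config.sh: build the real config from the committed template
. ./local.env                              # defines DB_HOST, DB_PASSWORD, etc. (not committed)
sed -e "s/@DB_HOST@/$DB_HOST/" \
    -e "s/@DB_PASSWORD@/$DB_PASSWORD/" \
    app.config.template > app.config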
If you know your uncommitted changes will not collide with the merge commit that you are creating - then you can do the following...
1) Shelve the uncommitted changes
2) Do the pull and merge
3) Unshelve the uncommitted changes
Shelve effectively stores your uncommitted changes away as a diff (relative to your last commit), then rolls those files back in your local workspace. Unshelving then applies that diff again, bringing back your uncommitted changes.
Tools such as TortoiseHg have shelve built in.
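With the shelve extension enabled (bundled with recent Mercurial versions; add "shelve =" under [extensions] to turn it on), those three steps look roughly like this on the command line:

hg shelve                    # store the uncommitted changes and clean the working copy
hg pull
hg merge
hg commit -m "Merge"
hg unshelve                  # re-apply the stored changes on top of the merge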
I use an ant script to deploy my application. Before deploying, however, I preserve any uncommitted changes by committing them to another branch, but keeping them uncommitted in my current branch. I do this by shelving the changes, updating to the other branch, unshelving them (but with --keep), committing, updating back to the original branch and unshelving once more (but without --keep).
The problem with this is twofold. Firstly, shelving changes the content of my project, which in some cases messes with my IDE. Secondly, when an error occurs within the ant script after the files were shelved but before they were successfully unshelved, I am forced to unshelve them manually, which is a pain. The same goes for updating to and from the other branch.
Is there a better way of doing this?
-- EDIT --
After posting this, I managed to implement a better, albeit not ideal, solution. I check if there are local, uncommitted changes, and if there are, I shelve them, mock the current branch in hg to be the other branch (using debugsetparent), commit any changes in the working directory (this is now effectively a merge with the original branch that accepts all changes from "other", but without updating at any point), unshelve the previously shelved changes, commit them, and mock the current branch in hg to be the original branch again, leaving the unshelved changes as local, uncommitted changes.
This is better in that I don't update to the other branch and any real change that happens, as far as the IDE is concerned, is the shelving/unshelving of local changes. Previously, it would be affected by all the changes between my current branch and the other one, but now this is avoided.
Still, it's not an ideal solution. I could avoid shelving altogether, but then the commit in the other branch would contain the local changes bundled up with any changes that resulted from the difference between the two branches, and I don't like that, since it defeats the purpose of being able to quickly check the changes that were done between deploys.
To not endanger current work, it would be better to have two repositories: one for development and another for deployment.
Clone the main repository locally to another path on your disk and use it for deployment; the main repository clone should be used for development.
Whenever you need to deploy a new version, sync with the main repository, and update to the desired commit.
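A rough sketch of that setup, with paths and the server URL chosen purely for illustration:

hg clone https://server/project ~/repos/project           # main clone, used for development
hg clone https://server/project ~/repos/project-deploy    # separate clone, used only for deployment
# when it is time to deploy:
cd ~/repos/project-deploy
hg pull                                                    # sync with the main repository
hg update -r <revision-to-deploy>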
I have four devs working in four separate source folders in a mercurial repo. Why do they have to merge all the time and pollute the repo with merge changesets? It annoys them and it annoys me.
Is there a better way to do this?
Assuming the changes really don't conflict, you can use the rebase extension in lieu of merging.
First, put this in your .hgrc file:
[extensions]
rebase =
Now, instead of merging, just do hg rebase. It will "detach" your local changesets and move them to be descendants of the public tip. You can also pass various arguments to modify what gets rebased.
Again, this is not a good idea if your developers are going to encounter physical merge conflicts, or logical conflicts (e.g. Alice changed a feature in file A at the same time as Bob altered related functionality in file B). In those cases, you should probably use a real merge in order to properly represent the relevant history. hg rebase can be easily aborted if physical conflicts are encountered, but it's a good idea to check for logical conflicts by hand, since the extension cannot detect those automatically.
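With the extension enabled, a typical update cycle would then look something like:

hg pull        # fetch the new changesets from the central repository
hg rebase      # replay your local changesets on top of them
hg push

hg pull --rebase combines the first two steps into one.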
Your development team are committing little and often; this is just what you want so you don't want to change that habit for the sake of a clean line of commits.
@Kevin has described using the rebase extension, and I agree that can work fine. However, you'll also see all the work sequences of each developer squished together in a single line of commits. If you're working on a stable code base and just submitting quick single-commit fixes, then that may be fine; if you have ongoing lines of development, then you might not want to lose the continuity of a developer's commits.
Another option is to split your repository into smaller self-contained repositories.
If your developers are always working in 4 separate folders, perhaps the contents of these folders can be modularised and stored as separate Mercurial repositories. You could then have a separate master repository that brought all these smaller repositories together within the sub-repository framework.
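As an illustration only (the module names are invented), the master repository would contain a .hgsub file mapping each folder to its own repository:

moduleA = https://hg.example.com/moduleA
moduleB = https://hg.example.com/moduleB
moduleC = https://hg.example.com/moduleC
moduleD = https://hg.example.com/moduleD

After committing .hgsub, cloning the master repository pulls in the sub-repositories as well.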
Mercurial is distributed, which means that if you have a central repository, every developer also has a private repository on his/her workstation, as well as a working copy, of course.
So now let's suppose that they make a change and commit it, i.e. to their private repository. When they want to hg push, two things can happen:
either they are the first to push a new changeset to the central server, in which case no merge will be required, or
somebody else, starting from the same version, has committed and pushed before them. We can see that there is a fork here: from the same starting point the history has gone in two different directions, thus a merge is required, even if there is no conflict, because we do not want four divergent contexts on the central server. (That is, by the way, possible with Mercurial; they are called heads, and you can force the push without merging, but you still have the divergence, no magic, and it is probably not what you want, because you want to be able to check out the sum of all the contributions.)
Now, how to avoid performing merges is quite simple: you need to tell your developers to integrate others' changes before committing their own:
$ hg pull
$ hg update
$ hg commit -m"..."
$ hg push
When the commit is made against the latest central version, no merge should be required.
If they were working on the same code, running the tests after pull and update would also be needed, to ensure that what was working in isolation still works once the other developers' work has been integrated. Taking others' contributions frequently, and pushing your own changes just as frequently, is called continuous integration and ensures that integration issues are discovered quickly.
Hope it'll help.
I use the TortoiseHg tool to manage my Mercurial repository.
I have a separate .diff file containing changes to a file in my repository.
Is there any way to use that diff to update my file?
thank you
Most Linux distributions come with the patch program. You can then execute:
patch original.data difference.diff
Patch will modify the original.data file in such a way that if you then calculated the diff between the final and the original state, you would get the same difference.diff again.
.diff files are in general not stored visibly in a version control repository; the individual changes are kept internally, hidden from the user. Diffs can, however, be useful to analyse the difference between two commits. Say, for instance, somebody worked on your project and made a lot of commits; you might want to inspect what that person actually did without having to read through every single commit (since some changes can be undone in the next commit).
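Since the repository in question is Mercurial, note that hg itself can also apply such a diff to your working copy without committing it, provided the diff is in a format hg understands (for example, one produced by hg diff):

hg import --no-commit difference.diff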
In our Mercurial repo we added a really big file (and did an hg push), then deleted the big file (and did another push).
Now if someone does an hg clone will they still pull down that big file? I know it won't appear in their working directory as it was deleted, but will the file still be pulled down and stored in Mercurial internal storage?
I'd like to ensure people don't have to pull down the file. I've learned that really big files should be stored outside of Mercurial, so I deleted the file. But I was wondering if people will still be pulling down the big file - in which case I guess I will recreate the repository from scratch.
Of course it will still be in the repository.
You can always update back to older revisions, and if you update back to the revision you got when you committed the file, it'll be there in all its glory.
There are two ways to mitigate this (when you're committing, not now):
One of the big-files extensions; these essentially add big files to a secondary store and link the two, so that if you update to a revision where the file doesn't exist, and you don't already have it, it will not get pulled down, i.e. it's more of an "on-demand" style of pulling (see the sketch after this list)
If the file never changes, keep it available on the network and just create some kind of link to it instead of a full copy
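For the extension route, the bundled largefiles extension is one example. Enabling it in your .hgrc and marking a file as "large" when adding it would look roughly like this:

[extensions]
largefiles =

hg add --large huge-dataset.bin     # tracked as a largefile, fetched on demand
hg commit -m "Add dataset as a largefile"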
Right now, you have four options:
Strip away the changeset that added the file, and all the changesets that came after it. You can do that using the Mercurial Queues extension (see the commands sketched after this list). Note that you need to do this stripping in all clones. If just one of your users pushes a clone that still has that file in its history back to the central repository, the changesets are back.
Rebuild the repository from scratch manually
Rebuild it using the hg convert command and some filtering; the --filemap option can be used for this
Leave it as is. How big is it; will it really be that much of a problem?
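For the strip and convert options, the commands would be along these lines (the revision number and file path are placeholders):

hg strip -r 1234                          # with the mq extension enabled: removes that changeset and everything after it

echo "exclude path/to/bigfile" > filemap.txt
hg convert --filemap filemap.txt oldrepo newrepo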
Note that rebuilding the repository, either manually or through hg convert will invalidate all clones. Anyone trying to push to your new central clone from an old clone will get a message about unrelated repositories. If any of your users are stupi^H^H^H^H^Hnot smart enough to realize that forcing the push is a bad idea, then you will have problems with this approach.
Yes, the file is still in the history. If you want to delete it completely, you need to use Mercurial Queues — see Editing History on Mercurial wiki.
Just keep in mind this breaks clones as revision IDs change.
I have a local Mercurial repository with some site-specific changes in it. What I would like to do is mark a couple of files as un-committable so that they aren't automatically committed when I do an hg commit with no arguments.
Right now, I'm doing complicated things with mq and guards to achieve this, pushing and popping and selecting guards to prevent my changes (which are checked into an mq patch) from getting committed.
Is there an easier way to do this? I'm sick of reading the help for all the mq commands every time I want to commit a change that doesn't include my site-specific changes.
I would put those files in .hgignore and commit a "sample" form of those files so that they could be easily recreated for a new site when someone checks out your code.
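Concretely, with invented file names, that might look like:

echo "settings.local.ini" >> .hgignore
cp settings.local.ini settings.sample.ini      # strip out anything site-specific before committing
hg forget settings.local.ini                   # stop tracking the real file; it stays on disk
hg add .hgignore settings.sample.ini
hg commit -m "Ignore local settings, commit a sample instead"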
I know that bazaar has shelve and unshelve commands to push and pop changes you don't want included in a commit. I think it's pretty likely that Mercurial has similar commands.
I'm curious why you can't have another repo checked out with those specific changes, and then transplant/pull/patch to it when you need to update stuff. Then you'd be primarily working from the original repo, and it's not like you'd be wasting space since hg uses links. I commonly have two+ versions checked out so I can work on a number of updates then merge them together.