The project I'm working on uses a lot of branches. I know there have been some changes to a particular file that I need on another branch, but I don't know which branch they were made on.
How can I find all changes done to a specific file, searching across all [named] branches?
I'm not sure if I'm misunderstanding the question, but hg log <filename> shows all changesets that include changes to a file, no matter what (named or unnamed) branch they're on.
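For example (the file path here is just a placeholder):

    hg log path/to/file.txt            # every changeset touching the file, on any branch
    hg log --follow path/to/file.txt   # same, but also following renames/copies

If you only want to know which branches are involved, a template helps:

    hg log --template '{branch}\n' path/to/file.txt | sort -u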
I've somehow marked a bunch of files as being "copy/renamed" instead of just marking them as new files.
I've also published these changes to our central repository and cannot strip them out (we've already got some changes on top, and I don't have the permissions/ability to fiddle with the central repository anyway).
Context:
I've actually done it to a whole bunch of files, but here's an example of what I did to one: I've copied a file called "Labels.properties", to a new file called "Labels_ja.properties" (the new file contains a subset of the original properties, and eventually all the values will be translated to a different language). The "Labels_ja.properties" file has been marked as a copy/rename of "Labels.properties" (the only way I've been able to see exactly what's happened is by looking with SourceTree, which shows "File copied/renamed from Labels.properties").
Our environment is a sort of central repository with automated "pull request" style merging tools built on top, so solutions involving all our developers magically knowing exactly how to drive Hg in order to resolve these conflicts aren't going to work - it's the scripts that are getting the merge conflicts.
These copy/renames are causing a lot of hassles: when people touch the original versions of the property files (they don't even know about the copied files yet) - it looks like Hg is trying to merge those changes onto the copies, but because the files are very different, those merges are failing with conflicts.
Problem:
What can I do to sort out all these merge conflicts that our automated merge scripts are getting?
Ideally, I'd like to go back in time and just mark all these files "new" - but there's no going back now that the changesets have been published.
Can I just make a big backout commit, then re-add the files (making sure that they are marked "new" and not "copy/renamed")?
I cannot think of a good solution. Even if you delete the files, Mercurial will still complain repeatedly when they get changed and merged. The simplest solution I can think of is moving the bad files into some "attic" directory where they don't hurt anyone and never touching them again. Then add the real files where they belong.
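A rough sketch of that attic approach, reusing the Labels_ja.properties example from the question (note that hg mv records a rename too, but that should be harmless here because the attic copy is never touched again):

    hg mv Labels_ja.properties attic/Labels_ja.properties
    hg commit -m "park badly-recorded copy in the attic"
    # recreate the file as a genuinely new one, with no copy metadata
    cp attic/Labels_ja.properties Labels_ja.properties
    hg add Labels_ja.properties
    hg commit -m "re-add Labels_ja.properties as a plain new file"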
If I understand correctly, the problem is in fact that the developers are still based on a revision that does not have your move.
When they change the original file, an 'hg merge' will then smash together the changes from the developer and the changes to the new file in that new file.
I see a few solutions for this:
You can tell the developers to rebase their changes on top of the new changes. This will actually not work, since rebase will make the same mistake as merge.
The developers can turn their changes into patches and apply their patches on top of your changes. Their patches will refer to the original files, not the changed files, so this should work fine for the tools that merge later on.
The above flow can be done in a more sophisticated way using Mercurial Queues: import the existing developer commits as patches, then qpop all of these, update to your newer revision and qpush all of the revisions again (see the sketch after this list).
Mark the head with your changes as 'closed' using '--close-branch'. However, if your tools are not equipped to handle this correctly (by ignoring the closed branch), this may cause issues. Branches are reopened when a new commit is done on top of the branch. So if you merge developer changes with the closed branch, it will open again.
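A rough sketch of that Mercurial Queues flow, with placeholder revision numbers and assuming the mq extension is enabled in .hgrc:

    hg qimport -r <first-dev-rev>:<last-dev-rev>   # turn the developer commits into mq patches
    hg qpop -a                                     # unapply all of them
    hg update <rev-with-your-changes>              # move to the new base
    hg qpush -a                                    # re-apply the patches on top, fixing conflicts as they come
    hg qfinish -a                                  # turn the applied patches back into regular commits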
We have moves and renames of files in our published mercurial history that were not properly recorded, so that they appear in the history as unrelated deletions and adds.
Is there any way to tell the repository about the connections so that --follow commands can work again?
(For non-pushed changes, here is a question discussing how to get mercurial to properly record moves/renames before you commit, as well as a useful tip here.)
One solution that works but is a bit brutal: you can log in remotely on your central Mercurial server, fix the renames there as a local change and then ask everyone to clone the repo again.
This works since all Mercurial repos are equal. You just think of one as "central" but in fact it's a repo like any other. So if you have access to that, you can rewrite history there. The drawback is of course that every developer will notice, so they'll have to export any non-pushed changes they made, clone the repo again and then import the patch.
[EDIT] A possible workaround would be to create a new branch from just before the bad rename, rename the files properly, and then cherry-pick all the changes after that point into the new branch (a sketch follows below).
I'm not sure if it's a good idea to merge this branch into your original branch. If you did, then Mercurial would see two kinds of file renames in the past and I'm not sure which one it would follow.
So before you merge, I suggest that you create a small test/demo repo where you reproduce the situation and then try it out.
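A sketch of that workaround, assuming hg 2.0+ so that hg graft is available for the cherry-picking, and using placeholder revision numbers plus the file names from the earlier example:

    hg update -r <rev-before-the-bad-rename>
    hg copy Labels.properties Labels_ja.properties    # recorded properly this time
    hg commit -m "redo the copy with proper tracking"
    hg graft -r "<first-rev-after-the-bad-rename>::<branch-head>"   # cherry-pick the rest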
You can start a new head from before the rename, do the rename properly, then merge it into the head from the upstream server. It will give you a conflict. When this happens, revert the offending files to the revision from your new head (with the good renames; hg revert -r <rev> <files...>).
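Spelled out with placeholders, that sequence looks roughly like:

    hg update -r <rev-before-the-bad-rename>
    hg mv <old-file> <new-file>             # the rename, recorded properly this time
    hg commit -m "redo the rename with history intact"
    hg merge <upstream-head>                # expect conflicts in the affected files
    hg revert -r <new-head-rev> <files...>  # take the versions with the good renames
    hg resolve -m <files...>                # mark them resolved
    hg commit -m "merge upstream, keeping the proper renames"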
In our Mercurial repo we added a really big file (and did an hg push), then deleted the big file (and did another push).
Now if someone does an hg clone will they still pull down that big file? I know it won't appear in their working directory as it was deleted, but will the file still be pulled down and stored in Mercurial internal storage?
I'd like to ensure people don't have to pull down the file. I've learned that really big files should be stored outside of Mercurial, so I deleted the file. But I was wondering if people will still be pulling down the big file - in which case I guess I will recreate the repository from scratch.
Of course it will still be in the repository.
You can always update back to older revisions, and if you update back to the revision you got when you committed the file, it'll be there in all its glory.
There are two ways to mitigate this (when you're committing, not now):
One of the big-files extensions: these essentially store big files in a secondary repository and link the two, so that if you update to a revision where the file doesn't exist, and you don't already have it, it won't be fetched; i.e. it's more of an "on-demand" style of pulling (see the sketch after this list).
If the file never changes, keep it available on the network and just create some kind of link to it instead of a full copy
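For the first option, the largefiles extension that ships with Mercurial (one of several such extensions) works roughly like this; the file name is made up:

    # in .hgrc:
    [extensions]
    largefiles =

    # commit the big file as a largefile; it is then fetched on demand
    hg add --large huge-asset.bin
    hg commit -m "track huge-asset.bin via largefiles"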
Right now, you have four options:
Strip away the changeset that added the file, and all the changesets that came after it. You can do that using the Mercurial Queues extension (see the sketch after this list). Note that you need to do this stripping in all clones: if just one of your users pushes a repository that still has that file in its history back to the central clone, you have the changesets back.
Rebuild the repository from scratch manually
Rebuild the repository using the hg convert command and some filtering; the --filemap option can be used for this (see the sketch after this list).
Leave it as is. How big is it? Will it really be much of a problem?
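Sketches of the first and third options, with placeholder revisions/paths (hg strip ships with the mq extension, so mq needs to be enabled in .hgrc):

    # option 1: strip the changeset that added the file (must be repeated in every clone)
    hg strip <rev-that-added-the-big-file>

    # option 3: rebuild with hg convert, filtering the file out via a filemap
    echo "exclude path/to/bigfile.bin" > filemap.txt
    hg convert --filemap filemap.txt old-repo new-repo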
Note that rebuilding the repository, either manually or through hg convert will invalidate all clones. Anyone trying to push to your new central clone from an old clone will get a message about unrelated repositories. If any of your users are stupi^H^H^H^H^Hnot smart enough to realize that forcing the push is a bad idea, then you will have problems with this approach.
Yes, the file is still in the history. If you want to delete it completely, you need to use Mercurial Queues; see Editing History on the Mercurial wiki.
Just keep in mind this breaks clones as revision IDs change.
Possible Duplicate:
Mercurial: copying ONE file and its history to another repository
I have several repositories in my local machine.
One is my main code, another is an assortment of useful code/tools.
These are two fundamentally different repos. It might make sense to set up a new repo and pull these two in as sub-repos, but I want to wait until the Mercurial devs mark sub-repos as non-experimental before I do that.
One of the useful code files has become so useful, I want to put it into my main code area...but I want to maintain its history. This will, of course, result in some variant of a fork, but that's acceptable. (best case would be being able to push-pull it back and forth and keep updating its history).
I'd just use the subrepo feature that came online in 1.3. It might change slightly, but you won't be left high and dry, backwards-compatibility-wise.
If you can't bring yourself to do so, then what you need to do is:
use hg convert with a filemap that deletes all files except the one you want and convert from the repo with the single useful file to a new repo containing only that file and all its history
then hg pull from the new single-file-full-history repo into the target repo
hg merge in the target repo and you'll have that file with all its history (a sketch of these steps follows)
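A sketch of those three steps, with made-up names (tools-repo, useful.py, main-repo):

    # 1. extract the one file plus its history into a fresh repo
    echo "include useful.py" > filemap.txt
    hg convert --filemap filemap.txt tools-repo useful-only-repo

    # 2. pull it into the target; -f is needed because the repos are unrelated
    cd main-repo
    hg pull -f ../useful-only-repo

    # 3. merge the new head and commit
    hg merge
    hg commit -m "import useful.py with its full history"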
The other option would be to hg export the entire tools repo, use grepdiff (part of patchutils) to limit the patches to only one file, and then import into the target repo, but that's crazy.
The short answer is you can't copy a file and its history simply, as stated in this thread:
https://www.mercurial-scm.org/pipermail/mercurial/2009-April/025105.html
I'm relatively new to DVCS, and you really have to think of each repo as a self-contained package. It's not like svn or p4, where you can hang different projects off the root, configure it how you like, and do partial repo checkouts. (That said, I really like the flexibility of being able to clone repos quickly to try things out, and being able to commit on a local machine.)
I'm just looking at a similar problem. I've branched a repo to make changes and I only want one file out of one changeset. And it is nice to have the history.
You could look at:
hg cat
This would probably involve writing a script to transfer history, i.e. commit N changesets in the target repo using the hg cat results from the source. I wonder if there is an extension to do this?
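A hypothetical sketch of such a script (paths and file name are made up; it preserves the commit messages but not authors or dates):

    # replay every revision of useful.py from src-repo into target-repo
    cd src-repo
    for rev in $(hg log --template '{rev}\n' useful.py | sort -n); do
        hg cat -r "$rev" useful.py > ../target-repo/useful.py
        hg log -r "$rev" --template '{desc}' > /tmp/msg.txt
        (cd ../target-repo && hg commit -A -l /tmp/msg.txt useful.py)
    done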
You could get the log of the file you want to copy and paste that into a commit comment. It's not in the metadata, but you do have a record and all the hashes etc.
Maybe hg export can also help you.
I have a local mercurial repository with some site-specific changes in it. What I would like to do is set a couple files to be un-commitable so that they aren't automatically committed when I do an hg commit with no arguments.
Right now, I'm doing complicated things with mq and guards to achieve this, pushing and popping and selecting guards to prevent my changes (which are checked into an mq patch) from getting committed.
Is there an easier way to do this? I'm sick of reading the help for all the mq commands every time I want to commit a change that doesn't include my site-specific changes.
I would put those files in .hgignore and commit a "sample" form of those files so that they could be easily recreated for a new site when someone checks out your code.
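For example, with a hypothetical conf/site.properties (note that .hgignore has no effect on files that are already tracked, so the real file has to be forgotten first):

    hg forget conf/site.properties                     # stop tracking it; the file stays on disk
    printf 'syntax: glob\nconf/site.properties\n' >> .hgignore
    cp conf/site.properties conf/site.properties.sample
    hg add .hgignore conf/site.properties.sample
    hg commit -m "ignore the real site config, ship a sample instead"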
I know that bazaar has shelve and unshelve commands to push and pop changes you don't want included in a commit. I think it's pretty likely that Mercurial has similar commands.
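It does: the shelve extension has shipped with Mercurial since 2.8 (on versions where it isn't enabled by default, turn it on with "shelve =" under [extensions] in .hgrc). A minimal sketch, with a made-up file name:

    hg shelve --name site-config conf/site.properties   # set the site-specific edits aside
    hg commit -m "the change I actually want to commit"
    hg unshelve site-config                              # bring the site edits back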
I'm curious why you can't have another repo checked out with those specific changes, and then transplant/pull/patch to it when you need to update stuff. Then you'd be primarily working from the original repo, and it's not like you'd be wasting space since hg uses links. I commonly have two+ versions checked out so I can work on a number of updates then merge them together.