I just committed changes to a Mercurial repository (VS 2010 C#/WPF, if that matters). I had to merge with the tip to push my changes without creating a new tip, so I shelved the rest of my changes and merged successfully. As I suspected, the merge wiped out the changes I still had to commit, so I went to unshelve my changes and I'm getting "unshelve abort", then "failed at hunk 1", etc. for a bunch of files. The problem is they are all <filename>.g.cs, <filename>.runtimecache, etc., i.e. runtime stuff. There are only about four files that I need to recover from my shelf file.
Do I have any hopes of recovering those changes from my shelf file? There's a solid half day's work there and I'd hate to lose it.
I'm using TortoiseHg. Thanks!
If you are using TortoiseHg 2, the shelve tool should let you see the things you shelved and unshelve only the files you're interested in.
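If the GUI route doesn't pan out, a TortoiseHg shelf file is just a plain unified diff, so as a fallback you may be able to apply only the files you care about by hand. In the sketch below, path/to/shelf-file stands for wherever TortoiseHg stored your shelf, and trimmed.patch is a copy of it cut down by hand to just the four source files you need; treat this as a rough recipe rather than a guarantee:

patch -p1 --forward < path/to/shelf-file   # hunks for the generated files will fail and land in .rej files
hg import --no-commit trimmed.patch        # alternatively, apply a hand-trimmed copy with Mercurial

Either way, check the result with hg diff before committing.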
I've somehow marked a bunch of files as being "copy/renamed" instead of just marking them as new files.
I've also published these changes to our central repository and cannot strip them out (we've already got some changes on top, and I don't have the permissions/ability to fiddle with the central repository anyway).
Context:
I've actually done it to a whole bunch of files, but here's an example of what I did to one: I've copied a file called "Labels.properties", to a new file called "Labels_ja.properties" (the new file contains a subset of the original properties, and eventually all the values will be translated to a different language). The "Labels_ja.properties" file has been marked as a copy/rename of "Labels.properties" (the only way I've been able to see exactly what's happened is by looking with SourceTree, which shows "File copied/renamed from Labels.properties").
Our environment is a sort of central repository with automated "pull request" style merging tools built on top, so solutions involving all our developers magically knowing exactly how to drive Hg in order to resolve these conflicts aren't going to work - it's the scripts that are getting the merge conflicts.
These copy/renames are causing a lot of hassles: when people touch the original versions of the property files (they don't even know about the copied files yet) - it looks like Hg is trying to merge those changes onto the copies, but because the files are very different, those merges are failing with conflicts.
Problem:
What can I do to sort out all these merge conflicts that our automated merge scripts are getting?
Ideally, I'd like to go back in time and just mark all these files "new" - but there's no going back now that the changesets have been published.
Can I just make a big backout commit, then re-add the files (making sure that they are marked "new" and not "copy/renamed")?
I cannot think of a good solution. Even if you delete the files, Mercurial will still complain repeatedly when they get changed and merged. The simplest solution I can think of is moving the bad files into some "attic" directory where they don't hurt anyone and never touching them again. Then add the real files where they belong.
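A rough sketch of that, using the Labels_ja.properties example from the question (the attic path and commit messages are purely illustrative):

mkdir attic
hg rename Labels_ja.properties attic/Labels_ja.properties   # park the mis-recorded file out of the way
hg commit -m "Move mis-recorded copy into attic"
cp attic/Labels_ja.properties Labels_ja.properties          # recreate it at the real location
hg add Labels_ja.properties                                 # a plain add records no copy/rename metadata
hg commit -m "Add Labels_ja.properties as a genuinely new file"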
If I understand correctly, the problem is in fact that the developers are still based on a revision that does not have your move.
When a developer changes the original file, an 'hg merge' will then merge the developer's changes into the new file as well, because of the recorded copy/rename.
I see a few solutions for this:
You can tell the developers to rebase their changes on top of the new changes. This will actually not work, since rebase will make the same mistake as merge.
The developers can turn their changes into patches and apply their patches on top of your changes. Their patches will refer to the original files, not the changed files, so this should work fine for the tools that merge later on.
The above flow can be done in a more sophisticated way using Mercurial Queues: import the existing developer commits as patches, then qpop all of these, update to your newer revision and qpush all of the revisions again (see the sketch after this list).
Mark the head with your changes as 'closed' using '--close-branch'. However, if your tools are not equipped to handle this correctly (by ignoring the closed branch), this may cause issues. Branches are reopened when a new commit is done on top of the branch. So if you merge developer changes with the closed branch, it will open again.
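For the Mercurial Queues variant mentioned above, the flow on a developer's clone would look roughly like this (the mq extension has to be enabled in their hgrc, and the revision range is a placeholder for their local, unpushed commits):

hg qimport -r 1234:1240   # turn the local commits into MQ patches
hg qpop -a                # unapply all of them
hg pull                   # fetch the changesets that contain the recorded copies/renames
hg update tip             # update to them
hg qpush -a               # re-apply the patches on top; they refer to the original files
hg qfinish -a             # turn the applied patches back into regular changesets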
We have moves and renames of files in our published mercurial history that were not properly recorded, so that they appear in the history as unrelated deletions and adds.
Is there any way to tell the repository about the connections so that --follow commands can work again?
(For non-pushed changes, here is a question discussing how to get mercurial to properly record moves/renames before you commit, as well as a useful tip here.)
One solution that works but is a bit brutal: you can log in remotely on your central Mercurial server, fix the renames there as a local change and then ask everyone to clone the repo again.
This works since all Mercurial repos are equal. You just think of one as "central" but in fact it's a repo like any other. So if you have access to that, you can rewrite history there. The drawback is of course that every developer will notice, so they'll have to export any non-pushed changes they made, clone the repo again and then import the patch.
[EDIT] A possible workaround would be to create a new branch just before you did the bad rename, rename the files properly and then cherry pick all the changes after that into the new branch.
I'm not sure if it's a good idea to merge this branch into your original branch. If you did, then Mercurial would see two kinds of file renames in the past and I'm not sure which one it would follow.
So before you merge, I suggest that you create a small test/demo repo where you reproduce the situation and then try it out.
You can start a new head from before the rename, do the rename properly, then merge it into the head from the upstream server. It will give you a conflict. When this happens, revert the offending files to the revision from your new head (with the good renames; hg revert -r <rev> <files...>).
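Roughly, with placeholder revision identifiers and file names (a sketch only, where UPSTREAM_HEAD is the published head and REV_BEFORE_BAD is the parent of the changeset with the unrecorded rename):

hg update -r REV_BEFORE_BAD
hg rename OldName.properties NewName.properties   # redo the rename so the copy is recorded this time
hg commit -m "Redo rename with copy tracking"
hg merge UPSTREAM_HEAD                            # expect conflicts in the affected files
hg revert -r . NewName.properties                 # take the version from your new head
hg resolve -m NewName.properties
hg commit -m "Merge properly recorded rename"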
I recently let the IDE do a trivial text replacement across the entire project, and recognized the mistake only after committing other changes to Mercurial. I panicked and (knowing very little about Mercurial; now, after having read the definitive guide, I'm starting to get to know it better) tried every command that seemed to make my mistake "go away". It goes without saying that this was a move I am not proud of.
Among the things I remember trying were hg update tip and hg rollback. Since I'm using Mercurial on my local machine only and do not pull or push from any other repository, I think these commands did not cause my main problem: there are a lot of files missing now - exactly the files I let the IDE make the wrong replacements in.
What bothers me is that I have done hg status --change REV to find all files changed in a revision, and the deleted files do not show up there.
PHPStorm has a local history, which shows which files are now missing. That (only that?) enables me to hunt down the individual files and revert to their last known revision:
hg log -l 1 path/to/foo.txt
hg revert -r <my revision> path/to/foo.txt
... but that is way too time-consuming for the hundreds of files that got changed. Please tell me there's a better way. The PHPStorm history is nice and can restore the files as well, but it will restore them to the point where they had already been erroneously changed.
Your help is greatly appreciated, and I vow to learn & appreciate Mercurial as more than just a context menu item starting today.
If you are willing to lose the changes that were committed with or since the error, you may be able to go back to the revision just before the error, and start working from there. Use hg log to find out which revision you need, and hg update --rev XX to go to that revision. If you're not sure which revision you want, update to various revisions and take a look.
Once you have updated to the correct revision, you can just continue working from there. The next time you commit, you will automatically create a new head (an anonymous branch) on which you'll be working, which will not have the error in it. If you want, you can go back to the original branch and close it.
You might even be able to get back any correct commits that happened after you committed the error up to the revision you rolled back to. On the old branch, identify the revision after the error, and do a diff between that revision and the tip of that branch. Then, see if you can apply the diff as a patch on your new branch. You will still lose any changes that were in the same commit as the error, though.
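Concretely, with placeholder revision identifiers (GOOD is the revision just before the bad replacement, BAD the erroneous commit, OLD_TIP the tip of the old branch), the whole recovery could look something like this:

hg log                                        # identify GOOD, BAD and OLD_TIP
hg update --rev GOOD                          # continue working from here; your next commit starts a new head
hg diff -r BAD -r OLD_TIP > later-work.patch  # optional: capture the good work done after the error
hg import --no-commit later-work.patch        # apply it to the working copy, review, then commit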
I'm looking for a simple way to pull in additional commits after rebasing or a good reason to tell someone not to rebase.
Essentially we have a project, crons. I make changes to this frequently, and the maintainer of the project pulls in changes when I request it and rebases every time.
This is usually okay, but it can lead to problems in two scenarios:
Releasing from two branches simultaneously
Having to release an additional commit afterwards.
For example, I commit revision 1000. Maintainer pulls and rebases to create revision 1000', but at around the same time I realize a horrible mistake and create revision 1001 (child of 1000). Since 1000 doesn't exist in the target branch, this creates an unusable merge, and the maintainer usually laughs at me and tells me to try again (which requires me getting a fresh checkout of the main branch at 1000' and creating and importing a patch manually from the other checkout). I'm sure you can see how the same problem could occur with me trying to release from two separate branches simultaneously as well.
Anyway, once the main branch has 1000', is there anything that can be done to pull in 1001 without having to merge the same changes again? Or does rebasing ruin this? Regardless is there anything I can say to get Maintainer to stop rebasing? Is he using it incorrectly?
Tell your maintainer to stop being a jacka**.
Rebasing is something that should only be done by you, the one that created the changesets you want to rebase, and not done to changesets that are:
already shared with someone else
gotten from someone else
Your maintainer probably wants a non-distributed version control system, like Subversion, where changesets follows a straight line, instead of the branchy nature of a DVCS. In that respect, the choice of Mercurial is wrong, or the usage of Mercurial is wrong.
Also note that rebasing is one way of changing history, and since Mercurial discourages changing history, rebasing is only available as an extension and is not enabled "out of the box" in a vanilla Mercurial configuration.
So to answer your question: No, since your maintainer insists on breaking the nature of a DVCS, the tools will fight against you (and him), and you're going to have a hard time getting the tools to cooperate with you.
Tell your maintainer to embrace how a DVCS really works. Now, he may still insist on not accepting new branches or heads in his repository, and insist on you pulling and merging before pushing back a single head to his repository, but that's OK.
Rebasing shared changesets, however, is not.
If you really want to use rebasing, the correct way to do it is like this:
You pull the latest changes from some source repository
You commit a lot of changesets locally, fixing bugs, adding new features, whatnot
You then try to push, and get told that this will create new heads in the target repository. This tells you that there are new changesets in the target repository that you did not get when you last pulled, because they were added after that
Instead, you pull, this will add a new head in your local repository. Now you have the head that was created from your new changesets, and the head that was retrieved from the source repository created by others.
You then rebase your changesets on top of the ones you got from the source repository, in essence moving your changesets in the history so that it appears you started your work from the latest changeset in the current source repository
You then attempt a new push, which now succeeds (a command sketch follows this list)
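In commands, that cycle looks roughly like this (the rebase extension must be enabled in your hgrc):

hg push                  # rejected because it would create a new remote head
hg pull                  # fetch the remote head you were missing
hg rebase -d tip         # move your local changesets on top of the freshly pulled head
hg push                  # now a single linear head goes back

With the rebase extension enabled, the pull and the rebase can also be combined into hg pull --rebase.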
The end result is that the target repository, and your own repository, will have a more linear changeset history, instead of a branch and then a merge.
However, since multiple branches is perfectly fine in a DVCS, you don't have to go through all of this. You can just merge, and continue working. This is how a DVCS is supposed to work. Rebasing is just an extra tool you can use if you really want to.
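The merge-based equivalent, for comparison:

hg pull                              # brings in the remote head
hg merge                             # merge it with your local head
hg commit -m "Merge with upstream"
hg push                              # pushes a single (merged) head back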
In our Mercurial repo we added a really big file (and did an hg push), then deleted the big file (and did another push).
Now if someone does an hg clone will they still pull down that big file? I know it won't appear in their working directory as it was deleted, but will the file still be pulled down and stored in Mercurial internal storage?
I'd like to ensure people don't have to pull down the file. I've learned that really big files should be stored outside of Mercurial, so I deleted the file. But I was wondering if people will still be pulling down the big file - in which case I guess I will recreate the repository from scratch.
Of course it will still be in the repository.
You can always update back to older revisions, and if you update back to the revision you got when you committed the file, it'll be there in all its glory.
There are two ways to mitigate this (when you're committing, not now):
One of the big-files extensions; these essentially add big files to a secondary repository and link the two, so that if you update to a revision where the file doesn't exist, and you don't already have it, it will not get updated, i.e. it's more of an "on-demand" style of pulling (see the sketch after this list)
If the file never changes, keep it available on the network and just create some kind of link to it instead of a full copy
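For example, the bundled largefiles extension (one of the big-files extensions mentioned above) works roughly like this; the file name is just an example:

# in .hg/hgrc or your user hgrc:
[extensions]
largefiles =

hg add --large huge-dataset.bin             # only a small pointer goes into normal history
hg commit -m "Add dataset as a largefile"   # the content is fetched on demand when a revision needs it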
Right now, you have four options:
Strip away the changeset that added the file, and all the changesets that came after it. You can do that using the Mercurial Queues extension (see the sketch after this list). Note that you need to do this stripping in all clones. If just one of your users pushes back a repository that has that file in its history to the central clone, you have the changesets back.
Rebuild the repository from scratch manually
Use the hg convert command with some filtering; the --filemap option can be used for this (see the sketch after this list)
Leave it as is. How big is it - will it really be much of a problem?
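Hedged sketches of options 1 and 3 (revision numbers, paths and the filemap contents are placeholders):

hg strip -r REV_THAT_ADDED_FILE    # option 1: needs the mq extension; removes that changeset and its descendants
                                   # every clone has to do the same, or the changesets come straight back

# option 3: rebuild the repository with hg convert, filtering the file out.
# filemap.txt contains the single line:  exclude path/to/bigfile.bin
hg convert --filemap filemap.txt old-repo new-repo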
Note that rebuilding the repository, either manually or through hg convert will invalidate all clones. Anyone trying to push to your new central clone from an old clone will get a message about unrelated repositories. If any of your users are stupi^H^H^H^H^Hnot smart enough to realize that forcing the push is a bad idea, then you will have problems with this approach.
Yes, the file is still in the history. If you want to delete it completely, you need to use Mercurial Queues — see Editing History on Mercurial wiki.
Just keep in mind this breaks clones as revision IDs change.