I have come across a problem that I "think" can only be resolved using patches.
I cloned a project from our main repository and made quite a few changes to it (updates, deletions of files and directories, and additions). These changes are not even committed. The problem is that the project in the main repository has been deleted and recreated as a new project (the name is the same, and all the directory structures are the same as before). I cloned that project again from the main repository and would like to transfer all my uncommitted changes to it.
I am still exploring hg patches to resolve this. It would be helpful if someone could confirm that creating and applying a patch IS the right approach; any resources explaining the process would be of great help.
You're correct — a patch is what you need to transfer the information from one repository to another (unrelated) repository. This will work since the files are the same, as you note.
So, to transfer your uncommitted changes from your old clone, you do:
$ hg diff -g > uncommitted.patch
$ cd ../new
$ hg import --no-commit ../old/uncommitted.patch
That will restore the information saved in the patch. This includes information about files that are added or renamed in the old clone.
The following steps can be performed with a standard Mercurial install:
Commit the changes in your local repository. Note the revision number.
Use "hg export -r REV >patch.diff" to create a patch.
Clone the new repository.
Use "hg import patch.diff" to apply the patch to the new repository.
Example
C:\>hg init example
C:\>cd example
C:\example>echo >file1
C:\example>hg ci -Am file1
adding file1
C:\example>hg clone . ..\example2
updating to branch default
1 files updated, 0 files merged, 0 files removed, 0 files unresolved
C:\example>rd /s/q .hg
C:\example>hg init
C:\example>hg ci -Am same-but-different
adding file1
At this point example and example2 have identical contents, but the repositories are unrelated to each other due to deleting and reinitializing the .hg folder.
Now make some changes and commit them in one of the repositories, then export them as a patch:
C:\example>echo >>file1
C:\example>echo >file2
C:\example>hg ci -Am changes
adding file2
C:\example>hg export -r 1 >patch.diff
The output below shows that the other repository can't pull the changes because of the reinitialization. It can, however, apply the patch successfully:
C:\example>cd ..\example2
C:\example2>hg pull
pulling from c:\example
searching for changes
abort: repository is unrelated
C:\example2>hg import ..\example\patch.diff
applying ..\example\patch.diff
I would first make copies of everything so you have a way of backtracking.
Then, in the working copy that contains your changes, delete the .hg directory and copy in the .hg directory from the new repository. This basically transfers all of the changed files into the new repo without the need to delete any files and directories.
You will still need to tell the repository whether to remove any files it now reports as missing, and you will have to handle renames manually. If this is only a small number of operations, it's easier than trying to use the patch method.
Once this is done, commit your changes and push, if necessary.
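A rough sketch of that sequence, assuming the old working copy lives in old/ and the fresh clone in new/ (both paths are hypothetical):
$ cp -r old old-backup        # keep a way to backtrack
$ rm -rf old/.hg              # drop the metadata of the deleted repository
$ cp -r new/.hg old/          # graft in the metadata of the new clone
$ cd old
$ hg status                   # review what Mercurial now sees as modified, added or missing
$ hg addremove                # optionally schedule adds/removes; record renames by hand with hg rename --after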
It seems like what you want is patch queues (mq), since you have uncommitted changes and you want to pull from the new repo before committing them.
$ hg qinit -c # initialize mq for your repo containing the uncommitted changes
$ hg qnew name_of_patch # create patch that contains your uncommitted changes
$ hg qpop # resets your working dir back to the parent changeset
No worries though, your changes are safe and sound in .hg/patches/name_of_patch. To see for yourself:
$ cat .hg/patches/name_of_patch
now pull in the new repo
$ hg pull -u http://location.of.new/repo # pull in changes from the new repo and update the working dir
$ hg qpush # apply your uncommitted changes to new repo
If you are lucky you will have no merge conflicts and you can go ahead and commit the patch by....
$ hg qfinish -a # change all applied patches to changeset
And then if you want....
$ hg push http://location.of.new/repo
If the repos are unrelated, just init a patch repo on your new repo, then manually copy the patch in and add it to the .hg/patches/series file.
Assuming the patch was created as above, clone the new repo:
$ hg clone http://location.of.new/repo ./new_repo
Initialize the patch repo:
$ cd ./new_repo && hg qinit -c
Copy the patch over:
$ cp ../old_repo/.hg/patches/name_of_patch .hg/patches/
Edit the series file using an editor of some sort:
$ your_favorite_editor .hg/patches/series
name_of_patch # <---put this in the series file
Apply your patch to the new repo:
$ hg qpush
If there are no merge conflicts and you are convinced it works:
$ hg qfinish -a
If the layout is the same, you can just copy all the files over (excluding .hg) and then use hg addremove.
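For instance (the paths are hypothetical, and rsync is just one convenient way to copy everything except .hg):
$ rsync -a --exclude='.hg' old-working-copy/ new-clone/
$ cd new-clone
$ hg addremove                # schedule new files for add and missing files for removal
$ hg status                   # review the result before committing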
Try looking into the MQ extension; it does exactly this, if I recall correctly. I've never had a use for it myself, though, so I can't say for sure.
If the old repository was simply moved/cloned to a new URL then you could simply change the remote repository you talk to the new one.
If, however, it was recreated from the ground up (even with the same structure) then I don't believe Mercurial has any built-in functionality to help you here. Mercurial patches reference specific changesets which won't exist in your new repository.
You could use a merge tool to perform the diff and bring across any changes you made.
Edited to answer the question in the comment:
When you clone the repository you are taking a complete snapshot of the entire change history - along with the associated change-set IDs, etc.
Mercurial tracks changes by change-sets to the repository, rather than at the file level like Subversion.
If you clone, then you can easily push/merge into another repository that was also cloned from the same source.
If you recreated the repository then the changeset IDs won't match, and the histories can't be merged in Hg.
The only option in this scenario would be to use a Merge tool which will let you see mismatches in files/folder structure.
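For example, a plain recursive directory diff between the new clone and your old working copy will show the mismatches (paths are hypothetical; any graphical diff/merge tool works just as well):
$ diff -ru --exclude=.hg new-clone/ old-working-copy/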
Also worth pointing out: http://hginit.com/, because it explains (indirectly) some of this.
Related
I have read-only permission to an hg repo and am trying to develop and test changes to it locally. The problem is that I am in the middle of changing dev machines and am caught in a weird/awkward state across the two machines.
On my old machine I made lots of changes to the repo, locally. I just cloned the repo on my new machine, but obviously that doesn't contain the changes from my old machine. I need a way to create a patch/diff from my local working copy on my old machine, and then apply it to my local working copy on my new machine. The problem is that I already committed (hg commit -m "Blah") the changes on my old machine to the local (distributed) repo on it.
What set of specific commands can I use to create a patch/diff of my old machine and then apply it to the repo on my new one?
Update
I committed all changes on my old machine and then ran hg serve, exposing http://mymachine.example.com:8000.
On my new machine, where I had made some different changes (locally) than the changes from my old machine, I ran hg pull http://mymachine.example.com:8000 and got:
myuser@mymachine:~/sandbox/eclipse/workspace/myapp$ hg pull http://mymachine.example.com:8000
pulling from http://mymachine.example.com:8000/
searching for changes
adding changesets
adding manifests
adding file changes
added 2 changesets with 16 changes to 10 files (+1 heads)
(run 'hg heads' to see heads, 'hg merge' to merge)
So I run hg merge:
myuser@mymachine:~/sandbox/eclipse/workspace/myapp$ hg merge
abort: uncommitted changes
(use 'hg status' to list changes)
What do I do now?!?
You can use:
$ hg diff > changes.patch
To create a patch file, then:
$ patch -p1 < changes.patch
To apply that patch file on your new machine.
Well, that's actually fantastic: Mercurial is a distributed version control system, so you do not need to go via any patch file at all. Simply pull the changes from your old machine to your new machine:
hg pull URL
where URL can be any network URL or an ssh login, e.g.
hg pull ssh://mylogin@old.machine.box or hg pull path/to/old/repository/on/nfs/mount
Alternatively, you can use bundle and unbundle. They create bundles which can easily be imported into the new repository and keep all the meta-information.
hg bundle -r XXX --base YYY FILENAME
where YYY is a revision you know you have in your new repository. You import it into your new repo with hg unbundle FILENAME. Of course you can bundle several changesets at once by repeating the -r argument or giving a changeset range like -r X:Y.
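As a concrete sketch (the revision number and file names here are made up):
$ cd /path/to/old/repo
$ hg bundle --base 42 -r tip changes.bundle   # bundle everything the new clone doesn't have yet
# copy changes.bundle to the new machine, then in the new clone:
$ hg unbundle changes.bundle
$ hg update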
The least comfortable method is via diff or export:
hg export -r XXX > FILENAME or equivalent hg diff -c XXX > FILENAME where you need to import the result with patch -p1 < FILENAME or hg import FILENAME.
The easiest way to do this is to ensure that all work on your old machine is committed. Then use this command on it from the base of your repo:
hg serve
which starts a simple HTTP server for this repo. The command's output will show the URL it is serving.
On your new machine, just pull from that URL.
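Using the URL from the update above, that would be roughly:
$ hg pull http://mymachine.example.com:8000/
$ hg update    # or hg merge first, if the new machine already has local commits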
Once you've pulled your old changes you can stop the hg serve process with ^C.
The advantages of this method are that it is very quick, and that it works on just about any system. The ssh method is also quick, but it won't work unless your system is configured to use ssh.
Answer to Update
The OP's update asks an orthogonal question: how to merge changes pulled from a server with local changes. If you haven't already done so, try to digest the information in this merge doc and this one.
Merging is for merging changesets. The error is happening because you have local changes that haven't been committed, which Mercurial can't merge. So the first thing to do is commit your local changes; then you will be able to merge.
But before you merge, I strongly recommend that you check you are merging what you think you are merging. Either ensure there are only two heads, or specify which head you are merging with. When merging, you have to be at one of the heads you wish to merge; it's usually better to be at the head with the most changes since the common ancestor, because the diffs are simpler.
After you've merged, don't forget to commit the merge. :-)
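Put together, a minimal sequence for the situation in the update would be (the commit messages are just examples):
$ hg commit -m "Work in progress on new machine"    # commit the local changes first
$ hg heads                                          # check that there are exactly two heads
$ hg merge                                          # merge the head pulled from the old machine
$ hg commit -m "Merge changes from old machine"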
I have a project with 24 months of source control history in a Mercurial repository.
I've recently found some old tarballs of the project that predate source control, and I think they would be useful to import into the repository as "pre-historic" changesets.
Can I somehow add a parent to my initial commit?
Alternatively, is it possible to re-play my entire repository history on top of the tarballs, preserving all metadata (timestamps etc)?
Is it possible to have the new parent commits use the timestamps of these old tarballs?
You can use the convert extension to build a new repository where the tarballs are imported as revisions before your current root revision.
First, you import the tarballs based on the null revision:
$ hg update null
$ tar -xvzf backup-2010.tar.gz
$ hg addremove
$ hg commit -m 'Version from 2010'
$ rm -r *
$ tar -xvzf backup-2011.tar.gz
$ hg addremove
$ hg commit -m 'Version from 2011'
I'm using addremove above to give Mercurial a chance to detect renames between each tarball (look at the --similarity flag to fine-tune this and use hg rename --after by hand to help Mercurial further). Also, I remove all the files in the working copy before importing a new tarball: that way the next commit will contain exactly the snapshot present in the tarball you unpack.
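For example (the 90% threshold and the file names are only illustrative):
$ hg addremove --similarity 90            # treat files that are at least 90% similar as renames
$ hg rename --after oldname.c newname.c   # record a rename that was not detected automatically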
After you've imported all the tarballs like above, you have a parallel history in your repository:
[c1] --- [c2] --- [c3] ... [cN]
[t1] --- [t2] --- [tM]
Your old commits are c1 to cN and the commits from the tarballs are t1 to tM. At the moment they share no history — it's as if you used hg pull -f to pull an unrelated repository into the current one.
The convert extension can now be used to do a Mercurial to Mercurial conversion where you rewrite the parent revision of c1 to be tM. Use the --splicemap flag for this. It needs a file with
<full changeset hash for c1> <full changeset hash for tM>
Use hg log --template '{node} ' -r c1 -r tM > splicemap to generate such a file. Then run
$ hg convert --splicemap splicemap . spliced
to generate a new repository spliced with the combined history. The repository is new, so you need to get everybody to re-clone it.
This technique is similar to using hg rebase as suggested by Kindread. The difference is that convert won't try to merge anything: it simply rewrites the parent pointer in c1 to be tM. Since there is no merging involved, this cannot fail with weird merge conflicts.
You should look at using rebase. This can allow you to make the tarball changes the 2nd changeset in your repo (you have to rebase from the 1st).
https://www.mercurial-scm.org/wiki/RebaseExtension
However, note that if there are other existing clones of this repo (for example, those belonging to fellow developers, or on a repo server), you will have issues with them pulling the revised repo. You will probably have to coordinate with the owners of those clones to get all work into a single clone, rebase that clone, and then have everyone re-clone from the revised clone. You will also have to change the phase of the changesets.
https://www.mercurial-scm.org/wiki/Phases
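If you do go down this route, the rough shape of the commands would be something like the sketch below, where R is the first changeset of your existing history and T is the tip of the tarball commits (both are placeholders, and this is untested against your exact history):
$ hg phase --force --draft -r "R::"   # rebase refuses to move public changesets
$ hg rebase --source R --dest T       # replay the existing history on top of the tarball changesets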
Honestly though, I would just add them to your 'modern-day' repo; I don't think making them pre-historic would give you any notable advantage over adding them to the top.
I am familiar with TFS and Vault, but having just started using Mercurial I seem to be getting into a bit of a mess.
Here's what I (think) I've done:
- Created a central repository on bitbucket.org
- On my desktop PC, cloned the repository from bitbucket, added files, committed them, and pushed them to bitbucket
- On my laptop, cloned the repository from bitbucket, pulled, added more files, committed them, and pushed them to bitbucket
I've continued to add, edit etc on the different computers.
Now I've noticed that some files from each computer are not in the bitbucket repository, and therefore only in the local repository. No amount of pulling and pushing seems to get it into the bitbucket repository.
What is the most likely thing I've done wrong?
Is there a way to 'force' my changes up to the bitbucket repository?
Did they get into your local repository? I suspect not; that is, they were new files that were never added. Use hg add to add them before committing (or whatever the equivalent is in whichever Mercurial interface you're using).
Edit:
Here's the help from Mercurial:
C:\Users\Bert>hg add --help
hg add [OPTION]... [FILE]...
add the specified files on the next commit
Schedule files to be version controlled and added to the repository.
The files will be added to the repository at the next commit. To undo an
add before that, see "hg forget".
If no names are given, add all files to the repository.
...
See Mercurial: The Definitive Guide (a.k.a. the hg "red book") for more info:
http://hgbook.red-bean.com/read/mercurial-in-daily-use.html
Telling Mercurial which files to track
Mercurial does not work with files in your repository unless you tell it to manage them. The hg status command will tell you which files Mercurial doesn't know about; it uses a “?” to display such files.
To tell Mercurial to track a file, use the hg add command. Once you have added a file, the entry in the output of hg status for that file changes from “?” to “A”.
$ hg init add-example
$ cd add-example
$ echo a > myfile.txt
$ hg status
? myfile.txt
$ hg add myfile.txt
$ hg status
A myfile.txt
$ hg commit -m 'Added one file'
$ hg status
use "hg -v help add" to show global options
We're using a local central repository that everyone pushes to and pulls from. Until recently this repository only contained the .hg folder. Then someone went ahead and committed directly in the central repository, creating an "island" changeset with no parent (parent = -1) and no child. The correct way would have been to make the commit in a local repository and push the changes.
Is there any way to get the working copy of the central repository back to the state where it only contains .hg and is not associated with a specific changeset?
The command:
hg update null
Updates a repository's working directory to the point before the first commit, so there are no files in the working directory and hg parents shows -1.
You'll still need to remove the commit if you don't want it, but that's a separate question/issue.
Since this is your local central repository and these are commands that edit history, please take every precaution, like trying things out on a copy of the repository first.
hg rollback (to remove the last repository change)
https://www.mercurial-scm.org/wiki/Rollback
Roll back the last transaction in a repository.
hg strip (to remove specific revisions)
https://www.mercurial-scm.org/wiki/Strip
hg strip rev removes the rev revision and all its descendants from a repository.
Also see:
https://www.mercurial-scm.org/wiki/EditingHistory
If you catch your mistake immediately (or reasonably soon), you can just use hg strip REV to roll back the latest (one or more) changes. ...
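For example, assuming the island changeset turned out to be revision 4 (a placeholder), a cautious way to remove it would be to work on a copy first (repository names are made up):
$ hg clone central-repo central-repo-test   # experiment on a throwaway copy first
$ cd central-repo-test
$ hg strip 4                                # remove the island changeset; requires the mq/strip extension to be enabled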
Edit: This answers the question in its original form. The OP has since edited the question.
Just delete everything except the .hg folder.
If I understand correctly what happened, someone changed into the repository directory and committed a single file as an 'added file' without first checking out any changeset into the working directory.
What you ended up with is something like the following history graph:
0:a43f 1:2843 2:bc81 3:2947
o ------ o ------ o ------ o
4:228f
o
where changeset 4:228f has the "null id" as its parent changeset.
Depending on whether you want to keep changeset 4:228f or not, you can follow one of the following strategies:
Option 1: Clone the repository without the offending change
You can create a new clone of the repository, specifying revision 3:2947 as the target revision. This will pull change 3:2947 and all its ancestor changesets into the new repository. Since the offending changeset is not linked into the ancestry of changeset 3:2947, it will not be part of the new clone.
I recommend saving the old repository first, and then moving everything over to a new clone, e.g. any repository-specific setup files you keep in its .hg/ directory.
If your current repository lives in /work/repo/foo, one way to do this would be:
$ cd /work/repo
$ mv foo foo.bak
$ hg clone -r 2947 foo.bak newfoo
Now copy over any .hg/ setup files, e.g. your original .hg/hgrc file:
$ cp foo.bak/.hg/hgrc newfoo/.hg/hgrc
Finally move the new foo repository in place:
$ mv newfoo foo
Option 2: Clone the repository and keep but rebase the change
Do the same as before, but before you move the newfoo repository in place, use the "hg export" command to extract a copy of the offending change from the old repository. Then check out a working copy of the tip-most changeset in the newfoo tree and import the file changes of the offending change as a normal changeset of your current history graph.
So, right before mv newfoo foo, save the patch of the offending change by typing:
$ cd /work/repo/foo.bak
$ hg export -r 228f --git > /tmp/patchfile.diff
then you can check-out the latest revision of the new repository, and re-import the patch on top of your current history:
$ cd /work/repo/newfoo
$ hg update --clean tip
$ hg import /tmp/patchfile.diff
If the offending changeset merely adds a new file, this should work fine and you are ready to move the new repository in place!
Before doing the final rename though, it may be a good idea to remove the working-copy files of the new repository:
$ cd /work/repo/newfoo
$ hg update --clean null
I am looking for best practices to do the following:
When I need to implement a feature or fix a bug, I create a new Mercurial repository from the main one (the trunk).
Then, over the following days or weeks, I implement the task in the newly created repository, making commits and periodically merging with the trunk. Once the code in the new repository has passed all code reviews, I have to provide a repository with all the changes collapsed into a single revision.
My usual way to do this (the rdiff extension needs to be enabled):
hg clone ~/repos/trunk ~/repos/new-collapsed
cd ~/repos/new-collapsed
hg diff ~/repos/new > new.diff
patch -p1 < new.diff
hg commit
This works almost well, except when binary files are present in the changes from ~/repos/new. Another way could be:
hg clone ~/repos/trunk ~/repos/new-collapsed
cd ~/repos/new-collapsed
hg pull ~/repos/new
hg update
hg rollback
Then resolve any conflicts and manually commit the changes.
Both ways look somewhat ugly and non-native to me, so I am looking for a way to simplify this operation. I've played with the rebase extension, but it seems its hg rebase --collapse command does not work with the workflow described above.
Any ideas are welcome.
Sounds like a good case for Mercurial Queues (mq).
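For example, a rough mq-based collapse could look like this, assuming the feature work is the last five changesets, revisions 10 through 14, in your cloned repository (the revision numbers and the generated patch names are illustrative and may differ):
$ hg qinit
$ hg qimport -r 10:14                        # turn the existing changesets into mq patches
$ hg qgoto 10.diff                           # pop back so only the first patch is applied
$ hg qfold 11.diff 12.diff 13.diff 14.diff   # fold the remaining patches into it
$ hg qfinish -a                              # turn the single combined patch back into a changeset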
I do something similar with the histedit extension.
My workflow is something like:
clone a central repo
commit incremental changes to local repo
clone my local repo to make collapsed repo
hg histedit and select/discard/fold the revisions as needed
hg push the collapsed repo to central repo
pull central repo to local or refresh local from scratch
I ensure that my local repo never gets pushed to the central repo by adding an invalid default-push path to the .hg/hgrc file in the local repo root directory.
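The guard is just a bogus default-push entry in the local repository's .hg/hgrc, something like this (the central URL is hypothetical):
[paths]
default = http://central.example.com/hg/repo
default-push = /invalid/never-push-from-here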
Solved: Just add
[diff]
git = True
to your hgrc file, and then use my first solution with the rdiff extension, replacing patch with hg import:
hg clone ~/repos/trunk ~/repos/new-collapsed
cd ~/repos/new-collapsed
hg diff ~/repos/new > new.diff
hg import new.diff
hg commit