I am using TortoiseHg and I have some changesets in draft mode, and now, for various reasons, I have to use a different machine.
So, is there any way to take a backup and restore my changesets on a new machine?
I'm not sure whether it is even possible to fetch draft changesets on a different machine.
If you have already committed, but not pushed, the changesets, you can just copy the .hg directory to the other machine. If you have uncommitted changes, you have to copy the entire directory that your repository and its .hg directory are in.
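For example, a minimal sketch for a Unix-like shell (the paths are made up):

cd /path/to/repo
tar czf repo-backup.tar.gz .hg    # committed-but-unpushed work lives entirely in .hg
# if you also have uncommitted changes, archive the whole working directory instead:
cd .. && tar czf repo-backup.tar.gz repo

Unpack the archive in the corresponding location on the new machine and Mercurial will treat it as a normal repository.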
And no, you can't fetch draft changesets from another machine; they only become available once you have pushed them (phase "public").
You can do this regardless of the phase (public/draft/secret) of the changeset - I do it all the time at my work using patches.
For this you will need the mq extension (installed, but not enabled by default) turned on in your settings. (Actually, you can do this without mq.)
Take the following steps, working your way from the first draft changeset (i.e. the one whose parent is public) to the last draft changeset (i.e. the one which is at the head):
Right-click a single changeset in the source repository and select Export/Copy patch; the output can be pasted into a text file. Repeat this n times to get all the patches you want to copy.
Transfer those text files to the other machine, then select Repository/Import on the target repository and import the changeset. Repeat this (in the order the changesets were exported) to recreate the same history as on the source repository.
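If you prefer the command line over TortoiseHg's menus, the equivalent is hg export and hg import (a sketch; the revision numbers are placeholders):

# on the source machine:
hg export -r 10 > 10.patch
hg export -r 11 > 11.patch
# on the target machine, apply them in the same order:
hg import 10.patch
hg import 11.patch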
This is currently a purely theoretical question (related to this one), but let me first give the background. Whenever you run hg gexport the initial hash will vary from invocation to invocation. This is similar to when you run git init or hg init. However, since the Mercurial and Git commits correspond to each other and build on previous hashes, there should be some way to start over from a minimal common initial state (or minimal state on the Git side, for example).
Suppose I have used hg-git in the past and now I am trying to sync again between my Mercurial and my Git states, but with little or none of the original .git directory from the hg gexport. What I do have, though, are the two metadata files: git-mapfile and git-tags.
There is an old Git mirror, which is sort of "behind", and the Mercurial repo, which is up-to-date.
Then I configure the Mercurial repo for hg-git like so (.hg/hgrc):
[git]
intree = True
[extensions]
hgext.bookmarks=
topic=
hggit=
[paths]
default = ssh://username@hgserver.tld//project/repo
gitmirror = git+ssh://username@server.tld/project/repo.git
If I now do a naive hg pull gitmirror, all I gain is a duplicate of every existing commit on an unrelated branch with an unrelated commit history (and twice the number of heads, compared to before the pull).
Placing the two metadata files (git-mapfile and git-tags) into .hg makes little observable difference. The only change is that the pull without these files succeeds (but duplicates everything), while the pull with them errors out at the first revision with "abort: unknown revision ..." (which even makes sense).
Question: which part(s) and how much (i.e. what's the minimum!) of the Git-side data/metadata created by hg gexport do I have to keep around in order to start over syncing with hg-git? (I was unable to find this covered in the documentation.)
The core metadata is stored in .hg/git-mapfile, and the actual Git repository is stored in .hg/git or .git, depending on intree. The git-mapfile is the only file needed to reproduce the full state; anything else is just a cache. To recreate a repository from scratch, do the following:
Clone or initialise the Mercurial repository, somehow.
Clone or initialise the embedded Git repository, e.g. using git clone --bare git+ssh://username@server.tld/project/repo.git .hg/git.
Copy over the metadata from the original repository, and put it into .hg/git-mapfile.
Run hg git-cleanup to remove any commits from the map no longer known to Mercurial.
Pull from Git.
Push to Git.
These are the steps I'd use, off the top of my head. The last three steps are the most important. In particular, you must pull from Git to populate the repository prior to pushing; otherwise, the conversion will fail.
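As a rough sketch of those steps (the URLs, paths, and backup location are illustrative, and this assumes the embedded Git repository lives in .hg/git, i.e. intree is not set):

hg clone ssh://username@hgserver.tld//project/repo repo
cd repo
# plain git expects ssh://, not hg-git's git+ssh:// prefix:
git clone --bare ssh://username@server.tld/project/repo.git .hg/git
cp /backup/git-mapfile .hg/git-mapfile   # the preserved metadata
hg git-cleanup        # drop map entries unknown to Mercurial
hg pull gitmirror     # populate the repository first...
hg push gitmirror     # ...then push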
I'm trying to merge the latest revision of our main branch into a much older branch. There are only two files with conflicts, but the conflicts are complicated, and I'd like to manually copy the changes from the more recent revision and fix some things. There have been a tonne of commits since the last commit on the old branch, and I don't know when those two files were changed.
Using TortoiseHg, how can I find the latest revision on any branch where a particular file was changed?
From Windows Explorer, right click on the file whose history you are interested in.
In the TortoiseHg menu, select "Revision History".
This will bring up a window showing only the changesets which have modified that file (in any branch). It should also show history across tracked file renames (if the hg log "follow" option is enabled in hgrc), copies, and moves.
You can also get to the same view from within the THG Workbench application, from the lower file list, where it is called "File History".
Either route brings you to the same filtered file-history window.
Furthermore, the command-line equivalents of this screen are hg log FILE and hg annotate FILE.
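For example, to answer the original question directly (the file path is hypothetical):

hg log -l 1 path/to/file.cpp       # newest changeset, on any branch, that touched the file
hg log --follow path/to/file.cpp   # the same history, followed across renames
hg annotate path/to/file.cpp       # line-by-line attribution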
I'm a newbie at using Mercurial, and version control systems in general.
I know that:
There exists a Central Remote repository
Everybody has a local repository, obtained by cloning the central one (which effectively creates a branch)
Everyone makes changes in a working area; these changes are then committed to the local repository.
After changes have been performed locally, they are merged with the centralized repository, or the centralized repository is overwritten by means of a rebase
In Mercurial, when we make a pull and two conditions hold:
the central repository has been changed since we made the last pull.
we updated our local repository since the last pull.
A merge occurs in the local repository.
Mercurial sometimes wants me to do a manual merge, while other times the merge is handled automatically. I would like to know in which situations this happens.
Your workflow is just terrible, and your Mercurial lingo is muddled.
Read hg help merge
... Returns 0 on success, 1 if there are unresolved files
By checking this exit code, you can always know the result of a merge (even an unattended one) and perform the operations needed.
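For example, a minimal sketch of checking the exit code in a shell script:

hg merge
if [ $? -ne 0 ]; then
    # the merge left unresolved files behind
    hg resolve --list    # show which files still need manual merging
fi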
But BEWARE:
Rebase uses repeated merging to graft changesets from one part of
history (the source) onto another (the destination).
Thus, if you get exit code 1 on a merge, you will most probably get a merge conflict on a rebase as well (and need manual work anyway).
See also this question.
Without knowing what I was doing, I enabled the largefiles extension, committed a file and pushed it to kiln. Now I know the error of my ways, and I need to permanently revert this change.
I followed the guidance from SO on the subject, and I can remove largefiles locally, but this doesn't affect the remote repos in Kiln. I have tried opening the repo in KilnRepositories on the Kiln server and nuking the largefiles folder (as well as deleting 'largefiles' from the requires file), but after a few pushes/pulls the folder and the requires line come back.
Is there a way to make this permanent? (Setting requires to readonly doesn't work either).
Note: this is true at least of TortoiseHg 2.4 (Mercurial 2.2.1) through 2.7.1 (Mercurial 2.5.2) on Windows. I will not speak for future or older versions.
After looking through the various mercurial extensions available, I have concluded that it is generally not possible to convert a repository 'back' once a file has been committed using the largefiles extension.
First, the two rationales for why you do not want largefiles in play on your repos: one and two.
Once a file has been committed as a largefile, to remove it, all references to the '.hglf' path must be removed from the repo. A backout isn't sufficient, as its commit contents will reference the path of the file, including the '.hglf' folder. Once Mercurial sees this, it will write 'largefiles' back to the .hg/requires file and the repo is once more largefile-locked. Likewise with hg forget and hg remove.
Option 1: If your repo is in isolation (you have end-to-end control of the repo in all of its local and remote locations AND no one else has branched from this repo), it may be possible to use the mq extension and strip the changeset. This is probably only a viable option if you have caught the mistake in time.
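A sketch of option 1, with a made-up revision number for the offending changeset:

hg strip 123    # removes revision 123 and all of its descendants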
Option 2: If the offending changeset (the largefile commit) exists on a commit phase that is draft (or that can be forced back into draft), then it may be possible to import the commit to mq and unapply the changeset using hg qpop. This is superior to stripping, because it preserves the commit history forward from the extracted changeset. In real life, this is frequently not possible, because you've likely already performed merges and pushed/pulled from public phase branches. However, if caught soon enough, mq can offer a way to salvage the repo.
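A sketch of option 2, with made-up revision numbers (the patch names generated by qimport may differ):

hg qimport -r 123:tip   # 123 is the offending changeset; import it and its descendants into mq
hg qpop -a              # unapply all of the imported patches
hg qdelete 123.diff     # discard the offending patch
hg qpush -a             # reapply the remaining patches
hg qfinish -a           # convert the surviving patches back into regular changesets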
Option 3: If the offending changeset is referenced in one and only one place (the original commit), and no one has attempted to backout/remove/forget the changeset (thus creating multiple references), it may be possible to use hg rebase, to fold the first child changeset after the offense with the parent changeset of the offense. In doing so, the offensive changeset becomes a new head which can then be stripped off with mq strip. This can work where attempts to import to mq have failed.
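A sketch of option 3, assuming (hypothetically) that revision 122 is the parent of the offense, 123 is the offense, and 124 is its first child; the rebase extension must be enabled:

hg rebase -s 124 -d 122   # move the descendants off the offending changeset
hg strip 123              # the offender is now a childless head; strip it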
Option 4: If none of the above work, you can use transplant or graft, or export all of the non-offending changesets as patches (careful to export them in the correct sequence), then hg update to the first sane changeset before the offense, mq strip the repo of all forward changesets, and then reapply your exported patches in sequence.
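A sketch of option 4 with made-up revision numbers (122 = last sane changeset, 123 = the offense, 124 and 125 = later work to preserve):

hg export 124 > 124.patch
hg export 125 > 125.patch
hg update 122
hg strip 123              # removes the offense and everything after it
hg import 124.patch
hg import 125.patch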
Option 5: (What I ultimately did.) Clone the repo locally, so that you have two copies: clone_to_keep and clone_to_destroy. In clone_to_keep, update to the first sane changeset before the offense. Mq strip all forward changesets. Merge back down if left with multiple heads. In clone_to_destroy, update to the tip. In Windows Explorer, copy everything in /clone_to_destroy except the .hg and .hglf folders to the /clone_to_keep folder. Inside TortoiseHg, commit all changes in clone_to_keep as a single changeset. Preserve one remote instance of clone_to_destroy in a read-only state for historical purposes, and destroy all others.
Option 6: The nuclear option. If all else fails, and if you don't care about the integration of your repo with external systems (bug tracking, CI, etc), you can follow the aforementioned SO post and use the hg convert extension. This will create a new copy of the infected repo, removing all references to the offending changesets; however, it does so by iterating each changeset in the entire repo and committing it to the new repo as a NEW changeset. This creates a repo which is incompatible with any existing branch repos--none of the changeset ids will line up. Unless you have no branch repos, this option will probably never work.
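A sketch of the nuclear option (the repo names and filemap contents are hypothetical; excluding .hglf is meant to drop the largefile stubs, and the convert extension must be enabled):

echo "exclude .hglf" > filemap.txt
hg convert --filemap filemap.txt infected-repo clean-repo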
In all cases, you then have to take your fix and manually reapply to each distinct repository instance (copy the repo folder, clone, whatever your preferred method).
In the end, it turns out that enabling largefiles is an extremely expensive mistake to make. It's time consuming and ultimately destructive to fix. I don't recommend ever allowing largefiles to make it into your repos.
We've just recently switched over from SVN to Mercurial, but now we are running into problems with our workflow. Example:
I have my local clone of the repository which I work on. I'm making some highly experimental changes to our code base, something that I don't want to commit before I'm sure it works the way it is supposed to; I don't want to commit it even locally. Now, simultaneously, my co-worker has made some significant improvements/bug fixes which I need. He pushes his commits to our main repository. The question is: how can I merge his changes into my workspace without having to commit all my changes, since I need his changes to test my own code?
A more day-to-day problem we have with the exact same workflow is that we have a couple of configuration files in the repository. Each developer makes a couple of small environment-specific changes to the configuration files, but does not commit the changes. These few uncommitted files hinder us from making any merges to our workspace, just like in the example above. Ideally, the configuration files probably shouldn't be in the repository; unfortunately, that's just how it has to be, for reasons I won't go into here.
If you don't want to clone, you can do it the following way.
hg diff > mylocalchanges.txt
hg revert -a
# Do your merge here, once you are done, import back your local mods
hg import --no-commit mylocalchanges.txt
There are two operations, as you've discovered, that make changes from one person available to someone else (or to many, on either side).
There's pulling, which takes changes from some other clone of the repository and puts them into your clone.
There's pushing, which takes changes from your repository and puts them into another clone.
In your case, your coworker has pushed his changes into what I assume is your central master of the repository.
After he has done this, you can pull the latest changes down into your repository and merge them into your branch. This will incorporate any bugfixes or changes your coworker made into your experimental code.
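In commands, assuming your experimental work has at least been committed locally, that is simply:

hg pull             # fetch your coworker's changesets from the central repository
hg merge            # combine them with your local work
hg commit -m "merged in latest mainline"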
This gives you the freedom to stay current with your coworkers' development on the project, while not having to release your experimental code until it is ready (or even at all).
So, as long as you stay away from the Push command, you're safe.
Of course, this also assumes nobody is pulling directly from your clone of the repository; if they do, they will of course get your experimental changes, but it doesn't sound like you've set things up this way (and it is highly unlikely as well).
As for the configuration files, the typical way to handle them is to commit only a master template into the repository, under a different name (i.e. with an extra extension such as .template), and then place the name of the real configuration file into the ignore filter.
Each developer then has to make his or her own copy of the template, rename it, and change it in any way they want, without the risk of committing database connection strings, passwords, or local paths to the repository.
If necessary, provide a script that will help the developer make the real configuration file if it is long and complex.
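A minimal sketch of that setup, with hypothetical file names (commit app.config.template, ignore the real file):

echo "syntax: glob" >> .hgignore
echo "app.config" >> .hgignore
cp app.config.template app.config   # private, per-developer copy; never committed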
Regarding your experimental changes, you should commit them. Often.
Simply commit them in a clone that you don't push; only pull into it to merge whatever updates you need from other repos.
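A sketch, with hypothetical paths:

hg clone myrepo myrepo-experimental
cd myrepo-experimental
# commit experimental work here freely; pull updates in as needed, but never push this clone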
As for config files, don't commit them.
Commit template files, and a script able to generate complete config files from the templates.
That way, developers will only modify "private" (i.e. not committed) config files with their own private values.
If you know your uncommitted changes will not collide with the merge commit that you are creating, then you can do the following:
1) Shelve the uncommitted changes
2) Do the pull and merge
3) Unshelve the uncommitted changes
Shelving effectively stores your uncommitted changes away as a diff (relative to your last commit) and then rolls back those files in your local workspace. Unshelving then applies that diff, bringing back your uncommitted changes.
Tools such as TortoiseHg have shelve built in.
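On the plain command line, a minimal sketch of those three steps (the shelve extension must be enabled in older Mercurial versions):

hg shelve                 # store uncommitted changes as a diff and clean the working copy
hg pull
hg merge
hg commit -m "merge"      # the merge must be committed before unshelving
hg unshelve               # reapply the stored changes on top of the merge result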