Disable automatic mercurial rollback execution - mercurial

I have an issue when trying to clone a repository to my local machine via LAN.
At some point an error occurs and all downloaded data is erased during the rollback.
Is it possible to turn off Mercurial's automatic rollback on error, even though the downloaded data may be corrupted?

You can't stop it from rolling back -- Mercurial won't leave the repository in an inconsistent state. However, you can do the clone incrementally.
Rather than:
hg clone http://path/to/your/repo
do:
hg clone -r 100 http://path/to/your/repo
hg pull -r 200 http://path/to/your/repo
hg pull -r 300 http://path/to/your/repo
... and so on until done ...
That gets 100 changesets at a time. If you have a network failure, you'll only have to re-run the last command, and eventually you'll get through.
As a note, once you've successfully cloned the repository to a machine, you never have to do it over the network again. Instead, clone from your local repo if you want another clone.
hg clone myclone my-other-clone
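The "... and so on until done ..." step can be scripted. Below is a minimal sketch in plain shell that just prints the command sequence to run; the total changeset count, the chunk size, and the repository URL are placeholder assumptions, not values from the original question:

```shell
#!/bin/sh
# plan_pull TOTAL CHUNK URL: print an incremental clone/pull plan that
# fetches CHUNK changesets at a time (TOTAL and URL are hypothetical).
plan_pull() {
  total=$1; chunk=$2; url=$3
  echo "hg clone -r $chunk $url"
  rev=$((chunk * 2))
  while [ "$rev" -lt "$total" ]; do
    echo "hg pull -r $rev $url"
    rev=$((rev + chunk))
  done
  echo "hg pull $url"
}

plan_pull 450 100 http://path/to/your/repo
```

Each printed command is safe to rerun: an aborted pull rolls back cleanly, so a failed chunk can simply be retried.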

Related

Full status of Mercurial local repository

It occurs quite often that I want to know the full status of my local copy of a project, compared to the remote repository. By full status, I mean the following:
Are there some uncommitted changes locally?
Are there some unpushed commits locally?
Are there some unpulled commits remotely?
Am I on the head of the default branch?
I know that I can use some graphical tool such as HgView or TortoiseHg, or even my IDE to deal with Mercurial repositories, but I find it more convenient to use CLI when working with several projects/repos at the same time.
The way I currently do this is with an alias:
alias hg_full='hg incoming; hg outgoing; hg status'
If everything is fine (i.e. local is synchronized with remote), I then make sure I am on the head of default with
hg update default
This approach works perfectly, but when I work with a slow remote repository, it is quite annoying to wait for both the incoming and outgoing commands to return before performing the update.
Is there some way (by means of an extension or a more advanced command) to get a full status summary of the local copy compared to the remote repository without running hg in and hg out sequentially?
I think hg summary --remote might be exactly what you're looking for:
$ hg summary --remote
parent: 1:c15d3f90697a tip
commit message here
branch: default
commit: 1 modified
update: (current)
remote: 1 or more incoming, 1 outgoing
You can save yourself some network traffic by doing hg incoming --bundle <filename>, which fetches the incoming changesets and stores them in a bundle file. You can then run hg outgoing (or hg pull) against the bundle file, which doesn't use the network at all.
hg incoming --bundle incoming.bundle # Creates the bundle
hg outgoing incoming.bundle
hg pull incoming.bundle
hg update default
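If you run this check often, you can also wrap it in a Mercurial alias rather than a shell alias. A sketch for your hgrc follows; the alias name fullstat is made up, and it relies on shell aliases (the ! prefix), where $HG expands to the Mercurial executable in use:

```
[alias]
# Hypothetical alias: remote summary plus local file status in one command.
fullstat = !$HG summary --remote && $HG status
```

This keeps the single network round-trip of hg summary --remote while still listing uncommitted files.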

Mercurial diff/patch by example

I have read-only permission to an hg repo and am trying to develop and test changes to it locally. The problem is that I am in the middle of changing dev machines and am caught in a weird/awkward state across the two machines.
On my old machine I made lots of changes to the repo, locally. I just cloned the repo on my new machine, but obviously that doesn't contain the changes from my old machine. I need a way to create a patch/diff from my local working copy on my old machine, and then apply it to my local working copy on my new machine. The problem is that I already committed (hg commit -m "Blah") the changes on my old machine to the local repository on it.
What set of specific commands can I use to create a patch/diff of my old machine and then apply it to the repo on my new one?
Update
I committed all changes on my old machine and then ran hg serve, exposing http://mymachine.example.com:8000.
On my new machine, where I had made some different changes (locally) than the changes from my old machine, I ran hg pull http://mymachine.example.com:8000 and got:
myuser@mymachine:~/sandbox/eclipse/workspace/myapp$ hg pull http://mymachine.example.com:8000
pulling from http://mymachine.example.com:8000/
searching for changes
adding changesets
adding manifests
adding file changes
added 2 changesets with 16 changes to 10 files (+1 heads)
(run 'hg heads' to see heads, 'hg merge' to merge)
So I run hg merge:
myuser@mymachine:~/sandbox/eclipse/workspace/myapp$ hg merge
abort: uncommitted changes
(use 'hg status' to list changes)
What do I do now?!?
You can use:
$ hg diff > changes.patch
To create a patch file, then:
$ patch -p1 < changes.patch
To apply that patch file on your new machine.
Well, Mercurial is a distributed version control system, so you do not need to go via a patch file at all: simply pull the changes from your old machine to your new machine:
hg pull URL
where URL can be any network URL or also ssh-login, e.g.
hg pull ssh://mylogin@old.machine.box or hg pull path/to/old/repository/on/nfs/mount
Alternatively, you can use bundle and unbundle. These create bundle files which can easily be imported into the new repository and keep all meta-information.
hg bundle -r XXX --base YYY > FILENAME
where YYY is a revision you know you have in your new repository. You import it into your new repo with hg unbundle FILENAME. Of course you can bundle several changesets at once by repeating the -r argument or giving a changeset range like -r X:Y.
The least comfortable method is via diff or export:
hg export -r XXX > FILENAME or, equivalently, hg diff -c XXX > FILENAME, where you import the result with patch -p1 < FILENAME or hg import FILENAME.
The easiest way to do this is to ensure that all work on your old machine is committed. Then run this command from the base of your repo:
hg serve
which starts a simple HTTP server for this repo. The command's output will show the URL it is serving.
On your new machine, just pull from that URL.
Once you've pulled your old changes you can stop the hg serve process with ^C.
The advantages of this method are that it is very quick, and that it works on just about any system. The ssh method is also quick, but it won't work unless your system is configured to use ssh.
Answer to Update
The OP's update asks an orthogonal question: how to merge changes pulled from a server with local changes. If you haven't already done so, try to digest the information in this merge doc and this one.
Merging is for merging changesets. The error is happening because you have local changes that haven't been committed, which Mercurial can't merge. So the first thing to do is to commit your local changes; then you will be able to merge.
But before you merge, I strongly recommend that you verify you are merging what you think you are merging. Either ensure there are only 2 heads, or specify which head you are merging with. When merging, you have to be at one of the heads you wish to merge; it's usually better to be at the head with the most changes since the common ancestor, because the diffs are simpler.
After you've merged, don't forget to commit the merge. :-)
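The "commit before merge" rule above can be expressed as a small guard. This is only a sketch: safe_merge is a hypothetical helper that reads hg status-style output on stdin and approves the merge only when there are no uncommitted changes, mirroring Mercurial's own abort:

```shell
#!/bin/sh
# safe_merge: read `hg status` output on stdin; refuse to merge if any
# uncommitted changes are listed (mirrors the "abort: uncommitted changes"
# error from `hg merge`).
safe_merge() {
  if [ -n "$(cat)" ]; then
    echo "abort: uncommitted changes -- run 'hg commit' first"
    return 1
  fi
  echo "clean -- ok to run: hg merge && hg commit -m 'merge'"
}

# A clean working copy (empty status output) passes the guard:
printf '' | safe_merge
```

In practice you would pipe the real status into it: hg status | safe_merge && hg merge.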

"stream ended unexpectedly" on clone

I'm trying to clone, but I get a rollback. I could clone on another computer previously, but now I get a rollback and I don't know why:
C:\Users\Niklas\montao>hg clone https://niklasr@bitbucket.org/niklasr/montao
http authorization required
realm: Bitbucket.org HTTP
user: niklasr
password:
destination directory: montao
requesting all changes
adding changesets
adding manifests
adding file changes
transaction abort!
rollback completed
abort: connection ended unexpectedly
C:\Users\Niklas\montao>
Currently I'm just trying it again, but I suspect that it will time out. Can you tell me how to debug what's happening and possibly resolve the issue? I ran it in debug mode and this is what happens:
adding google_appengine/lib/django_1_3/django/contrib/localflavor/locale/mn/LC_M
ESSAGES/django.mo revisions
files: 10223/50722 chunks (20.15%)
transaction abort!
Your TCP connection to Bitbucket is dying before the whole repo is downloaded -- probably a flaky network connection or a full disk. If it's the former, you can do it in small chunks using -r, like this:
hg init montao
cd montao
hg pull -r 50 https://niklasr@bitbucket.org/niklasr/montao # get the first 50 changesets
hg pull -r 100 https://niklasr@bitbucket.org/niklasr/montao # get the next 50 changesets
...
That should only be necessary if something's wrong with your network route to bitbucket or the repository is incredibly huge.
An easier syntax compared to Ry4an Brase's answer:
hg clone -r 1 https://niklasr@bitbucket.org/niklasr/montao # get the first changeset
cd montao
hg pull -r 50 # first 50 changesets
hg pull -r 100 # first 100 changesets
...
hg pull # all remaining changesets
hg update # create working copy
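Because an aborted pull rolls back cleanly, retrying in a loop is safe. A sketch follows; flaky_pull is a hypothetical stand-in for the real hg pull -r <rev> <url>, wired here to pretend the network fails twice before succeeding:

```shell
#!/bin/sh
# Retry a flaky pull until it succeeds. This is safe because an aborted
# hg pull rolls back its transaction, so rerunning never corrupts the repo.
attempts=0
flaky_pull() {
  attempts=$((attempts + 1))
  [ "$attempts" -ge 3 ]   # stand-in: fail twice, then succeed
}

until flaky_pull; do
  echo "pull failed, retrying..."
done
echo "pull succeeded after $attempts attempts"
```

Replace flaky_pull with the actual hg pull command (and perhaps a sleep between attempts) for real use.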
If you're using TortoiseHg Workbench, I found checking "Use compressed transfer" under Options in the Clone dialog worked for me.

Is it safe to update after error in the middle of clone?

Getting a zlib error in the middle of cloning a big (4 GB) repo from Mercurial (Kiln).
What should (or may) I do next? Delete everything and try again from the beginning, or can I just hg pull -u?
Will the local repository be in a consistent state after an error in the middle of cloning?
Update to clarify the question: the clone of the repository itself was successful, but the clone of a subrepository failed. Does this change anything?
If you encounter an error while cloning a big repository, then Mercurial will automatically abort the transaction. When a transaction is rolled back, Mercurial will clean up everything. For hg clone, this unfortunately means that the changesets that were already downloaded are gone. So you can safely re-clone.
However, from the way you put your question, it sounds like there is something left after the abort, so I guess you started a hg pull that was aborted mid-way? The same applies to pull: an abort will roll back the transaction and you can safely restart the hg pull.
An aborted pull looks like this:
$ hg pull http://localhost:8000
pulling from http://localhost:8000/
searching for changes
adding changesets
transaction abort!
rollback completed
abort: stream ended unexpectedly (got 12 bytes, expected 503)
I started hg serve on my machine and started a pull from that server. I then killed hg serve in the middle of the pull. The client aborted and rolled back the transaction.

How to do a quick push with Mercurial

After a change in my local repo, I commit with hg commit -m <message>, then I push to my server with hg push, and then on my server I update the working dir with hg update.
Is there a better way to do this?
The first two steps you have described:
hg commit -m <message>
hg push
are required because, in Mercurial (as in most other DVCSs), commits are kept completely separate from the server. You could write a post-commit hook to perform the push after each commit, but this is not advised because it prevents you from correcting simple mistakes during the commit and before the push.
Because you're trying to perform an update on 'the server' I'm assuming you are executing a version of the code in your repository on the server. I'm assuming this because typically the server would simply act as a master repository for you and your developers to access (and also to be subject to backups, etc..), and would not need the explicit hg update.
Assuming you are executing code on the server, you can try replacing the push and the update with this command:
hg pull <path to development repo> -u
which will perform a pull from your local repo and then an automatic update. Depending on your server configuration, it might be difficult to get the path to your local repo.
For the first part of the question (i.e. automatically pushing when you commit), you can use the trick described in this answer: mercurial automatic push on every commit.
If you want to automatically update the working directory, you can do this with a hook. Add this to the hgrc of your repository (.hg/hgrc in the repository directory on your server):
[hooks]
changegroup = hg update >&2
This will automatically update the working directory every time a push is made to this server. This hook is described in the Mercurial FAQ.
If you use these 2 solutions, the next time you do hg commit -m "message", the commit will be automatically pushed to the remote server and the working directory on the server will be updated.
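For completeness, the autopush side can itself be a hook in the local clone's .hg/hgrc. A sketch, assuming the clone's default push target is the server (the hook suffix "autopush" is an arbitrary name):

```
[hooks]
# Push to the default remote after every successful local commit.
commit.autopush = hg push
```

Combined with the changegroup hook above on the server, a single hg commit then propagates all the way to the server's working directory.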
There is an extension called autosync you might find useful:
This extension provides the autosync command, which automatically and continuously commits working copy changes, fetches (pull, merge, commit) changes from another repository, and pushes local changes back to the other repository. Think of configuration files or to-do lists as examples of things to synchronize. On a higher level, the autosync command synchronizes not only repositories but working copies. A central repository (usually without a working copy) must be used as the synchronization hub: