When I run hg log -r remote/project I get the last commit on that bookmark.
How can I get a full list of commits from the head of that bookmark?
This is not (easily) possible in general. You can approximate it with hg incoming from an empty repository, but hg incoming actually does a complete pull of the difference and throws the contents away; it does not scale for large repositories. Any solutions that are both practical and general involve ssh-ing into the remote machine or setting up a separate server process on the remote machine.
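For illustration, a rough sketch of that approximation (the remote URL here is a placeholder; note that this still pays the full pull-and-discard cost described above):
hg init empty-probe
cd empty-probe
hg incoming --template '{node|short} {desc|firstline}\n' https://example.com/remote-repo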
An intermediate approach uses hg incoming --bundle FILE -T '' (the -T '' part suppresses the normal output). This stores the difference between your local repository and the remote in an overlay repository called FILE; you can then use hg log -R FILE to run normal log commands against the overlay repository (and you can also pull from it, as though it were a snapshot of the original remote). This still relies on you having a significant portion of the repository on your local machine, or it will result in a full download of the remote repository.
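A hedged sketch of that workflow (the bundle file name and remote URL are placeholders):
hg incoming --bundle remote.hg -T '' https://example.com/remote-repo
hg log -R remote.hg     # run normal log commands against the overlay
hg pull remote.hg       # optionally pull from it later, as from a snapshot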
Related
It often happens that I want to know the full status of my local copy of a project compared to the remote repository. By full status, I mean the following:
Are there any uncommitted changes locally?
Are there any unpushed commits locally?
Are there any unpulled commits remotely?
Am I on the head of the default branch?
I know that I can use a graphical tool such as HgView or TortoiseHg, or even my IDE, to deal with Mercurial repositories, but I find it more convenient to use the CLI when working with several projects/repos at the same time.
The way I am currently doing this is with an alias:
alias hg_full='hg incoming; hg outgoing; hg status'
If everything is fine (i.e. the local copy is synchronized with the remote), I then make sure I am on the head of default with
hg update default
This approach works fine, but when I work with a slow remote repository, it is quite annoying to wait for both the incoming and the outgoing command to return before performing the update.
Is there some way (by means of an extension or a more advanced command) to get a full status summary of the local copy compared to the remote repository without running hg in and hg out sequentially?
I think hg summary --remote might be exactly what you're looking for:
$ hg summary --remote
parent: 1:c15d3f90697a tip
commit message here
branch: default
commit: 1 modified
update: (current)
remote: 1 or more incoming, 1 outgoing
You can save yourself some network traffic by doing hg incoming --bundle <filename>, which fetches the incoming changesets and stores them in a bundle file. You can then run hg outgoing (or hg pull) against the bundle file, which doesn't use the network at all.
hg incoming --bundle incoming.bundle # Creates the bundle
hg outgoing incoming.bundle
hg pull incoming.bundle
hg update default
I have been using Mercurial and Beyond Compare 4 together for about 2 weeks now and feel fairly confident in my usage; however, I still have a problem when comparing incoming changesets against my current local codebase. The problem is emphasized when I am attempting a complicated merge.
Just to clarify, I am avoiding the use of tools such as TortoiseHg, although I do have it installed. I am looking for command-line approaches only.
My current templated method to pull down the incoming changesets is via the following (as an [alias]):
hg in --verbose -T "\nchangeset: \t{rev}\nbranch: \t{branch}\nuser: \t\t{author}\ndate: \t\t{date(date,'%m-%d-%Y %I:%M%p')}\ndescription: \n\t{desc|fill76|tabindent}\n\n{files % ' \t{file}\n'}\n----------\n"
As an example, here is a simplified (and cleverly abstracted) block returned:
changeset: 4685
branch: Feature-WI209825
user: Jack Handy <jhandy#anon.com>
date: 01-19-2015 10:19AM
description:
Display monkey swinging from vines while whistling dixie
Zoo/MonkeyCage/Resources/Localization.Designer.cs
Zoo/MonkeyCage/Resources/Localization.resx
Zoo/MonkeyCage/Utility/Extensions.cs
If I were comparing changes locally, I would simply use the following command:
hg bcomp -r 4685 -r default <optional file name>
and then I would get an instance of Beyond Compare with a folder structure and files, and I could just navigate accordingly to view the changes. However, when I attempt to do this with a changeset that has not yet been pulled into my local repository, I can't.
How do I diff incoming changesets with my local repository?
---- UPDATE --------------------------------
I pursued the idea of bundling the incoming changes and then using BC4 to diff the bundle against any given branch/revision in my local repo.
hg in --bundle "C:\Sandboxes\Temp\temp.hg"
This creates a compressed file archive containing all the new changes.
Now I simply need to diff this bundle against my local repository; however, I am having difficulty getting this right. Currently, I am using variations on the following command:
hg -R "C:\Sandboxes\Temp\temp.hg" bcomp -r default
Alas, I am still having difficulty perfecting this...any insight is appreciated.
I don't see how you can: your local repository doesn't yet have that changeset, so Mercurial has no visibility of what the change actually is and can't create a local copy of the revision.
The -p flag to hg incoming will show you the patch for each revision, but that isn't what you want.
Why not just pull the remote changes anyway? It won't hurt unless you actually update. You can then do your diff in the normal way.
hg diff is a local operation.
But you can simply call hg incoming -p in order to obtain a diff view of what you're going to pull. See hg help incoming for more options and refinements (e.g. if you need to diff against a specific rev, etc.).
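For example, to limit the patch output to the incoming changesets of a single branch (using the branch name from the question's example), something like this should work:
hg incoming -p -r Feature-WI209825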
I use Mercurial on desktops, and then push local repositories to a centralized server. I noticed that this remote server does not hold working copies of the files in its repositories (the directory is empty, except obviously for the .hg one).
What is the preferred way to populate these directories with working copies? (These are in turn used by various unrelated services on that server.)
What I have come up with so far is to use a hook and hg archive to create a local copy. This would be a satisfactory solution, but I need to configure a per-repository hgrc file (which is tedious; I did not find a way to centralize this in /etc/mercurial/hgrc). Maybe a global script (in /etc/mercurial/hgrc, run for each changegroup event)? In that case, how can I get the repository name to use in an if...then scenario?
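For reference, a minimal sketch of the per-repository approach described above might look like this in the repository's .hg/hgrc (the export path is hypothetical):
[hooks]
changegroup = hg archive /srv/exports/myproject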
If you can get access to the remote repository, you could install a hook for when changegroups come in, and perform an hg update when that happens.
A quick check shows this in the FAQ (question 4.21), but to summarize/duplicate: edit the .hg/hgrc file on the remote repository, and add the following lines:
[hooks]
changegroup = hg update
Whenever the remote repository gets pushed to (or when it performs a pull), it will update to the latest changeset.
Some caveats: this may fail if any changes have been made to the files on the remote side (you could use hg update -C instead). Also, if you have pushed any anonymous branches (which you would have to consciously force), you may not update to what you want to update to.
So, for example, if there's a Mercurial repository at https://code.google.com/p/potentiallyLarge, is there a command which would allow me to find out its size before cloning it? Something like:
hg size https://code.google.com/p/potentiallyLarge
Also, is there a command for doing this for subversion repositories?
The size used on disk is different from the bandwidth used to make a clone. Some hosting sites (such as Bitbucket) display the size on disk so that you know upfront how much space you'll need on your system before cloning. But I can see that Google Code doesn't, so that won't help you here.
The Mercurial wire protocol doesn't expose any commands that can tell you how big a repository is. When you make a normal clone, the client doesn't know upfront how much data it will receive, it just receives a stream of data. After receiving the changelog, the client knows how many manifests and filelogs to expect, but it doesn't know the size of them.
In fact, it's difficult for the server to compute how much data a clone will use: the network bandwidth used is less than the disk space since the compression used is different (bzip2 vs gzip). However, if you use --uncompressed with your clone (which Google Code doesn't support) then there is a trick, see below.
The only way to know how much bandwidth a clone uses is to make one. If you have a clone already, you can use hg bundle to simulate a clone:
$ hg bundle --all my-bundle.hg
The size of the bundle will tell you how much data there is in the repository.
A trick: If Google Code had supported hg clone --uncompressed, then you could use that to learn the size of a remote repository! When you use --uncompressed, the client asks the server to send the content of the .hg/ directory as-is — without re-compressing it with bzip2. Conveniently, the server starts the stream by telling the client the size of the repository. So you can start such a clone and then abort it (with Control-C) when your client has printed the line telling you the size of the repo.
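A hedged sketch of that trick (the URL is a placeholder, and the exact output wording may vary between Mercurial versions):
$ hg clone --uncompressed https://example.com/big-repo
streaming all changes
12345 files to transfer, 1.2 GB of data
^C    # abort once the size line has been printed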
Update: My answer below is wrong, but I'm leaving it here since MG provided some good info in response. It looks like the right answer is "no".
Not a great way, but a work-around of sorts. An hg clone URL is really just hg init; hg pull URL. And the command hg incoming tells you what you'd get if you did a pull, so you could do:
hg init theproject
cd theproject
hg incoming --stat URL_TO_THE_PROJECT
and get a pretty decent guess of how much data you'll be pulling down if you follow up with:
hg pull URL_TO_THE_PROJECT
I'm not sure about the network efficiency of hg incoming but I don't think it downloads everything from all the changesets, though I could be wrong about that. It offers a --bundle option that saves whatever incoming pulls down to a file from which you can later pull to avoid double downloading.
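A sketch of that --bundle variant (using the same URL placeholder as above):
hg incoming --bundle incoming.hg URL_TO_THE_PROJECT
hg pull incoming.hg    # later, pull from the saved bundle instead of hitting the network again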
After a change in my local repo, I commit with hg commit -m <message>, then I push to my server with hg push, and then on my server I update the working directory with hg update.
Is there a better way to do this?
The first two steps you have described:
hg commit -m <message>
hg push
are required because commits are kept completely separate from the server in Mercurial (and most other DVCSs as well). You could write a post-commit hook to perform the push after each commit, but this is not advised, because it prevents you from correcting simple mistakes after the commit and before the push.
Because you're trying to perform an update on 'the server', I'm assuming you are executing a version of the code in your repository on the server. I'm assuming this because typically the server would simply act as a master repository for you and your developers to access (and also be subject to backups, etc.), and would not need the explicit hg update.
Assuming you are executing code on the server, you can try to replace the push and the update with this command:
hg pull <path to development repo> -u
which will perform a pull from your local repo and then an automatic update. Depending on your server configuration, it might be difficult to get the path to your local repo.
For the first part of the question (i.e. automatically pushing when you do a commit), you can use the trick described in this answer: mercurial automatic push on every commit.
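A minimal sketch of that trick, assuming the push target is already configured under [paths] in your local repository's hgrc:
[hooks]
commit.autopush = hg push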
If you want to automatically update the working directory, you can do this with a hook. Add this in the hgrc of your repository (.hg/hgrc in the repository directory on your server):
[hooks]
changegroup = hg update >&2
This will automatically update the working directory every time a push is made to this server. This hook is described in the Mercurial FAQ.
If you use these 2 solutions, the next time you do hg commit -m "message", the commit will be automatically pushed to the remote server and the working directory on the server will be updated.
There is an extension called autosync you might find useful:
This extension provides the autosync command which automatically and continuously commits working copy changes, fetches (pull, merge, commit) changes from another repository and pushes local changes back to the other repository. Think of configuration files or to-do lists as examples for things to synchronize. On a higher level the autosync command not only synchronizes repositories but working copies. A central repository (usually without a working copy) must be used as synchronization hub: