Mercurial list all files changed during last update - mercurial

Is there a command to list all files that changed during the last update?
hg status --change tip
gives me just the files of the last changeset.
I could use
hg status --rev FROM --rev TO
but then I'd have to store the rev from before the update.

You could work out what will change prior to performing an hg update by doing a status and passing in the current and tip revisions: hg status --rev . --rev tip.
This will show you which files will change when you perform the update, but it won't help after the fact, as Tim Delaney mentioned. I'm assuming that you want to know what has changed because you are updating after a pull? In that case it may be better to know what will change before you do the update anyway.

No. Mercurial does not store any information about the previous state of the working directory once an update has been performed. During an update it may store some state to help with recovering after an interrupted update, but nothing that would be useful for your use case.

Mercurial doesn't keep track of how often you run hg update, so it can't tell you directly which files were touched by the last update. However, it only rewrites the files that actually change, leaving everything else's timestamps alone, so you can look at modification times to see what the update touched. To find the files modified in the last five minutes, use
find . -mmin -5 -print
The alternative would be to record what Mercurial is going to update before you run it, as icabod suggested.

If after you do "hg update" you want to find out what changed during the update, you might try this:
First, use "hg log -l N" (for a suitable N) to list the most recent changesets and spot the revision you were on before the update; call it REV.
Next, use "hg diff -r REV" to show all the differences introduced since that revision.
And, if you just want a summary of the files that changed, use "hg diff --stat -r REV".

Related

Discarding local commits making current workspace same as central repo in Mercurial

I have a gigantic repo and it takes a while to clone. Every time I make a few commits and realize I have goofed up, I end up deleting the current clone and re-cloning the repo. While this works, it is very very time consuming. Is there any command that I can use to just discard all my local changes and make my working folder look like my last pull?
You have two options; both assume that the changes only exist locally in your repo:
Have an additional local reference clone that only ever represents what the remote repo looks like. Then you can delete your current throwaway repo and reclone locally from your reference copy, which is much, much faster.
Use the strip command, which lets you trim off branches of history. Be very careful when deleting history, since it really is a double-edged sword.

Splitting a mercurial commit with renames and edits into two commits (first rename, then edit)

Is there a way I can modify the history in mercurial in order to split one commit into two separate commits?
The first of these should contain just renames/moves and the second should contain the edits. This would help with interoperability with other version control systems (e.g. perforce).
I'm hoping it's possible to automate this process with a script.
It's possible, with manual work, using the MQ extension: first convert the commit into an MQ patch, then split it into two patches, and finally qfinish the patches into permanent changesets.

Mercurial: Merging with uncommitted changes?

I am trying to learn the merging and conflict-resolution workflow in Mercurial. Am I supposed to commit any uncommitted changes in my working directory before I merge with another changeset?
What would happen if I merge before committing changes in my working directory?
Merging with uncommitted changes is a fundamentally unsound action. A merge can go wrong, and when it does you want to be able to revert to your previous state, which is only possible if those changes are committed. If you can't bear to create a new changeset at that time, commit them into a Mercurial Queues (MQ) patch.

How can I manually trigger a pull from Mercurial in a Jenkins/Hudson job?

I've set up a job in Jenkins that polls my Mercurial repository, using the Mercurial plugin. This works well once I push to the repository. I can trigger a build of the job manually, but I can't trigger the hg pull/update that happens as part of the poll, which means I have to wait up to 60 seconds for the build to start with my new changes. Sometimes I'm pushing changes that I know will affect and possibly break the system build and want faster feedback. What's the best way to pull/update before a manual build?
Is your quiet period set? You can change it to 0 to trigger builds immediately (http://jenkins-ci.org/content/quiet-period-feature)
Also, you could have two jobs, one that you have right now, and a second one that only polls for changes. The "poller" could trigger your current job when it sees changes ("Build after project").
If you're having issues with update/pull in hg, you can use an "Execute shell" build step to do the update yourself, since you're firing off the build manually. You can also have the job build periodically, which makes the pull happen at whatever interval you set, so you don't have to worry about polling your SCM.
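For example (the repository URL and branch are hypothetical), an "Execute shell" build step placed before the real build steps could do the pull/update itself:

```shell
# Jenkins "Execute shell" build step, run in the job's workspace
hg pull https://hg.example.com/myrepo
hg update --clean default
```

Note that this duplicates what the Mercurial plugin normally does, so it is only worthwhile if the plugin's own checkout step isn't firing when you build manually.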
It seems that I am wrong. I must have missed something in the log when I was initially testing this, or maybe I hit the manual build link before the push went through to the server. Jenkins seems to perform a hg incoming then hg unbundle then hg update at the start of each build, even when the build is triggered manually, which is exactly what I wanted.

How do you search across multiple mercurial repositories

I'm setting up multiple Mercurial repositories for all of our different projects, probably close to 50. Is there a way to search across multiple repos for a specific file or string? For example, say that a database column is renamed; how would I search each repository for any reference to the old column name? I know that I can do this for each repository individually, but with 50 repositories that can be quite time-consuming.
If it's not possible, are there any best practices then for structuring your repositories to minimize this pain?
It's not possible -- repositories are entirely independent.
If you wanted to set up a global search one way to do it would be to push all your 50 repositories into a single repository that can be searched. Something like this:
hg init everything
for therepo in /path/to/repos/* ; do
    hg -R everything pull -f "$therepo"
done
Then you can search in everything. You could keep everything current using a cron job or changegroup hooks on your other repos that do a push to everything.