TortoiseHg commits are very slow - Mercurial

Commits used to be rather fast, but now they take maybe 10 seconds. Other TortoiseHG operations such as update and push are reasonably fast, but commits have been slow lately. My repo has about 2600 commits; could it need some sort of reindexing to make it faster again? Or is committing always slow on such old repositories?

2600 changesets isn't a "big repo" by any common measure. Apart from ordinary OS-level management tasks, check the state of the repo with hg verify and consult the Repository Corruption wiki page for possible ways of eliminating any problems it finds.
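For instance, a quick health check from the repository root (hg verify only reads the repository, so it is safe to run at any time):

hg verify
# any errors it reports are the cases covered by the
# Repository Corruption page on the Mercurial wiki:
# https://www.mercurial-scm.org/wiki/RepositoryCorruption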

Is it safe to remove bundles in strip-backup folders?

Recently I rewrote a lot of history (Forgive me Father, for I have sinned). Our old repository had a lot of sensitive information as well as unnecessary merges (up to 20 anonymous branches running simultaneously and being merged back indiscriminately), so I stripped several commits, pruned dead branches, rebased and squashed commits, rolled back unnecessary merges, created bookmarks, etc.
We now have a clean repo. I have also run unit tests across several revisions to make sure that I haven't broken anything important. Yesterday I forked the old repo (for backup purposes) and pushed the clean repository upstream. We are a small team and synchronizing changes was not a problem; every developer on my team is already working with the new repo.
Anyway, my local repository now has a .hg/strip-backup folder of around 2 gigabytes.
From what I was able to understand, this folder contains backup bundles for every one of the destructive commands that I have run. I no longer need those.
My question is: Is it safe to remove the bundles inside .hg/strip-backup? Or will I corrupt my local repository if I delete those files?
Bonus question: Is there a built-in Mercurial command to remove the backups, or should I just use rm .hg/strip-backup/*?
Yes, it is safe to remove the whole folder; the information it contains is not referenced by the repository itself.
As a bonus answer, your best option for cleaning up the cache folders is simply to re-clone the repo. Doing so lets you start fresh, and all the temporary files are left behind in the original repo. Replace the original repo with the clone and you won't have to bother with this accumulation of temporary files for a while.
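A rough sketch of both options, assuming the repository lives in a directory named myrepo (the name is a placeholder):

# Option 1: delete the backup bundles directly; they are standalone
# files that nothing else in the repository references.
rm .hg/strip-backup/*

# Option 2: re-clone and swap the clone into place. The clone carries
# only the live history, leaving backups and caches behind.
cd ..
hg clone myrepo myrepo-clean
mv myrepo myrepo-old          # keep the original until you're confident
mv myrepo-clean myrepo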

Discarding local commits to make the current workspace match the central repo in Mercurial

I have a gigantic repo and it takes a while to clone. Every time I make a few commits and realize I have goofed up, I end up deleting the current clone and re-cloning the repo. While this works, it is very time-consuming. Is there a command I can use to discard all my local changes and make my working folder look like my last pull?
You have two options; both assume that the changes exist only locally in your repo:
Have an additional local reference clone that only ever represents what the remote repo looks like. Then you can delete your current throwaway repo and reclone locally from your reference copy, which is much, much faster.
Use the strip command, which lets you trim off branches of history; a sketch follows below. Please be very careful deleting history, since it really is a double-edged sword.
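A minimal sketch of the second option, assuming the remote is the repo's default path and the bundled strip extension is enabled (add "strip =" under "[extensions]" in your hgrc):

hg update -C .            # discard uncommitted changes in the working copy
hg strip "outgoing()"     # remove every local changeset the remote doesn't have

After the strip, the working directory sits on the nearest surviving changeset, i.e. what you last pulled.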

Mercurial - a simple way to lock a repository

My scenario:
A set of shared repositories needs to be locked while a process runs for a given time. After the process is done, I want to unlock the repositories. The process does not run on the repositories themselves but on a different system.
The repositories are not what the process is working on; I just need a time frame during which the repositories are "protected", that is, guaranteed not to change while the process is running.
I want a simple way to lock a repository, so no one can push to it.
If I manually create a .hg/store/lock file with dummy content, do you see any problem with that?
Initial testing shows it works, but I'm concerned that I might not be aware of the implications.
If you just need to generally deny access to the repos for a given period, then you can do it that way. There shouldn't be any side-effects or other consequences.
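A sketch of what that looks like in practice (the repository path is a placeholder; per the question, any dummy content in the lock file will do):

REPO=/path/to/shared/repo
printf 'locked for maintenance\n' > "$REPO/.hg/store/lock"   # writers now block
# ... run the external process on the other system ...
rm "$REPO/.hg/store/lock"                                    # unlock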
Clone the repository and then run your process against the cloned repo.

How do you search across multiple Mercurial repositories

I'm setting up multiple Mercurial repositories for all of our different projects, probably close to 50. Is there a way to search across multiple repos for a specific file or string? For example, say that a database column is renamed; how would I search each repository for any reference to the old column name? I know that I can do this for each repository individually, but with 50 repositories that can be quite time-consuming.
If it's not possible, are there any best practices for structuring your repositories to minimize this pain?
It's not possible -- repositories are entirely independent.
If you want to set up a global search, one way to do it would be to pull all 50 of your repositories into a single repository that can be searched. Something like this:
hg init everything
for therepo in /path/to/repos/*; do
    hg -R everything pull -f "$therepo"
done
Then you can search in everything. You could keep everything current using a cron job, or changegroup hooks on your other repos that push to everything.
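For example (the pattern is a placeholder; hg grep --diff searches across the whole history rather than just the working copy):

hg -R everything grep --diff "old_column_name"

To keep everything current automatically, each source repo's .hg/hgrc could carry a changegroup hook along these lines:

[hooks]
changegroup = hg push -f /path/to/everything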

Mercurial, do I need a server for team-work or can I just create a repository on a network share?

If I want to set up a smallish Mercurial repository for some internal work among a few developers, can I just navigate to a network share and create a repository there, and then just clone that down locally? Or do I need to set up a server (I know, it's easy to do).
This is Windows by the way.
Specifically, I'm wondering if there will be concurrency issues, like abandoned transactions, etc. if multiple users work push/pull simultaneously.
So long as folks interact with the repo using only 'clone', 'push', and 'pull', you're in fine shape. What you can't do is have multiple people committing directly from a shared working directory. However, push, pull, and clone are safe to run against a shared folder from a user's personal repository. All changes end up effectively atomic, and no aborted operation should cause anyone any problems.
When creating that clone, consider using clone -U so it's created without a working directory; that way folks aren't tempted to edit and commit there.
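A sketch of that setup on Windows (server, share, and directory names are placeholders). Publish a clone without a working directory to the share, then have each developer clone it locally:

hg clone -U C:\work\myproject \\server\share\myproject
hg clone \\server\share\myproject C:\dev\myproject

Day to day, everyone commits locally and synchronizes through the share with hg push and hg pull -u.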
There's no reason I can think of why you wouldn't be able to do that. I do something similar, only I use ssh rather than CIFS to access the files. There's no server setup to speak of in either case.
The only thing that came to mind as a possible problem was concurrent access, but you can see for yourself that Mercurial takes care not to allow users to step on each other's toes.