20 votes

Basically, what it says on the tin: TortoiseHg is slow.

My team moved from Subversion to Mercurial recently (in part to take advantage of Kiln for code reviews). One of the things we've noticed is that interacting with Mercurial through TortoiseHg is painfully slow. Some stats:

  • Open TortoiseHg Workbench: 8 minutes 13 seconds
  • Response time when clicking on a revision: 2.8 seconds
  • Time to "Refresh Current Repository": 6.4 seconds
  • Time to check for incoming changes: 12.8 seconds

All this really adds up to a very slow feeling application. For reference, here are the command line tool times:

  • hg status: 4.573 seconds
  • hg incoming: 12.150 seconds

The command-line times seem to jibe with the Workbench times, but the Workbench makes the delay much more frustrating because it is synchronous with your use of the program. For example, a typical task is "get the latest stuff my coworker just pushed". It looks like this (listing only the time spent waiting on the computer, rounded):

  • Open TortoiseHg: 10 minutes.
  • Open the appropriate repository by double-clicking in the repository registry: 5 seconds.
  • Commit local changes that need committing:
    • Click on "Working Directory": 5 seconds.
    • Select important files and type a commit message.
    • Press Commit: 20 seconds.
  • Get coworker's changes:
    • Check for incoming changesets: 10 seconds.
    • Review them.
    • Accept incoming changesets: 40 seconds.
  • Shelve unready changes:
    • Open Shelve dialog: 2 seconds.
    • Shelve remaining files: 6 minutes.
    • Refresh: 5 seconds.
  • Merge:
    • Click the other head: 3 seconds.
    • Merge with local:
    • Wait for "Clean" verification: 15 seconds.
    • Wait for merge (assuming no conflicts): 10 seconds.
    • Commit: 30 seconds.
  • Unshelve changes:
    • Open Shelve dialog: 2 seconds.
    • Unshelve: 6 minutes.
    • Refresh: 5 seconds.

Total: 24 minutes, 32 seconds.

Twelve of those minutes are spent shelving and unshelving. Ten are spent just opening the application. One consequence is that people tend to commit stuff they aren't sure will go anywhere just to avoid the shelving cost. But even if you assume no shelving and no opening cost (maybe you just leave it open), it still takes two and a half minutes of meticulous clicking to get the latest changes.

And that doesn't even count the more significant stuff like cloning and whatnot. Everything is this slow.

I have:

  • Disabled antivirus.
  • Disabled indexing.
  • Rebooted.
  • Tried it on three different versions of Windows.
  • Tried it on varying hardware, most of it reasonable quality: Core 2 Duo @ 3.16 GHz, 8 GB RAM.
  • Tried it on 32- and 64-bit OSes.
  • Tried it disconnected from a network.

The repository is actually two repositories: a primary repo and a sub-repo that contains all our third-party binaries.

  • Primary repo: the .hg folder is 676 MB; the contents of default are 7.05 GB; 13,438 files; average file size 563 KB; max file size 170 MB.
  • Sub-repo: the .hg folder is 641 MB; the contents of default are 642 MB; 57,087 files; average file size 23 KB; max file size 132 MB.

I have big-push, caseguard, fetch, gestalt, kbfiles, kiln, kilnauth, kilnpath, mq, purge, and transplant extensions enabled.

Any ideas where to start figuring out how to speed stuff up? The slowness is driving us crazy.
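For anyone debugging something similar: one way to narrow this down is to time the command-line client with each extension disabled individually, using Mercurial's `--config extensions.NAME=!` override (which disables an extension for a single invocation). This is just a sketch, assuming Python 3 and `hg` on the PATH, run from inside a working copy; the function and variable names are mine, not part of any tool:

```python
import subprocess
import time

# The extensions enabled in my setup; disabling each in turn isolates a culprit.
EXTENSIONS = ["big-push", "caseguard", "fetch", "gestalt", "kbfiles",
              "kiln", "kilnauth", "kilnpath", "mq", "purge", "transplant"]

def time_command(cmd):
    """Run a command, discard its output, and return wall-clock seconds."""
    start = time.perf_counter()
    subprocess.run(cmd, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    return time.perf_counter() - start

def status_time_without(ext):
    """Time `hg status` with one extension disabled for this invocation only."""
    return time_command(["hg", "status", "--config", "extensions.%s=!" % ext])

# Usage (inside a working copy):
#     for ext in EXTENSIONS:
#         print(ext, status_time_without(ext))
```

If one extension accounts for most of the delay, the timing for that line will drop sharply relative to the others.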

I wonder if it is trying to access / timing out on network drives? Maybe try running it on a machine not on the network, even if it is just to test startup time of the Workbench. – prunge
It's definitely not normal TortoiseHg speed; here it is lightning fast. – Laurens Holst
What extensions do you have enabled? Are the repos tracked by Workbench on a local hard drive or on a network drive? – Tim Henigan
Is using the Mercurial binaries directly (instead of TortoiseHg) making any difference? – Geo
OK, I've added answers to the questions about repository size and which extensions are enabled. None of the data is on a network drive at all. All the local clones are on local drives. The remote repos are accessed over SSH, if it matters. (Though none of the slow stuff appears to be remote access stuff.) I did try disabling the network and performance does not appear to have been affected. – alficles

3 Answers

28 votes

OK, answering my own question, because I found the answer while following Tim's advice.

The culprit is kbfiles from Fog Creek. Disabling it dropped stat times from 12 seconds to 0.7 seconds. Likewise, the GUI now opens faster than I can time. Re-enabling it slows everything down drastically again.

It doesn't look like every slow operation can be blamed on kbfiles, but the worst of it can. (Specifically, shelve is still pretty slow; it's CPU bound. We can work around that, though.)
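For reference, an extension can be disabled globally by adding a `name = !` line to the `[extensions]` section of your user `.hgrc` (or `Mercurial.ini` on Windows). A minimal fragment for kbfiles would look like this:

```ini
[extensions]
kbfiles = !
```

The `!` marker tells Mercurial not to load the extension even if it is enabled elsewhere in the configuration chain.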

2 votes

That is a ton of files... and some are awfully big. How does it perform without the larger files? Binary files aren't exactly the best thing to track with hg/git, in my humble opinion.

What about breaking the big repo up into smaller ones? Do they really need to be in two HUGE repos?

Maybe defragmenting the hard drives could slightly improve some of those times. Also look at the extensions that have been created specifically to help deal with big binary files. See here:

https://www.mercurial-scm.org/wiki/HandlingLargeFiles
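For example, if you end up trying the bundled largefiles extension described on that page, a minimal configuration might look like the following (the `minsize` and `patterns` values here are illustrative, not recommendations):

```ini
[extensions]
largefiles =

[largefiles]
minsize = 2        ; track files larger than 2 MB as largefiles
patterns = **.dll  ; also track files matching these patterns
```

Files matching the size threshold or patterns are then stored outside normal history tracking, which keeps operations like status and clone from paying the full cost of the binaries.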

1 vote

In some cases the advice given in the documentation may be useful for improving TortoiseHg's speed:

5.4.8. Performance Implications

There are some Workbench features that could have performance implications in large repositories.

View ‣ Choose Log columns…

  • Enabling the Changes column can be expensive to calculate on repositories with large working copies, causing both refreshes and scrolling to be slow.

View ‣ Load all

  • Normally, when the user scrolls through the history, chunks of changesets are read as you scroll. This menu choice allows you to have the Workbench read all the changesets from the repository, probably allowing smoother moving through the history.

In my own experience these are definitely worth doing! You should at least try them and see if there is a noticeable effect.


Also, as noted in Why is mercurial's hg rebase so slow?, there is a setting which can speed up rebase significantly:

By default, rebase writes to the working copy, but you can configure it to run in-memory for better performance, and to allow it to run even if the working copy is dirty. Just add the following lines to your .hgrc file:

[rebase]
experimental.inmemory = True