People often want to know how Xapian will scale. The short answer is "very well" - a previous version of the software powered BrightStation's Webtop search engine, which offered search over around 500 million web pages (around 1.5 terabytes of database files). Searches took less than a second.
In a large search application, I/O will end up being the limiting factor. So you want a RAID setup optimised for fast reading, and lots of RAM in the box so the OS can cache plenty of disk blocks (the access patterns typically mean that caching only a few percent of the database eliminates most disk cache misses).
It also means that reducing the database size is usually a win. Quartz compresses the information in its tables in ways which suit the nature of the data but aren't too expensive to unpack (e.g. lists of sorted docids are stored as differences, with smaller values encoded in fewer bytes). This could be improved a little by using a bitstream instead of a bytestream.
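To make that concrete, here is a minimal sketch of the delta-plus-variable-byte idea in C++ (illustrative only - it is not quartz's actual on-disk encoding):

    #include <cstdint>
    #include <vector>

    // Encode a sorted docid list as differences between consecutive docids,
    // packing each difference into as few bytes as possible: 7 payload bits
    // per byte, with the top bit set on every byte except the last one of
    // each value. Decoding simply reverses this, accumulating the deltas.
    std::vector<uint8_t> encode_docids(const std::vector<uint32_t>& docids) {
        std::vector<uint8_t> out;
        uint32_t prev = 0;
        for (uint32_t docid : docids) {
            uint32_t delta = docid - prev;  // small for dense posting lists
            prev = docid;
            while (delta >= 0x80) {
                out.push_back(uint8_t(delta | 0x80));
                delta >>= 7;
            }
            out.push_back(uint8_t(delta));
        }
        return out;
    }

A bitstream would shave off some of the one-bit-in-eight continuation overhead, which is the small improvement alluded to above.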
Another way to reduce disk I/O is to run databases through quartzcompact. Quartz usually leaves some spare space in each block so that updates are more efficient; quartzcompact makes all blocks as full as possible, and so reduces the size of the database. It also produces a database with revision 1, which is inherently faster to search. The penalty is that updates will be slow for a while, since they'll cause a lot of block splitting while all the blocks are full.
In general, I'd suggest a strategy of splitting the data over several databases. Once each database has finished being updated, run it through quartzcompact to prepare it for searching.
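Xapian can then search the whole set as a single combined index. Here's a minimal sketch using the C++ API (the database paths are hypothetical, and this uses the current Xapian::Database interface for concreteness):

    #include <xapian.h>

    // Combine two compacted archive databases with the database currently
    // being updated; Xapian::Enquire can search the result as one database.
    Xapian::Database open_combined() {
        Xapian::Database db("archive1");
        db.add_database(Xapian::Database("archive2"));
        db.add_database(Xapian::Database("current"));
        return db;
    }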
A multiple-database scheme works particularly well if you want a rolling web index: the contents of the oldest database can be rechecked, and the still-live documents put back into a new database which, once built, replaces the oldest database. It's also good for news-type applications where older documents should expire from the index.
You could take this idea further by implementing an ultra-compact read-only backend which would take a quartz B-tree and convert it to something like a cdb hashed file. The same information would be stored, but with less overhead than quartz B-trees (which need to allow for updates). If you need serious performance, implementing such a backend is worth considering.
The quartz database backend (which is the backend you'd usually use) stores the indexes in several files containing B-tree tables. If you're indexing with positional information (for phrase searching), the term positions table is usually the largest.
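Whether the term positions table gets populated is determined at index time: only terms added with positional information are recorded in it. A sketch, again using the current C++ API with a hypothetical database path:

    #include <xapian.h>

    void index_one_document() {
        Xapian::WritableDatabase db("mydb", Xapian::DB_CREATE_OR_OPEN);

        Xapian::Document doc;
        doc.add_posting("hello", 1);  // recorded in the positions table,
        doc.add_posting("world", 2);  // enabling phrase searching
        doc.add_term("greeting");     // indexed without positional data

        db.add_document(doc);
    }

So if you don't need phrase searching, indexing with add_term() alone keeps what is usually the largest table out of the database almost entirely.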
The current limits are: