>>1558
In some ways, anyone who syncs with the public tag repo already has 1.7m files with 25m tag mappings. The client is fairly agnostic about whether you actually have a file on your hard drive or not; it mostly treats files you don't have as a different search domain that the gui may or may not be able to provide thumbnails for. You can do those searches right now by selecting 'all known files' and 'public tag repo' on the autocomplete. The only restriction is that you can't search on detailed file info like size or mime, because the tag repo only stores/provides hash info.
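To illustrate that restriction, here is a minimal sketch of the idea, not the client's actual schema (the table and column names are hypothetical): the repo side only ever knows (hash, tag) pairs, so a size or mime predicate has to join against local metadata, and any hash you don't actually have drops out of the result.

```python
import sqlite3

con = sqlite3.connect(":memory:")

# Hypothetical simplification: local files carry full metadata,
# while the tag repo only provides (hash, tag) mappings.
con.executescript("""
    CREATE TABLE local_files (hash TEXT PRIMARY KEY, size INTEGER, mime TEXT);
    CREATE TABLE repo_mappings (hash TEXT, tag TEXT);
""")

con.executemany("INSERT INTO local_files VALUES (?, ?, ?)",
                [("aaa", 512000, "image/png")])
con.executemany("INSERT INTO repo_mappings VALUES (?, ?)",
                [("aaa", "blue sky"), ("bbb", "blue sky")])

# 'all known files' search: every hash the repo maps to the tag,
# whether or not it is on disk.
print(con.execute(
    "SELECT hash FROM repo_mappings WHERE tag = ?",
    ("blue sky",)).fetchall())  # [('aaa',), ('bbb',)] -- 'bbb' is not on disk

# Adding a size predicate forces a join against local metadata,
# so hashes we only know from the repo silently disappear.
print(con.execute(
    """SELECT m.hash FROM repo_mappings m
       JOIN local_files f ON f.hash = m.hash
       WHERE m.tag = ? AND f.size > ?""",
    ("blue sky", 100000)).fetchall())  # [('aaa',)]
```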
That said, I understand there are some users with local file counts in the high six figures, and they don't report terrible problems. The main thing they wanted was multiple hard drive support, which is why I recently worked on that. My laptop has 280k files totalling 223GB and it runs ok.
I am confident the client is stable up to a very, very high number of files and tags (although trying to display 80,000+ results on a single search page seems to hit a memory limit somewhere), but the real problem is that it gets slow. Counting up autocomplete results and processing new mappings from a tag repo takes more CPU as taglist complexity and the number of files increase. Thankfully, this is something I can ultimately improve by drilling down to the slow parts of my code and rewriting or extending them with caches and so on. All I need is profile reports and time to work. (I can also just wait 12 months until CPUs and memory get faster!)
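As a rough illustration of the kind of caching I mean (hypothetical names, not the client's real code): rather than re-counting millions of mapping rows every time the autocomplete fires, you keep a per-tag count that gets bumped as new mappings are processed, so the lookup cost stops depending on the total number of mappings.

```python
from collections import Counter

class AutocompleteCountCache:
    """Toy incremental cache: tag -> number of mappings seen for it."""

    def __init__(self):
        self._counts = Counter()

    def add_mappings(self, mappings):
        # mappings is an iterable of (hash, tag) pairs, e.g. fresh repo updates.
        # A real version would also dedupe repeated (hash, tag) pairs.
        for _hash, tag in mappings:
            self._counts[tag] += 1

    def matches(self, prefix):
        # Scans only the cached tags instead of re-counting every mapping;
        # a real implementation would index prefixes as well.
        return sorted(
            (tag, count) for tag, count in self._counts.items()
            if tag.startswith(prefix))

cache = AutocompleteCountCache()
cache.add_mappings([("aaa", "blue sky"), ("bbb", "blue sky"), ("aaa", "blue eyes")])
print(cache.matches("blue"))  # [('blue eyes', 1), ('blue sky', 2)]
```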
Of course, if you personally experience instability when you increase your file count, let me know the details and we can try to figure out what is going on.