However, in my Vertico package (and in other continuously updating
UIs), sorting is still the big bottleneck of the UI when there are
many candidates, even with the optimizations included.
Therefore I am using a `vertico-sort-threshold` there.
Maybe there are potential improvements at a lower level?
If O(N log N) is still too slow, then I think it's safe to say that the
problem is that N is too large: we can try and shave off a factor of `c`
or even the `log N` by optimizing the implementation, but that just
pushes the "too large" a bit further and sooner or later you'll have to
bite the bullet and introduce some "threshold" beyond which you reduce
the functionality.
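The threshold idea can be sketched roughly as follows. This is a
hypothetical Python illustration, not Vertico's actual implementation;
the function name, threshold value, and sort key are all assumptions:

```python
def maybe_sort(candidates, threshold=500):
    # Beyond the threshold, skip sorting entirely and return the
    # candidates as-is, trading functionality for responsiveness.
    if len(candidates) > threshold:
        return candidates
    # Below the threshold, sort by length, then lexicographically.
    return sorted(candidates, key=lambda c: (len(c), c))
```

The point is only that past some N, the O(N log N) cost is avoided
altogether rather than optimized.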
In theory, if we want to optimize the speed as much as possible without
reducing the functionality, we could try to:
- First partition the set of candidates into those that appear in the
  history and those that don't. This is linear time.
- Sort the ones that appear in the history based on their position
  there: no need to check length or alphabetic order in this case.
  This is O(N log N), but the N should be significantly smaller.
- If you already have enough candidates to fill the display, you can
  stop at this point and just use those candidates.
- The remaining candidates can be sorted by their length, grouping
  same-length candidates into sublists. This could even be
  more-or-less linear time with some kind of bucket sort.
- Finally, sort each of those sublists in lexicographic order.
  This is again O(N log N), but again the N should be significantly
  smaller, and we can stop as soon as we've found enough candidates to
  fill the display.
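The steps above could be sketched like this (a hypothetical Python
sketch for clarity, not Emacs Lisp; the function signature and the
`display_limit` parameter are assumptions):

```python
def sort_candidates(candidates, history, display_limit):
    """Multi-stage sort: history matches first (by history position),
    then the rest bucketed by length and sorted lexicographically
    within each bucket, stopping once the display is filled."""
    hist_pos = {c: i for i, c in enumerate(history)}

    # 1. Partition into in-history vs. not (linear time).
    in_hist, rest = [], []
    for c in candidates:
        (in_hist if c in hist_pos else rest).append(c)

    # 2. Sort the history part by position in the history only.
    result = sorted(in_hist, key=hist_pos.__getitem__)
    if len(result) >= display_limit:
        return result[:display_limit]

    # 3. Bucket the remaining candidates by length (roughly linear,
    #    a simple form of bucket sort).
    buckets = {}
    for c in rest:
        buckets.setdefault(len(c), []).append(c)

    # 4. Sort each bucket lexicographically, shortest lengths first,
    #    stopping as soon as the display is filled.
    for length in sorted(buckets):
        result.extend(sorted(buckets[length]))
        if len(result) >= display_limit:
            break
    return result[:display_limit]
```

Each sort here operates on a subset of the candidates, so even though
the worst case is still O(N log N), the effective N per stage is much
smaller, and the early exits avoid sorting buckets that can never
reach the display.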