Improve our client HS descriptor fetch logic
(This was discovered because of #15937 (moved))
Consider a client that makes 6 connection attempts to a .onion in rapid succession: currently we launch 6 descriptor fetches, one to each of the 6 HSDirs. Furthermore, once the descriptor arrives at the client, we do NOT terminate the 5 other pending requests...
I think we should do the following:
- Have parallel descriptor fetches, because multiple requests improve our chances of getting the descriptor faster (maybe). I'm proposing that we query one third of the HSDir set at a time, that is 2 HSDirs (one per replica) out of our 6 in total. (I'm also open to half of the HSDirs; it's just that one third makes it easy to query the replicas in a symmetric way.)
If they all fail, query another subset; if only one fails, relaunch one immediately (so one stalled HSDir doesn't stall everything). That way we always have one third of our set being queried.
- Once the descriptor arrives, terminate the other pending HSDir fetch connections.
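The scheduling above can be sketched as a small state machine. This is a hypothetical Python model, not actual tor code: the names (`DescriptorFetchSet`, `on_failure`, `on_success`) are illustrative, and for simplicity it draws HSDirs from a single pool rather than picking one per replica.

```python
FETCH_FRACTION = 3  # query one third of the HSDir set at a time

class DescriptorFetchSet:
    """Tracks descriptor fetches for one hidden service (sketch only)."""

    def __init__(self, hsdirs):
        self.unqueried = list(hsdirs)  # HSDirs we have not tried yet
        self.pending = set()           # fetches currently in flight
        # e.g. 2 of 6 HSDirs in flight at once
        self.limit = max(1, len(hsdirs) // FETCH_FRACTION)

    def launch(self):
        """Launch fetches until `limit` are in flight (or we run out)."""
        while len(self.pending) < self.limit and self.unqueried:
            hsdir = self.unqueried.pop(0)
            self.pending.add(hsdir)    # in tor this would open a dir conn

    def on_failure(self, hsdir):
        """A fetch failed: relaunch immediately from the unqueried pool,
        so one stalled HSDir never blocks the whole lookup."""
        self.pending.discard(hsdir)
        self.launch()

    def on_success(self, hsdir):
        """The descriptor arrived: cancel every other pending fetch."""
        self.pending.discard(hsdir)
        cancelled = list(self.pending)
        self.pending.clear()           # in tor: close those dir conns
        return cancelled
```

For example, with 6 HSDirs we start 2 fetches; a failure on one of them triggers a third fetch right away, and a success cancels whatever is still pending.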
- When a .onion request arrives on the SocksPort, check whether we already have pending descriptor fetch(es) for that service; if so, wait for them instead of launching more. If the number of pending fetches is below our threshold (2 out of 6 in this example), launch new fetches until we reach that limit, then wait.
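The SocksPort-side check amounts to "top up to the limit, never beyond it". A minimal sketch, assuming a per-service count of in-flight fetches; `on_onion_request`, `pending_fetches`, `launch_fetch`, and `FETCH_LIMIT` are all illustrative names, not tor APIs.

```python
FETCH_LIMIT = 2  # one third of 6 HSDirs in this example

def on_onion_request(pending_fetches, launch_fetch):
    """Handle a new .onion request arriving on the SocksPort.

    `pending_fetches` is the number of descriptor fetches already in
    flight for this service; `launch_fetch()` starts one more fetch.
    Returns the resulting number of pending fetches."""
    # Top up to the limit, but never beyond it: if fetches are already
    # pending, the new request waits on them rather than launching a
    # full new round.
    while pending_fetches < FETCH_LIMIT:
        launch_fetch()
        pending_fetches += 1
    return pending_fetches
```

With 0 fetches pending this launches 2; with 1 pending it launches 1 more; with 2 pending it launches nothing and the request simply waits.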
This should also cover the HSFETCH control command with a .onion address, which doesn't require to have a