Write and analyze proposals for transmitting microdescriptors with less bandwidth
The idea: I can think of a few ways to lower the amount of bandwidth we use for downloading microdescriptors, without actually fetching any fewer of them. Would any of them be worthwhile? We should analyze them and write proposals for them.
Do we frequently download small batches of microdescriptors? If so, fetching them in larger batches would get us better compression.
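To illustrate why larger batches should help, here is a minimal sketch using Python's `zlib`. The document bodies are made-up stand-ins for microdescriptors (real mds share many common keywords, which is the property the sketch relies on):

```python
import zlib

# Hypothetical stand-ins for microdescriptor bodies: many small, similar
# documents that share common keywords, as real microdescriptors do.
docs = [
    ("onion-key key%04d\nntor-onion-key nk%04d\n"
     "p accept 80,443\nid ed25519 id%04d\n" % (i, i, i)).encode()
    for i in range(100)
]

# Compressing each document separately, as many small fetches would:
separate = sum(len(zlib.compress(d)) for d in docs)

# Compressing the same documents as one batch: redundancy between
# documents is now visible to the compressor.
batched = len(zlib.compress(b"".join(docs)))

print(separate, batched)  # the batch should be far smaller in total
```

The per-stream zlib header and checksum overhead alone already favors batching; the cross-document redundancy is the bigger win.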
Similarly, if we frequently download small batches, compressing them against a preset zlib dictionary of strings common to all microdescriptors would get us better compression even when the batch stays small.
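A sketch of the preset-dictionary idea, again with `zlib`. The dictionary contents here are invented for illustration; a real proposal would derive the dictionary from a corpus of actual microdescriptors, and both sides would have to agree on it out of band:

```python
import zlib

# Hypothetical preset dictionary: substrings that recur across
# microdescriptors (keywords, common policy lines, etc.).
zdict = (b"onion-key\nntor-onion-key \np accept 80,443\n"
         b"p reject 1-65535\nid ed25519 ")

doc = b"onion-key\nntor-onion-key QQQQ\np accept 80,443\nid ed25519 XYZ\n"

# A single small document compresses poorly on its own:
plain = zlib.compress(doc)

# With a shared dictionary, common substrings are matched against the
# dictionary, so even a tiny document shrinks substantially:
c = zlib.compressobj(zdict=zdict)
with_dict = c.compress(doc) + c.flush()

# The receiver must use the same dictionary to decompress:
d = zlib.decompressobj(zdict=zdict)
assert d.decompress(with_dict) + d.flush() == doc

print(len(plain), len(with_dict))
```

Note that `zdict` support requires Python 3.3+; the underlying deflate feature is standard zlib, so the same approach is available from C.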
When a client moves from one consensus to another, the set of microdescriptors that the client wants is almost determined by the difference between those two consensuses. (I say "almost" because the client may also still need microdescriptors listed in even earlier consensuses that it never fetched.) Can we have clients download microdescriptors in batches named by the pair of consensuses, rather than in batches named by the individual microdescriptor digests? This would have these benefits:
- HTTP requests would get much shorter.
- Batching many microdescriptors together would improve compression.
- Batching a ''predictable group'' of microdescriptors together would enable us to spend more CPU on compressing those groups, since we wouldn't need to compress so many different groups. (See #21211 (moved))
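The batch a client wants under this scheme can be sketched as a set difference over microdescriptor digests. The digest values below are placeholders, and the "stragglers" step covers the "almost" caveat above:

```python
# Hypothetical md digests listed in two successive consensuses.
old_consensus = {"d1", "d2", "d3", "d4"}
new_consensus = {"d2", "d3", "d4", "d5", "d6"}

# A directory cache could precompress exactly this predictable batch
# ("mds added between consensus A and consensus B") once, for all clients:
needed = new_consensus - old_consensus

# A client may also be missing older mds it never fetched; those would
# still need a fallback request by digest:
already_have = {"d2", "d4"}
stragglers = (new_consensus & old_consensus) - already_have

print(sorted(needed), sorted(stragglers))
```

Since every client moving between the same two consensuses wants the same `needed` set, the request can name the consensus pair instead of listing digests, which is what shortens the HTTP request and makes the compressed batch cacheable.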