# Trac issues

Source: https://gitlab.torproject.org/legacy/trac/-/issues (feed dated 2020-06-13)

## #21214: Based on measurement of #21205, write/analyze additional proposals and tickets for lowering bw usage for directory stuff
https://gitlab.torproject.org/legacy/trac/-/issues/21214 (Nick Mathewson, updated 2020-06-13)

Based on the measurements we get from #21205, we'll probably learn more about some actual bandwidth needs, and the circumstances when dir BW is overused. We should add tickets to fix the bugs, possibly with proposals, based on what we find out.

Milestone: Tor: 0.3.2.x-final

## #21213: Write and analyze proposals for fetching consensuses/microdescriptors less frequently?
https://gitlab.torproject.org/legacy/trac/-/issues/21213 (Nick Mathewson, updated 2020-06-13)

**The idea**: Our current algorithm for deciding whether you need a new consensus is ad hoc; we just picked an interval more or less at random.
Depending on the results from #21205, we may learn that it's not as necessary as we had thought for a client to fetch consensuses and microdescriptors so often. If that's the case, we should have proposals and analyses for (optionally?) decreasing the frequency of our downloads.
There may be different results here for "busy" and "not so busy" clients.
Of course, the analysis needs to include the security impact.

Milestone: Tor: 0.3.2.x-final · Assignee: Nick Mathewson

## #21212: Write and analyze proposals for transmitting microdescriptors with less bandwidth
https://gitlab.torproject.org/legacy/trac/-/issues/21212 (Nick Mathewson, updated 2020-06-13)

**The idea:** I can think of a few ways to lower the amount of bandwidth we use for downloading microdescriptors, without actually fetching any fewer. Would any of them be worthwhile? We should analyze them and write proposals for them.
* Do we frequently download small batches of microdescriptors? If so, fetching them in larger batches would get us better compression.
* Do we frequently download small batches of microdescriptors? If so, zlib dictionaries would get us better compression.
* When a client moves from one consensus to another, the set of microdescriptors that the client wants is almost determined by the difference between those two consensuses. (I say "almost" because the client may have other mds that occurred in even earlier consensuses.) Can we have clients download microdescriptors in batches that depend on the consensus, rather than batches that are named by the microdescriptor digests? This would have these benefits:
* HTTP requests would get much shorter.
* Batching many microdescriptors together would improve compression.
  * Batching a *predictable group* of microdescriptors together would enable us to spend more CPU on compressing those groups, since we wouldn't need to compress so many different groups. (See #21211)

Milestone: Tor: 0.3.2.x-final · Assignee: Nick Mathewson

## #21211: Write and analyze proposals for compressing consensus (diff)s with better algorithms
https://gitlab.torproject.org/legacy/trac/-/issues/21211 (Nick Mathewson, updated 2020-06-13)

**The idea:** Consensus documents are compressed with zlib, but nobody has to compress any given consensus more than once. Therefore, we can safely use more CPU compressing them, and save bandwidth on consensus downloads by switching to something else instead of zlib for consensuses.
This same analysis also applies to consensus diffs.
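To get a rough feel for the kind of savings being discussed, here is a stdlib-only sketch. The sample data is fabricated and merely consensus-shaped (not a real Tor consensus), lzma simply stands in for "something stronger than zlib", and the preset-dictionary demo corresponds to the zlib-dictionary bullet in #21212; real numbers would have to come from actual consensuses.

```python
import base64
import hashlib
import lzma
import zlib

# Fabricated, consensus-shaped sample data (NOT a real Tor consensus):
# repetitive flag/bandwidth lines plus unique digest-like fields.
lines = []
for i in range(500):
    digest = base64.b64encode(hashlib.sha256(str(i).encode()).digest()).decode()
    lines.append(f"r node{i} {digest} 2020-06-13 127.0.0.1 9001 0\n"
                 f"s Fast Running Stable Valid\n"
                 f"w Bandwidth={i * 10}\n")
consensus = "".join(lines).encode()

# Idea from this ticket: spend more CPU compressing once, save bandwidth
# on every download. lzma is just a convenient stand-in for a stronger codec.
plain_zlib = len(zlib.compress(consensus, 9))
strong = len(lzma.compress(consensus, preset=9))

# Idea from #21212: seed zlib with a preset dictionary of common strings,
# which mainly helps small documents (e.g. a tiny batch of mds).
small = lines[0].encode()
zdict = b"s Fast Running Stable Valid\nw Bandwidth="
comp = zlib.compressobj(level=9, zdict=zdict)
with_dict = len(comp.compress(small) + comp.flush())
no_dict = len(zlib.compress(small, 9))

print("full doc, zlib:        ", plain_zlib, "bytes")
print("full doc, lzma:        ", strong, "bytes")
print("small doc, zlib:       ", no_dict, "bytes")
print("small doc, zlib + dict:", with_dict, "bytes")
```

On repetitive input like this, the stronger codec comes out ahead of zlib on the full document, and the preset dictionary helps most on tiny documents, which matches the intuition behind both tickets.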
For this ticket, we should look at the code complexity and potential bandwidth savings here, and decide whether they are worth it.

Milestone: Tor: 0.3.1.x-final · Assignee: Alexander Færøy (ahf@torproject.org)

## #21210: Analyze, and maybe improve, consensus diff proposal
https://gitlab.torproject.org/legacy/trac/-/issues/21210 (Nick Mathewson, updated 2020-06-13)

We should use stats to re-run our numbers on the consensus diff proposal (140), and see how much bandwidth we expect to save. We should consider the impact of this proposal alongside alternative or related proposals, such as ones that would cause clients to download the consensus less frequently.

Milestone: Tor: 0.3.1.x-final · Assignee: Nick Mathewson
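As a toy illustration of why consensus diffs should save bandwidth, here is a stdlib-only sketch with fabricated data. Note the assumptions: the entries below are made up, and proposal 140 specifies its own ed-style diff format, not the unified diffs used here for convenience.

```python
import difflib
import zlib

# Fabricated consensus-like entries; a small fraction change between
# "old" and "new", which is the situation the diff proposal targets.
old = [f"r node{i} id{i} 203.0.113.{i % 255} 9001\n" for i in range(1000)]
new = list(old)
for i in range(0, 1000, 50):  # ~2% of entries change port between consensuses
    new[i] = new[i].replace("9001", "9030")

full = "".join(new).encode()
# Unified diff stands in for proposal 140's ed-style diff format.
diff = "".join(difflib.unified_diff(old, new)).encode()

full_sz = len(zlib.compress(full, 9))
diff_sz = len(zlib.compress(diff, 9))
print("compressed full consensus:", full_sz, "bytes")
print("compressed diff:          ", diff_sz, "bytes")
```

With only a few percent of entries changing, the compressed diff is a small fraction of the compressed full document; re-running the numbers with real consecutive consensuses, as the ticket asks, would pin down the actual savings.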