Sponsor4 - Core Tor Team Objective
Objective 1: Empower users who have limited access to powerful devices and fast networks to use secure, private networks more easily, with faster, more stable online interactions.
SubObjective 1.1: Reduce Tor overhead for low-bandwidth scenarios.
Coordination between various components of the Tor Network and Tor clients creates a lot of processing overhead for clients, including browsers and chat applications. This can make communicating and interacting over Tor feel painfully slow for users. By reducing one particular kind of network "housekeeping" communication, we believe we can noticeably speed up client interactions, with a corresponding improvement in user experience.
Currently, network servers called Directory Authorities publish, each hour, a file listing the relays available on the network; every Tor client downloads this file, which is called the consensus. By changing the size and frequency of consensus communications, we should be able to significantly reduce Tor's overhead.
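Because a client's directory overhead is roughly the product of document size and fetch frequency, improvements to either one multiply together. Here is a minimal back-of-envelope sketch in Python; every number below is an illustrative assumption, not a measurement:

```python
# Back-of-envelope: directory overhead ~ document size x fetch frequency.
# All numbers are illustrative assumptions, not measurements.
consensus_size_bytes = 500 * 1024   # assumed compressed consensus size
fetches_per_day = 24                # assumed: one fetch per hourly consensus

baseline = consensus_size_bytes * fetches_per_day
print(f"baseline: {baseline / 2**20:.1f} MiB/day")

# Smaller documents (e.g., diffs) and less frequent fetches multiply:
diff_ratio = 0.4    # assumed: a diff averages 40% of a full consensus
fetch_ratio = 0.5   # assumed: clients fetch half as often
print(f"improved: {baseline * diff_ratio * fetch_ratio / 2**20:.1f} MiB/day")
```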
The core Tor team has already drafted or begun drafting several proposals for improving how consensus works for low-bandwidth users, including:
- Transmitting only changes ("diffs") to relay availability (likely bandwidth reduction: 30-60%); see the sketch after this list.
- Compressing the consensus file more effectively (additional reduction: 8-12%).
- Reducing the volume of summarized relay server descriptions (microdescriptors) that clients download to complete relay connections.
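The diff approach sends only the lines that changed between two consecutive consensus documents, in an ed-style format. The sketch below is an illustrative reconstruction of that idea using Python's difflib; the actual format and application rules are specified in proposal 140, and the relay lines here are made up.

```python
import difflib

def ed_style_diff(old_lines, new_lines):
    """Emit ed-style edit commands that turn old_lines into new_lines.
    Illustrative only; proposal 140 defines the real diff format."""
    commands = []
    sm = difflib.SequenceMatcher(a=old_lines, b=new_lines)
    # Emit hunks bottom-up, as ed scripts do, so earlier line numbers
    # stay valid while the edits are applied in order.
    for tag, i1, i2, j1, j2 in reversed(sm.get_opcodes()):
        if tag == "equal":
            continue
        if tag == "delete":
            commands.append(f"{i1 + 1},{i2}d")
        else:  # "replace" or "insert"
            addr = f"{i1 + 1},{i2}c" if tag == "replace" else f"{i1}a"
            commands.append(addr)
            commands.extend(new_lines[j1:j2])
            commands.append(".")
    return commands

# Made-up relay lines: only one entry changed between consensuses,
# so the diff is far smaller than the full document.
old = ["r relayA ...", "r relayB ...", "r relayC ..."]
new = ["r relayA ...", "r relayB-updated ...", "r relayC ..."]
print("\n".join(ed_style_diff(old, new)))
```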
Under this sub-objective, we will review these proposals and select the most promising options for addressing slow-access problems.
Activity:
Improve the Directory Authority consensus part of the Tor network in order to optimize bandwidth use for low-bandwidth clients.
Implementation plan - draft
Under this sub-objective, we will reduce our bandwidth overhead for low-bandwidth scenarios as much as possible, with a focus on the directory protocol.
MASTER TICKETS FOR MEASUREMENT AND DESIGN: #21205, #21209.
Here are some questions we should work on answering soon, to focus our work (a measurement sketch follows the list):
- What actually is the bandwidth overhead for a few simple client situations?
- How much of that is in the directory protocol? In what parts?
- How "micro" and "slow-changing" are microdescriptors?
- How often does the average client actually fetch a consensus IRL?
- Is our current approach to "when do we build circuits" good enough?
- Do unused clients actually go dormant wrt downloads in the way we would like them to?
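To make the first two questions concrete, one approach is to tally downloaded directory bytes by document type. The sketch below uses a hypothetical, invented log format purely for illustration; real measurement for #21205 would instrument the client's directory code.

```python
import collections

# Invented (doc_type, compressed_bytes) records standing in for an
# instrumented client's directory-download log.
log = [
    ("consensus", 450_000),
    ("microdesc", 2_100),
    ("microdesc", 1_800),
    ("certs", 9_000),
]

totals = collections.Counter()
for doc_type, nbytes in log:
    totals[doc_type] += nbytes

grand_total = sum(totals.values())
for doc_type, nbytes in totals.most_common():
    print(f"{doc_type:10s} {nbytes:8d} B  {100 * nbytes / grand_total:5.1f}%")
```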
Here are the mitigations we could begin on a "near" timeframe (a compression-comparison sketch follows the list):
- Consensus diffs (proposal 140; code in progress).
- Improved consensus compression (needs proposal).
- Avoiding small microdesc downloads (needs proposal).
- Additional zlib tuning (needs writeup).
- Microdesc "diff" downloads (needs proposal).
- Less frequent consensus fetches (needs proposal).
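For the compression-related items, candidate schemes can be compared side by side on a cached consensus document with a small harness like this sketch (the input here is synthetic relay-list-like text, and the candidate list is only illustrative):

```python
import lzma
import zlib

# Synthetic, repetitive relay-list-like input; a real comparison would
# read an actual cached consensus file instead.
sample = b"\n".join(b"r relay%05d AAAABBBBCCCCDDDD 9001 0" % i
                    for i in range(5000))

candidates = {
    "zlib level 6 (default)": lambda d: zlib.compress(d, 6),
    "zlib level 9 (tuned)":   lambda d: zlib.compress(d, 9),
    "lzma/xz":                lambda d: lzma.compress(d),
}

for name, compress in candidates.items():
    out = compress(sample)
    print(f"{name:24s} {len(out):8d} B "
          f"({100 * len(out) / len(sample):.1f}% of original)")
```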
But here's what we should do first:
- Measure! #21205. Begin in January, start having results within 2 weeks, coding done within 1 month, all results done within 2 months.
- Design! #21209. Begin in January, finish within 2 months, depending on measurement.
- Implement! #21215. Begin within a month. Additional time TBD based on previous steps.
- Allow post-implementation time for debugging, and further improvements TBD as we find them.
I've opened subtickets for #21205 and #21209. Some of the items that are labeled "low-bandwidth" or "sponsor4" may become subtickets of #21215, but we don't know yet.