The Tor Project / Network Health / Metrics / Analysis · Issues · #31439
Closed (moved)
Issue created Aug 17, 2019 by irl (Owner)

Identify poorly performing relays from historical OnionPerf data

This closes the loop on legacy/trac#31435 (moved): it lets us validate that the poorly performing relays have indeed been excluded as we hoped, and then check that we are not excluding so much capacity that overall performance degrades for everyone.

This task does not depend on legacy/trac#31435 (moved), however, and can be progressed independently.

Some questions to answer:

  1. Low-hanging fruit: are there trends we should be aware of before diving into this data? For example, are there "peak hours" when things are slower, or do things slow down on weekdays versus weekends?
  2. How much data do we have to look at to identify with some certainty that a relay is slow? (1 day, 1 week, 1 month, 1 year?)
  3. What is our metric for "poorly performing"?
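As a starting point for question 3, one candidate metric is to compare each relay's median time-to-first-byte against the network-wide median and flag relays that exceed it by some factor. The sketch below is illustrative only: the record layout, the sample fingerprints, and the threshold factor are all assumptions, not the actual OnionPerf schema or an agreed definition of "poorly performing".

```python
# Hypothetical sketch: flag "poorly performing" relays from OnionPerf-style
# measurements. Record layout and the threshold factor are assumptions.
from collections import defaultdict
from statistics import median

# Each measurement: (relay fingerprint, time-to-first-byte in seconds).
# Fingerprints and timings below are made-up illustrative data.
measurements = [
    ("AAAA", 0.8), ("AAAA", 0.9), ("AAAA", 1.1),
    ("BBBB", 4.5), ("BBBB", 5.2), ("BBBB", 4.9),
    ("CCCC", 1.0), ("CCCC", 1.2), ("CCCC", 0.7),
]

def slow_relays(measurements, factor=3.0):
    """Return relays whose median TTFB exceeds `factor` times the
    network-wide median TTFB (an assumed, illustrative metric)."""
    by_relay = defaultdict(list)
    for fingerprint, ttfb in measurements:
        by_relay[fingerprint].append(ttfb)
    network_median = median(ttfb for _, ttfb in measurements)
    return sorted(
        fingerprint
        for fingerprint, ttfbs in by_relay.items()
        if median(ttfbs) > factor * network_median
    )

print(slow_relays(measurements))  # → ['BBBB']
```

How much history to feed into such a metric, and where to set the factor, are exactly the open questions above; grouping the same measurements by hour of day or day of week would address question 1.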