  • #7026

Closed (moved)
Created Oct 03, 2012 by Aaron Johnson (@amj703)

Adversary-based metrics

I'd like to evaluate Tor security metrics that are based on adversary models. Some adversary models to consider are

  1. An adversary that can compromise k relays.
  2. An adversary that can contribute b bandwidth.
  3. An adversary that can observe k Autonomous Systems.

Security metrics based on these models that are relevant to Tor include

  1. The probability that the adversary observes the entry and exit traffic of a given connection.
  2. The probability distribution of the number of connections of a user for which the adversary observes entry and exit traffic (guard selection is particularly important here).
  3. The "linkability" of different connections.
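As a rough illustration of the first metric under the bandwidth adversary (model 2), here is a minimal Monte Carlo sketch. It assumes pure bandwidth-weighted, independent selection of entry and exit (ignoring guard pinning, relay families, and /16 exclusion, all of which real Tor path selection applies); the function name and model are my simplifications, not Tor's actual algorithm:

```python
import random

def estimate_compromise_prob(honest_bw, adv_bw, trials=100_000, seed=1):
    """Monte Carlo estimate of the probability that an adversary
    contributing adv_bw units of bandwidth is selected as both the
    entry and the exit of a single circuit, under simple
    bandwidth-weighted selection with replacement."""
    rng = random.Random(seed)
    owners = ["honest"] * len(honest_bw) + ["adv"]
    weights = list(honest_bw) + [adv_bw]
    hits = 0
    for _ in range(trials):
        entry, exit_ = rng.choices(owners, weights=weights, k=2)
        if entry == "adv" and exit_ == "adv":
            hits += 1
    return hits / trials

# With the adversary holding 10% of total bandwidth, the analytic
# answer is 0.1 ** 2 = 0.01; the estimate should land close to that.
print(estimate_compromise_prob([90.0], 10.0))
```

The same harness could be extended toward the second metric by fixing a guard per simulated user and counting compromised connections across many circuits, which makes the impact of guard selection visible as a bimodal distribution.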