The Tor Project / Applications / HTTPS Everywhere EFF / Issues / #1656
Closed
Created Jul 02, 2010 by Peter Eckersley @pde

More efficient ruleset checking

Currently, every request from the browser incurs an overhead that is O(N), where N is the number of rules (or rulesets with match_rules).

This is not good enough, especially if we intend to include all the rules people are submitting.

O(1) lookups should be possible. One way is a dictionary of target domains, with lookups looking something like this:

If the request is for content at blah.thing.com, we look up

*.thing.com, thing.*.com, blah.thing.*

in the dictionary. For the time being, rulesets should be able to signal which domains they target with at least that level of specificity.
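A minimal sketch of that lookup scheme in Python (the names `candidate_keys` and `lookup_rulesets` are hypothetical, not from the extension):

```python
def candidate_keys(host):
    # Keys to probe for a hostname, in the spirit of the
    # *.thing.com / blah.thing.* patterns above: the exact host,
    # each left-wildcard suffix, and one right-wildcard form.
    labels = host.split(".")
    keys = [host]
    for i in range(1, len(labels) - 1):
        keys.append("*." + ".".join(labels[i:]))
    keys.append(".".join(labels[:-1]) + ".*")
    return keys

def lookup_rulesets(host, targets):
    # targets maps each declared target pattern to its rulesets.
    # Each probe is a single hash lookup, so the cost depends on
    # the number of labels in the host, not the number of rulesets.
    hits = []
    for key in candidate_keys(host):
        hits.extend(targets.get(key, []))
    return hits
```

So a request for content at blah.thing.com probes only blah.thing.com, *.thing.com, and blah.thing.* against the dictionary, regardless of how many rulesets are installed.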

E.g., Google.xml targets:

google.* www.google.* google.com.* www.google.com.* google.co.* www.google.co.*

BUT, if we ever had to worry about *.google.* this wouldn't be enough...
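The indexing side can be sketched under the same assumption of a plain dict keyed by declared target patterns (`register_targets` and `right_wildcard` are hypothetical names):

```python
targets = {}

def register_targets(ruleset, patterns):
    # Index a ruleset under every target pattern it declares.
    for pattern in patterns:
        targets.setdefault(pattern, []).append(ruleset)

register_targets("Google.xml",
                 ["google.*", "www.google.*", "google.com.*",
                  "www.google.com.*", "google.co.*", "www.google.co.*"])

def right_wildcard(host):
    # Replace the final DNS label with "*", e.g.
    # www.google.co.uk -> www.google.co.*
    return host.rsplit(".", 1)[0] + ".*"
```

Here a request for www.google.co.uk probes www.google.co.*, which is one of Google.xml's declared targets. A pattern like *.google.* really would fall outside this scheme: no fixed set of probes derived from the hostname can cover a wildcard on both ends.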
