More efficient ruleset checking

Currently, every request from the browser incurs an overhead that is O(N), where N is the number of rules (or rulesets with match_rules).

This is not good enough, especially if we intend to include all the rules people are submitting.

O(1) lookups should be possible. One way is a dictionary keyed by target domain, with lookups working something like this:

If the request is for content at blah.thing.com, we look up

*.thing.com, thing.*.com, blah.thing.*

in the dictionary. For the time being, rulesets should be able to signal which domains they target with at least that level of specificity.
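
A minimal sketch of what that lookup could look like, assuming candidate keys are generated by replacing each DNS label with a wildcard in turn. Everything here (rulesetsByTarget, candidateKeys, the exact key scheme) is illustrative, not existing code:

// Hypothetical index from target pattern to the rulesets declaring it.
type Ruleset = { name: string };
const rulesetsByTarget: Map<string, Ruleset[]> = new Map();

// Candidate keys for a hostname: the exact host, plus the host with
// each label replaced by "*" in turn, so "blah.thing.com" yields
// ["blah.thing.com", "*.thing.com", "blah.*.com", "blah.thing.*"].
function candidateKeys(host: string): string[] {
  const labels = host.split(".");
  const keys = [host];
  for (let i = 0; i < labels.length; i++) {
    const saved = labels[i];
    labels[i] = "*";
    keys.push(labels.join("."));
    labels[i] = saved;
  }
  return keys;
}

// Each request now costs O(k) dictionary probes, where k is the number
// of labels in the hostname, independent of the total ruleset count.
function rulesetsFor(host: string): Ruleset[] {
  return candidateKeys(host).flatMap((key) => rulesetsByTarget.get(key) ?? []);
}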

For example, Google.xml would signal that it targets:

google.* www.google.* google.com.* www.google.com.* google.co.* www.google.co.*
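
Populating the dictionary would then be a one-time pass over each ruleset's declared targets; continuing the hypothetical names from the sketch above:

// Register a ruleset under every target pattern it declares.
function registerRuleset(ruleset: Ruleset, targets: string[]): void {
  for (const target of targets) {
    const existing = rulesetsByTarget.get(target);
    if (existing) {
      existing.push(ruleset);
    } else {
      rulesetsByTarget.set(target, [ruleset]);
    }
  }
}

// Usage sketch with the Google.xml targets listed above:
registerRuleset({ name: "Google.xml" }, [
  "google.*", "www.google.*",
  "google.com.*", "www.google.com.*",
  "google.co.*", "www.google.co.*",
]);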

BUT, if we ever had to worry about *.google.*, this wouldn't be enough...