Tor issues
https://gitlab.torproject.org/tpo/core/tor/-/issues
Updated: 2020-08-17T14:06:05Z

**Support generating HS private key / onion address without publishing**
https://gitlab.torproject.org/tpo/core/tor/-/issues/22304 · segfault · updated 2020-08-17

While developing Tails Server, we encountered the need to know the onion address of a service before making it available via Tor. It would be awesome if this could be achieved via the control port, e.g. with a `DontPublish` flag to `ADD_ONION`.
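A hypothetical shape for the request; the `DontPublish` flag below is the proposal, not part of the current `ADD_ONION` syntax, while the rest is standard:

```
ADD_ONION NEW:ED25519-V3 Flags=DontPublish Port=80,127.0.0.1:8080
```

Tor would reply with `ServiceID` and `PrivateKey` as usual, just without publishing a descriptor.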
**create exitnode socksportoption since .node.exit is bad idea**
https://gitlab.torproject.org/tpo/core/tor/-/issues/21502 · Trac · updated 2020-07-28

If a user only needs to get around hosts that block their country, they could previously use the special `.exit` hostname notation. Now you can set, for example, `ExitNodes {US}`.

But that breaks down if you want to use more than one such geo-blocked service at once, or do regular browsing without limiting node selection. One sadly has to spawn additional client instances today.
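What this asks for might look like the following torrc sketch; note that the per-port `ExitNodes` flag is invented here for illustration and does not exist today:

```
# hypothetical per-port flag; today each such line needs its own tor instance
SocksPort 9050 ExitNodes={US}
# regular browsing, unrestricted node selection
SocksPort 9051
```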
**Trac**:
**Username**: acceleraTor
Milestone: Tor: unspecified

**Treat directory guard success only as a partial success for the guard?**
https://gitlab.torproject.org/tpo/core/tor/-/issues/21424 · Nick Mathewson · updated 2020-07-28

Right now, we treat having received data from a directory guard as that guard having succeeded. But this could be trouble: it doesn't actually mean that the guard will make circuits nicely. We could use a notion of 'partial success' for guards, possibly? Or a separate directory/circuit success track?

Milestone: Tor: unspecified

**Exits can get the Exit flag without having any ports in their microdescriptor port summary**
https://gitlab.torproject.org/tpo/core/tor/-/issues/21413 · teor · updated 2022-06-22

Almost all clients, relays, and authorities use microdescriptors by default.
Microdescriptor port summaries include a port if it exits to almost all IPv4 addresses (i.e. blocks no more than an IPv4 /7).

But the Exit flag is given if at least two of the ports 80, 443, and 6667 exit to at least an IPv4 /8.

This means a relay can get the Exit flag without having any of these ports in its IPv4 exit policy summary.

I suggest we only award the Exit flag if an exit has at least two of the ports 80, 443, and 6667 in its IPv4 exit policy summary.
This also requires a spec change for the Exit flag.
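A concrete, hypothetical policy illustrating the mismatch described above:

```
# Ports 80 and 443 exit to one full /8, which satisfies the Exit flag rule,
# yet the policy blocks far more than an IPv4 /7, so neither port appears
# in the microdescriptor port summary.
ExitPolicy accept 1.0.0.0/8:80
ExitPolicy accept 1.0.0.0/8:443
ExitPolicy reject *:*
```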
**Support domain isolation for onion connections too?**
https://gitlab.torproject.org/tpo/core/tor/-/issues/21237 · Roger Dingledine · updated 2022-06-17

Right now there's a timing channel leak between isolation domains, where one isolation domain can get some hints about whether I've been to a certain onion domain lately, because if I have (and I have a cached onion descriptor, and/or an open rendezvous circuit) then it will load faster.

If we tagged intro and rendezvous circuits with their socks isolation domains, and we tagged cached onion descriptors with their socks isolation domains, then we could remove this timing channel -- but at the cost of a bunch more work and delays for connections that are already high-work and high-delay.

I'm not sure if it's worth it on the Tor side, especially since this is just a timing channel. But I bet somewhere out there are Tor Browser users who are expecting the tab isolation to work, and I fear that it doesn't (fully) when it comes to onion services.

**Reduce NumDirectoryGuards to 1**
https://gitlab.torproject.org/tpo/core/tor/-/issues/21006 · Nick Mathewson · updated 2022-03-22

Right now it sits at 3, but asn makes some good points about reducing it in discussions on legacy/trac#20831.

**Disposable Exit Nodes**
https://gitlab.torproject.org/tpo/core/tor/-/issues/20676 · Trac · updated 2020-07-27

Just like bridges are designed to get past blocking of guard nodes, it would be great to see a similar concept but this time for exit relays. I'm suggesting making available the possibility to set up "throwaway exit relays" that:
1. last for a specific period of time (say 1 day),
2. can be configured for access to specific sites (for example, one uses the Tor Button to use exit X to access www.example.com),
3. aren't -- obviously -- published in the official directories; whether a given IP was such an exit at time t can still be checked with the standard tool for that purpose, but this information is not disclosed until after the disposable exit relay's lifetime has ended,
4. can be made either public (available for the Tor Project to distribute, just as it does with bridges) or private.
I think that even if the public type may be extremely difficult -- especially seeing how hard it already is to keep the rate of new bridges above the rate at which the adversary learns them -- it would still be a good idea to have the private type.
**Trac**:
**Username**: madystar
Milestone: Tor: unspecified

**Stream isolation for DNS**
https://gitlab.torproject.org/tpo/core/tor/-/issues/20555 · adrelanos · updated 2022-06-17

It seems that Tor's DNS cache (`CacheIPv4DNS`, `CacheIPv6DNS`) and its cache of hidden service descriptors are both global.
The first connection, in stream one, resolves all the DNS names or hidden service descriptors. But follow-up connections in separate streams to the same website do not resolve again; they use Tor's cache.
So web servers could serve a slightly unique version of their website to each visitor: each visitor's browser could be instructed to load additional content from varying hostnames. By observing which of those resolve from cache and which do not, it might be possible to make visitors pseudonymous rather than anonymous.
The problem is that Tor's cache is global rather than stream-isolated.
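For context, stream isolation itself is already configurable per listener with real SocksPort flags; the shared caches described above simply ignore it. A minimal sketch:

```
# Streams on this port are isolated by SOCKS credentials and destination
# address, but DNS answers cached from any one stream are reused by all.
SocksPort 9050 IsolateSOCKSAuth IsolateDestAddr
```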
**Revise initial descriptor upload behavior for onion services**
https://gitlab.torproject.org/tpo/core/tor/-/issues/20524 · twim · updated 2020-06-27

According to `rend-spec.txt`:

```
When uploading descriptors, the hidden service needs to make sure that
descriptors for different clients are not uploaded at the same time (cf.
Section 1.1) which is also a limiting factor for the number of clients.
```
At the moment it's unclear how this should be implemented, and why.

* What is the threat model here?
* How exactly should descriptors be uploaded?
* In what range should the delays be set?
* How will this work once the delays are absent after legacy/trac#20082?

Milestone: Tor: 0.3.2.x-final
Assignee: David Goulet (dgoulet@torproject.org)

**Lower HSDir query backoff delay**
https://gitlab.torproject.org/tpo/core/tor/-/issues/20371 · twim · updated 2020-08-03

At the moment this value (`REND_HID_SERV_DIR_REQUERY_PERIOD`) equals 15 minutes.
That's pretty long. E.g. if one tries to reach an onion service whose descriptor has not yet been published (due to chaotic uptime [Ricochet], etc.), they're unable to reach the service for 15 minutes. See some discussion about it in legacy/trac#20082.

Should it be 15 minutes or something less? If something else, then why?

**Onion services startup time always gets revealed**
https://gitlab.torproject.org/tpo/core/tor/-/issues/20262 · twim · updated 2020-06-27
Due to dead code in `rend_consider_services_upload()`, the startup time of onion services always gets revealed.

If the service descriptor has not been uploaded yet, we add a random delay in [rendinitialpostdelay; rendinitialpostdelay + rand(2*rendpostperiod)], i.e. [30s; 30s + 2h]:
```
if (!service->next_upload_time) {
  service->next_upload_time =
    now + rendinitialpostdelay + crypto_rand_int(2*rendpostperiod);
```
But this delay is useless when we're checking whether we should upload:
```
if (intro_points_ready &&
    (service->next_upload_time < now ||
     (service->desc_is_dirty &&
      service->desc_is_dirty < now-rendinitialpostdelay))) {
  /* Upload */
```
Because the descriptor is dirty for never-yet-uploaded services, it always gets uploaded once the service has been stable for `rendinitialpostdelay` seconds; `next_upload_time` lies further in the future than that stabilization window, so it never takes effect.

So it goes.

I made a patch to fix this function so it behaves properly; one possible shape of the corrected check is sketched below.
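A minimal sketch of what the corrected condition could look like (illustrative, not the actual patch):

```
/* Sketch: require the randomized next_upload_time to have passed even for
 * never-yet-uploaded ("dirty") descriptors, so the first upload no longer
 * reveals the service's startup time. */
if (intro_points_ready &&
    service->next_upload_time < now &&
    (!service->desc_is_dirty ||
     service->desc_is_dirty < now - rendinitialpostdelay)) {
  /* Upload */
```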
But it raised a problem. We are used to descriptors being uploaded pretty soon after startup, whereas now they would be uploaded with a delay of up to 2 hours. That's not okay. Should we add a `torrc` option like `RevealOnionServiceStartupTime` that defaults to 1?

Milestone: Tor: 0.3.0.x-final
Assignee: twim

**Let large client deployments use a local directory cache**
https://gitlab.torproject.org/tpo/core/tor/-/issues/20132 · teor · updated 2020-06-27

One of the things that concerns me about large tor client farms is that they download a ~1.5MB consensus per client per hour.
This is a particular concern with large deployments of bridges, hidden services (particularly with OnionBalance and/or single onion services), and Tor2web.
One way to work around this issue is to set up a number of local Tor directory caches (unadvertised relays) on the machines hosting the Tor client instances. Then the clients can be told to use these directory caches to retrieve their directory documents.
Ideally, each client should be configured with a few caches in the same data center, just in case one goes down.
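The cache side of this workaround is expressible with existing torrc options; a sketch (the matching client-side option is exactly what this ticket asks for):

```
# An unadvertised directory cache: caches and serves directory documents,
# but never publishes its descriptor, so it stays out of the consensus.
ORPort 9001
DirCache 1
PublishServerDescriptor 0
ExitRelay 0
```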
It would really help to have a client option for this in Tor, but there is a tradeoff - compromise that relay, and you own all the clients.
For Tor2web and Single Onion Services, this almost works already using EntryNodes, but we disable EntryGuards in order to turn off path bias detection. Also, Single Onion Services use 3-hop paths for HSDir posts, and we want Tor2web to use 3-hop paths for HSDir fetches to avoid denial of service (legacy/trac#20104).

Milestone: Tor: unspecified

**Lower initial descriptor upload delay for hidden services**
https://gitlab.torproject.org/tpo/core/tor/-/issues/20082 · twim · updated 2020-08-17

At the moment the descriptor gets posted MIN_REND_INITIAL_POST_DELAY (30) seconds after onion service initialization.
For the use case of real-time, one-time services (like OnionShare, etc.), one has to wait 30 seconds until the onion service can be reached. Besides, if a client tries to reach the service before its descriptor is first published, the tor client gets stuck, preventing the user from reaching the service even after the descriptor is published. Like this:
` Could not pick one of the responsible hidden service directories to fetch descriptors, because we already tried them all unsuccessfully. `
I propose lowering MIN_REND_INITIAL_POST_DELAY to 3-5 seconds for ephemeral services. That seems to be enough for one-shot services to stabilize.
Not sure if it's really bad to do so -- tell me if it is. If such a short delay isn't a good idea for all ephemeral services, we could pass the delay as a parameter to the ADD_ONION command, so that applications which need a low delay can tune it.
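A hypothetical shape for that parameter; the `PostDelay` keyword is invented here for illustration:

```
ADD_ONION NEW:ED25519-V3 Port=80,127.0.0.1:8080 PostDelay=3
```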
Please see a patch below for making this delay as short as 3 seconds for ephemeral services.

**Expose stream isolation information to controllers**
https://gitlab.torproject.org/tpo/core/tor/-/issues/19859 · Nick Mathewson · updated 2020-07-24

See the discussion on the "How to integrate an external name resolver into Tor" thread on tor-dev; most notably http://archives.seul.org/tor/dev/Aug-2016/msg00019.html.
Resolvers would like to know the isolation information of incoming streams so they know which streams need to be isolated from which other streams.
Semantically, this is a little tricky. The underlying rule that Tor implements is that each stream has a tuple of attributes (A_1, A_2... A_n), and a bit field (b_1, b_2... b_n). Two streams S_a and S_b may share the same circuit iff, for every i such that the OR of their b_i values is true, they have the same A_i value.
Note that this is not transitive: stream S_a may be able to share a circuit with S_b or with S_c, even if S_b cannot share with S_c.
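The rule is mechanical enough to state as code; a self-contained sketch (the types and names here are invented for illustration, not tor's):

```
#include <stdbool.h>

#define N_ISO_ATTRS 4          /* illustrative attribute count */

typedef struct {
  unsigned attrs[N_ISO_ATTRS]; /* A_1 .. A_n */
  bool isolate[N_ISO_ATTRS];   /* b_1 .. b_n */
} stream_iso_t;

/* Streams a and b may share a circuit iff, for every attribute that either
 * stream wants isolated, their attribute values agree. */
static bool
streams_may_share_circuit(const stream_iso_t *a, const stream_iso_t *b)
{
  for (int i = 0; i < N_ISO_ATTRS; ++i) {
    if ((a->isolate[i] || b->isolate[i]) && a->attrs[i] != b->attrs[i])
      return false;
  }
  return true;
}
```

With a single attribute this shows the non-transitivity: b = (value 1, not isolating) may share with both a = (value 1, isolating) and c = (value 2, not isolating), while a and c may not share.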
Should we (1) expose these attribute tuples and bitfields and require controllers to manipulate them correctly? That seems obnoxious and error-prone.
Or should we (2) allow controllers to ask questions like "may stream A share a circuit with stream B?" Or "what streams may A share a circuit with?" This might lead to O(n) queries, and it will still be error-prone because of the non-transitivity issue.
Or would it be better to (3) oversimplify the system above and provide each stream a 'cookie' such that any two streams with the same cookie may definitely share the same circuit? But this is problematic, and will overestimate how much isolation we need.
My current best idea is that (4) we should provide an operation of the form "make stream A have the same isolation properties as stream B". And possibly "make circuit C have isolation properties as if it had been used by stream A". So we don't expose isolation information, we just expose a way to manipulate it.
Or maybe there's a further clever way I'm not even thinking about just now.

Milestone: Tor: 0.4.3.x-final

**Make it even harder to become HSDir**
https://gitlab.torproject.org/tpo/core/tor/-/issues/19162 · George Kadianakis · updated 2023-03-13

In legacy/trac#8243 we started requiring the `Stable` flag for becoming an HSDir, but this is still not hard enough against motivated adversaries. Hence we need to make it even harder for a relay to become an HSDir, so that only relays that have been around for a long time get the flag. After prop224 gets deployed, there will be less incentive for adversaries to become HSDirs, since they won't be able to harvest onion addresses.
Until then, our current plan is to increase the bandwidth and uptime required to become an HSDir to something almost unreasonable -- for example, requiring an uptime of over 6 months, or maybe requiring that the relay is in the top quarter of uptimes on the network.

Milestone: Tor: unspecified
Assignee: Roger Dingledine

**When DisableNetwork is 1 but !circuit_enough_testing_circs(), we can still launch circuits**
https://gitlab.torproject.org/tpo/core/tor/-/issues/19069 · Roger Dingledine · updated 2022-06-24

In consider_testing_reachability(), we check
```
if (test_or && (!orport_reachable || !circuit_enough_testing_circs())) {
```
Once legacy/trac#18616 is merged, the first function will return 1 for orport_reachable when DisableNetwork is 1, so that bug will go away.
But it will remain the case that if !circuit_enough_testing_circs(), we will proceed to call circuit_launch_by_extend_info(), even when DisableNetwork is 1.
There are four places that call consider_testing_reachability():
* circuitbuild.c:circuit_send_next_onion_skin()
* circuituse.c:circuit_testing_opened()
* main.c:directory_info_has_arrived()
* main.c:check_for_reachability_bw_callback()
I think the middle two are safe, since they shouldn't happen while DisableNetwork is set.
I think the first one is iffy, since it's called from a bunch of places so I'm not sure, but given the name I hope it doesn't get called during DisableNetwork.
And I think the fourth one is bad news, since it gets called periodically and doesn't check DisableNetwork.
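One possible shape of a fix, as a sketch rather than a real patch: bail out of consider_testing_reachability() early whenever the network is disabled, so the circuit_enough_testing_circs() branch can never launch a circuit:

```
void
consider_testing_reachability(int test_or, int test_dir)
{
  const or_options_t *options = get_options();
  /* Sketch: refuse to launch any reachability-test circuits while the
   * network is administratively disabled. */
  if (options->DisableNetwork)
    return;
  /* ... existing reachability checks follow ... */
}
```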
**diversity weighting system**
https://gitlab.torproject.org/tpo/core/tor/-/issues/18795 · Trac · updated 2020-07-27

This idea came up while thinking about how to prefer fallbacks located in less famous areas, to increase the diversity of dir-mirror/Tor traffic (legacy/trac#18749). Diversity could be honored with a higher diversity weight for relays located in countries with rare Tor usage, and likewise for unique address ranges, little-known ASes, or OSes that are not widely used. Such a diversity weighting system could apply beyond choosing/rating fallbacks: data like the "Top-10 countries by directly connecting users" on the metrics page could be used, for instance, to lower the chance of leading traffic to countries with a high number of mean daily users, or to raise the chance of picking a circuit through a little-known AS. Could this be something?
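Purely as an illustration of what a "diversity weight" could mean -- the formula and its input are invented here, not anything tor implements:

```
/* Map a country's share of directly connecting users (0.0 .. 1.0) to a
 * selection-weight multiplier: rare countries keep almost their full
 * weight, popular ones are damped (a 100% share maps to 0.1). The
 * constant 9.0 is an arbitrary tuning knob. */
static double
diversity_multiplier(double country_user_share)
{
  return 1.0 / (1.0 + 9.0 * country_user_share);
}
```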
**Trac**:
**Username**: tscpd
Milestone: Tor: unspecified

**Separate the various roles that directory authorities play, from a configuration POV**
https://gitlab.torproject.org/tpo/core/tor/-/issues/18346 · Nick Mathewson · updated 2022-10-11

It would be handy if the following roles were split up (see the sketch after the list):
1) The list of IP:Orport:Identity to which every relay should upload every descriptor.
2) The list of IP:Orport:Identity from which caches should expect to find canonical consensuses and descriptors.
3) The list of IP:Orport:Identity from which non-caches should expect to bootstrap consensuses and descriptors. (See 'fallbackdir')
4) The list of keys that must sign a vote or a consensus.
5) The list of IP:Orport:Identity that authorities use when sending and receiving votes.
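For illustration only -- every option name in the hypothetical split below is invented, and today all five roles are implied by a single DirAuthority line (moria1's well-known entry is used as the example):

```
# today: one line per authority carries every role at once
DirAuthority moria1 orport=9101 v3ident=D586D18309DED4CD6D57C18FDB97EFA96D330566 128.31.0.39:9131 9695DFC35FFEB861329B9F1AB04C46397020CE31

# hypothetical split, one option per role (names invented for this sketch)
DescriptorUploadTarget 128.31.0.39:9131 9695DFC35FFEB861329B9F1AB04C46397020CE31
CacheConsensusSource   128.31.0.39:9131 9695DFC35FFEB861329B9F1AB04C46397020CE31
ClientBootstrapSource  128.31.0.39:9131 9695DFC35FFEB861329B9F1AB04C46397020CE31
ConsensusSigningKey    D586D18309DED4CD6D57C18FDB97EFA96D330566
VotingPeer             128.31.0.39:9101 9695DFC35FFEB861329B9F1AB04C46397020CE31
```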
Splitting roles up in this way would better prepare us for an implementation of prop#257 down the road.

Milestone: Tor: 0.4.7.x-freeze
Assignee: Nick Mathewson

**Update RSOS Proposal (260)**
https://gitlab.torproject.org/tpo/core/tor/-/issues/18307 · teor · updated 2020-06-27

See Roger's email and my reply for updates to the RSOS proposal:
https://lists.torproject.org/pipermail/tor-dev/2016-February/010401.html

Milestone: Tor: 0.3.2.x-final

**Anti-Automated-Scanning: Support "marking" with iptables TCP connections differently "for each circuit"**
https://gitlab.torproject.org/tpo/core/tor/-/issues/18142 · naif · updated 2022-06-16

This ticket is about supporting the "marking" with iptables of TCP connections differently "for each circuit".
The basic idea is that a Tor exit operator, in order to reduce automated scanning, may wish to apply specific rate limiters available from the iptables stack of their Linux machine.
The usual Tor connection pattern of an automated scan, from a Tor exit relay's point of view, is that a single circuit opens a lot of TCP connections to the same host within a relatively short amount of time.
The usual HTTP(S) connection pattern of a normal browser, from a Tor exit relay's point of view, is to open a bunch of connections to the same IP and keep them open with keep-alive.
So, if the tor software made the "individual marking" of all TCP connections coming out of a specific circuit available to the iptables stack, the exit operator could apply rate limiting tuned finely enough to break automated scanning efficiency without breaking normal end-user browsing.
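A sketch of the operator side, assuming tor gained a hypothetical option to stamp each circuit's exit connections with a distinct fwmark. Stock iptables hashlimit cannot key on a packet mark, so this sketch uses an nftables meter keyed on (mark, destination address) instead:

```
# Drop new connections once a single circuit opens them toward a single
# destination faster than 1/second; unmarked and keep-alive traffic is
# unaffected.
nft add table ip torexit
nft add chain ip torexit out '{ type filter hook output priority 0 ; }'
nft add rule ip torexit out tcp flags syn meta mark != 0 \
  meter circscan { meta mark . ip daddr timeout 60s limit rate over 1/second } \
  counter drop
```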
Obviously, this only works against automated scanners that make no specific effort to bypass this particular prevention technique -- which should cover most of them.