Trac issues (https://gitlab.torproject.org/legacy/trac/-/issues)

#30638: Test banning ed25519 keys in the approved-routers file on moria1
https://gitlab.torproject.org/legacy/trac/-/issues/30638
Reported by teor; updated 2020-06-13T15:41:47Z

After #22029 merges to master, we should test that we can ban ed25519 keys on the public Tor network.
We should email arma after the merge, and close this ticket once he confirms that the feature works.

Milestone: Tor: unspecified. Assignee: David Goulet <dgoulet@torproject.org>.

#30487: dirmngr goes berserk making tor requests after gpg --recv-key attempt ends
https://gitlab.torproject.org/legacy/trac/-/issues/30487
Reported by Roger Dingledine; updated 2020-06-13T15:41:31Z

I'm not sure where we should actually file this ticket, but I'm going to start here so I can get my logs up somewhere before they disappear.
I run Debian, and have the single line "use-tor" in my ~/.gnupg/dirmngr.conf.

I unslept my laptop recently, and did a
```
torify gpg --recv-key ...
```
which hung. Eventually I ctrl-C'ed it.
Later, I realized that my Tor was working really hard to make connections. Here is a little snippet from 'setevents circ stream orconn'
```
650 STREAM 2394 CLOSED 8 8.8.8.8:53 REASON=DONE
650 STREAM 2398 NEW 0 [2001:610:1:40cc::9164:b9e5]:11371 SOURCE_ADDR=127.0.0.1:54162 PURPOSE=USER
650 STREAM 2398 SENTCONNECT 10 [2001:610:1:40cc::9164:b9e5]:11371
650 STREAM 2397 CLOSED 8 8.8.8.8:53 REASON=DONE
650 STREAM 2395 CLOSED 8 8.8.8.8:53 REASON=DONE
650 STREAM 2399 NEW 0 [2001:610:1:40cc::9164:b9e5]:11371 SOURCE_ADDR=127.0.0.1:54164 PURPOSE=USER
650 STREAM 2399 SENTCONNECT 10 [2001:610:1:40cc::9164:b9e5]:11371
650 STREAM 2398 REMAP 10 [2001:610:1:40cc::9164:b9e5]:11371 SOURCE=EXIT
650 STREAM 2398 SUCCEEDED 10 [2001:610:1:40cc::9164:b9e5]:11371
650 STREAM 2399 REMAP 10 [2001:610:1:40cc::9164:b9e5]:11371 SOURCE=EXIT
650 STREAM 2399 SUCCEEDED 10 [2001:610:1:40cc::9164:b9e5]:11371
650 STREAM 2398 CLOSED 10 [2001:610:1:40cc::9164:b9e5]:11371 REASON=END REMOTE_REASON=DONE
650 STREAM 2400 NEW 0 8.8.8.8:53 SOURCE_ADDR=127.0.0.1:54166 PURPOSE=USER
650 STREAM 2400 SENTCONNECT 8 8.8.8.8:53
650 STREAM 2399 CLOSED 10 [2001:610:1:40cc::9164:b9e5]:11371 REASON=END REMOTE_REASON=DONE
650 STREAM 2401 NEW 0 8.8.8.8:53 SOURCE_ADDR=127.0.0.1:54168 PURPOSE=USER
650 STREAM 2401 SENTCONNECT 8 8.8.8.8:53
650 STREAM 2400 REMAP 8 8.8.8.8:53 SOURCE=EXIT
650 STREAM 2400 SUCCEEDED 8 8.8.8.8:53
```
These were just streaming by. You can tell from the stream ID of 2400 that it had already made many, many streams.
```
$ netstat -aen|grep 9050|wc -l
260
```
"lsof|grep 9050" told me it was dirmngr making the connections.
I kill -9'ed dirmngr and the stream requests stopped.
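For anyone reproducing this kind of runaway-client diagnosis, a small script (a sketch, not part of any Tor tooling) can summarize a saved `setevents` log by counting how many NEW streams were opened per target:

```python
import re
from collections import Counter

# Match control-port STREAM events in the NEW state and capture the target.
STREAM_NEW = re.compile(r"^650 STREAM \d+ NEW \d+ (\S+)")

def count_new_streams(log_lines):
    """Return a Counter mapping target address -> number of NEW streams."""
    counts = Counter()
    for line in log_lines:
        m = STREAM_NEW.match(line)
        if m:
            counts[m.group(1)] += 1
    return counts

# Sample lines taken from the snippet above.
log = """\
650 STREAM 2398 NEW 0 [2001:610:1:40cc::9164:b9e5]:11371 SOURCE_ADDR=127.0.0.1:54162 PURPOSE=USER
650 STREAM 2399 NEW 0 [2001:610:1:40cc::9164:b9e5]:11371 SOURCE_ADDR=127.0.0.1:54164 PURPOSE=USER
650 STREAM 2400 NEW 0 8.8.8.8:53 SOURCE_ADDR=127.0.0.1:54166 PURPOSE=USER
650 STREAM 2401 NEW 0 8.8.8.8:53 SOURCE_ADDR=127.0.0.1:54168 PURPOSE=USER
""".splitlines()

print(count_new_streams(log).most_common())
```

A target that dominates the counts (here the keyserver port 11371 and 8.8.8.8:53) points straight at the misbehaving client.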
That can't have been good for the Tor network, especially if even a small pile of people have this buggy, berserk dirmngr hammering the network nonstop forever.

It seems like we might want to track down the poor decision-making inside dirmngr, for the good of our network.

Milestone: Tor: unspecified.

#30420: Should we recommend that relay operators turn on tcp bbr?
https://gitlab.torproject.org/legacy/trac/-/issues/30420
Reported by Roger Dingledine; updated 2020-06-13T15:41:22Z

The internet seems to have a growing number of howtos for switching your kernel to use the "bbr" congestion control mode of TCP:
https://github.com/google/bbr
https://en.wikipedia.org/wiki/TCP_congestion_control#TCP_BBR
Thought 1: doing an experiment where various fractions of Tor relays switch to this congestion control mode would be neat. Maybe it's the sort of thing that Shadow could help with, since switching the real Tor network is both cumbersome and dangerous.
(Though, since Shadow builds its own tcp implementation, it would need to have an implementation of the bbr variation in order to do a test with it. And it would need to have realistic *non* Tor background flows to test the comparison. What a great use case for driving forward Shadow innovation to be able to capture this test. Cc'ing Rob.)
Thought 2: If God wanted us to be using TCP BBR, we'd be using it by default already. And we're not, so we should learn why that is. For example, the Wikipedia page indicates that it's not good at fairness in some situations -- and since Tor relays are often guests on their network, we might not want to give people more reasons to get angry at them.

Milestone: Tor: unspecified.

#28969: Onion Service v3 connection status update event
https://gitlab.torproject.org/legacy/trac/-/issues/28969
Reported by rl1987; updated 2020-06-13T15:36:06Z

The Tor control protocol should allow the user to subscribe to event(s) that report the timing/success/failure/error conditions of every step described in chapters 3 and 4 of rend-spec-v3.txt.
We need to do some design work to figure out how exactly this should work and what exact info should be transmitted to the controller.

Milestone: Tor: unspecified.

#28968: Onion Service v2 connection status update event
https://gitlab.torproject.org/legacy/trac/-/issues/28968
Reported by rl1987; updated 2020-06-13T15:36:06Z

The Tor control protocol should allow the user to subscribe to event(s) that report the timing/success/failure/error conditions of every step described in sections 1.6-1.12 of rend-spec-v2.txt.
We need to do some design work to figure out how exactly this should work and what exact info should be transmitted to the controller.

Milestone: Tor: unspecified.

#28967: Tor control command to connect to Onion Service
https://gitlab.torproject.org/legacy/trac/-/issues/28967
Reported by rl1987; updated 2020-06-13T15:36:06Z

Should support v2 and v3 onion services.

Milestone: Tor: unspecified.

#28860: Increased DNS failure rate when using ServerDNSResolvConfFile with tor 0.3.4.9 (as opposed to 0.3.3.x)
https://gitlab.torproject.org/legacy/trac/-/issues/28860
Reported by nusenu; updated 2020-06-13T15:35:44Z

A major tor exit relay operator reports that
after upgrading from 0.3.3.x to 0.3.4.9 his DNS failure rate significantly increased.

He also reported that he is observing DNS issues only when using ServerDNSResolvConfFile, and no problems when not using that config option. Using that option worked fine on tor 0.3.3.x.
With ServerDNSResolvConfFile option on 0.3.4.9:
```
Dec 13 02:39:52.000 [notice] eventdns: Nameserver 8.8.8.8:53 is back up
Dec 13 02:39:53.000 [warn] eventdns: All nameservers have failed
```

Milestone: Tor: unspecified.

#28841: Write tool for onion service health assessment
https://gitlab.torproject.org/legacy/trac/-/issues/28841
Reported by George Kadianakis; updated 2020-06-13T15:35:37Z

We've been getting lots of reports about bad reachability of onion services (e.g. #28730) and in particular the v3 ones.
We need a tool that we can use to evaluate and monitor the health of onion services. We should use it to verify how reachable and stable onions are, and also as a benchmark for how their stability changes over time.
A relevant ticket here is #13209 which we can leverage in the future.
One way to write such a tool is to provide it with an onion service; the tool fetches its descriptor from every HSDir, then introduces itself to all the intro points, and makes sure that rendezvous can occur. Then it monitors this over time to find issues with reachability.

Milestone: Tor: unspecified. Assignee: David Goulet <dgoulet@torproject.org>.

#26930: Potential thundering herd HS upload once a day with hsv3
https://gitlab.torproject.org/legacy/trac/-/issues/26930
Reported by George Kadianakis; updated 2020-06-13T15:30:05Z

Here is a potential bug we found with dgoulet at HOPE.
In `set_rotation_time()` we set the rotation time of the next descriptor to the next midnight UTC with no randomization at all:
```
  service->state.next_rotation_time =
    sr_state_get_start_time_of_current_protocol_run() +
    sr_state_get_protocol_run_duration();
```
After descriptors get rotated, a new descriptor is built and uploaded, so it might be the case that all hsv3s upload that new descriptor at the same time at midnight UTC.
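A minimal sketch of one possible fix: add a random offset so services don't all rotate exactly at midnight UTC. The function name and the one-hour jitter window below are hypothetical, not tor's actual code:

```python
import random

def next_rotation_time(protocol_run_start, protocol_run_duration,
                       max_jitter=3600, rng=random):
    """Next rotation = start of the next protocol run, plus up to an
    hour of random delay so services don't all rotate at the same
    instant (hypothetical jitter window)."""
    base = protocol_run_start + protocol_run_duration
    return base + rng.randint(0, max_jitter)

# Example: a run starting at t=0 lasting 24h rotates somewhere in the
# hour after the next midnight.
t = next_rotation_time(0, 86400)
assert 86400 <= t <= 86400 + 3600
```

The jitter spreads the subsequent re-upload over the hour after midnight instead of concentrating it at one instant.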
We should investigate more.

Milestone: Tor: 0.3.5.x-final.

#26769: We should make HSv3 desc upload less frequent
https://gitlab.torproject.org/legacy/trac/-/issues/26769
Reported by George Kadianakis; updated 2020-06-13T15:29:32Z

Without checking the source code right now, HSDirs are supposed to cache HS descriptors for the inscribed lifetime (3 hours), and HSv3s are supposed to upload descriptors at a random time between 1 and 2 hours (see `HS_SERVICE_NEXT_UPLOAD_TIME_MIN`).
This makes HSv3s upload descriptors more frequently than needed. For example, we could increase this to upload descriptors between 2 and 2.9 hours, to make HSv3s less intense on the network.
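As a sketch, the change amounts to picking the next upload time uniformly from a later, wider window. The constants below mirror the 2 to 2.9 hour suggestion; they are illustrative, not tor's actual values:

```python
import random

# Hypothetical replacements for HS_SERVICE_NEXT_UPLOAD_TIME_MIN/MAX,
# in seconds: upload between 2 hours and 2.9 hours from now.
UPLOAD_TIME_MIN = 7200    # 2.0 hours
UPLOAD_TIME_MAX = 10440   # 2.9 hours

def next_upload_delay(rng=random):
    """Seconds until the next descriptor upload, chosen uniformly."""
    return rng.randint(UPLOAD_TIME_MIN, UPLOAD_TIME_MAX)

delay = next_upload_delay()
assert UPLOAD_TIME_MIN <= delay <= UPLOAD_TIME_MAX
```

Keeping the maximum just under the 3-hour HSDir cache lifetime leaves some margin for upload latency.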
Someone should double-check the above logic, make sure it won't cause issues, and implement it.

Milestone: Tor: unspecified.

#26691: add 'working DNS' to the list of mandatory requirements for the 'exit' flag
https://gitlab.torproject.org/legacy/trac/-/issues/26691
Reported by nusenu; updated 2020-06-13T15:27:42Z

Current requirements for the exit flag as per the spec:
https://gitweb.torproject.org/torspec.git/tree/dir-spec.txt#n2524
> "Exit" -- A router is called an 'Exit' iff it allows exits to at
> least one /8 address space on each of p...current requirements for the exit flag as per the spec:
https://gitweb.torproject.org/torspec.git/tree/dir-spec.txt#n2524
> "Exit" -- A router is called an 'Exit' iff it allows exits to at
> least one /8 address space on each of ports 80 and 443. (Up until
> Tor version 0.3.2, the flag was assigned if relays exit to at least
> two of the ports 80, 443, and 6667.)
Recently the requirements for the exit flag were changed to make 80+443 mandatory, because exits only allowing 80 OR 443 would introduce too much breakage. The same is true for exits not able to resolve any DNS requests: their usefulness as an exit is limited.
https://gitweb.torproject.org/torspec.git/tree/dir-spec.txt#n2280
> "Exit" if the router is more **useful** for building
> general-purpose exit circuits than for relay circuits.
So let's add the DNS requirement to the list of requirements for the exit flag.

The requirement should be automatically verified by dir auths by attempting DNS resolution for each exit candidate up to 5 times a day. If more than 2 resolution attempts fail, the 'working DNS' requirement is not met. After 3 successful attempts, no further attempts are necessary for that day.
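The decision rule above could be sketched as a small helper (hypothetical, not dirauth code): walk the day's resolution attempts in order, stop early after 3 successes, and fail once more than 2 attempts have failed:

```python
def has_working_dns(attempt_results):
    """attempt_results: booleans (True = successful resolution), at most
    5 per day, in the order the dirauth performed them. Returns True if
    the 'working DNS' requirement is met for that day."""
    successes = failures = 0
    for ok in attempt_results:
        if ok:
            successes += 1
            if successes >= 3:   # no further attempts needed today
                return True
        else:
            failures += 1
            if failures > 2:     # more than 2 failures: requirement not met
                return False
    return failures <= 2

assert has_working_dns([True, True, True])           # early success
assert not has_working_dns([False, False, False])    # too many failures
assert has_working_dns([False, False, True, True, True])
```

One open question this makes visible: how to treat relays with fewer than 5 attempts and no threshold reached (the sketch counts them as passing).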
Relays losing the exit flag have a chance to regain it when they are tested again the next day.
https://arthuredelstein.net/exits/

Milestone: Tor: unspecified.

#26094: increase minimal bandwidth requirements, update the manpage, relay guide and FAQ
https://gitlab.torproject.org/legacy/trac/-/issues/26094
Reported by nusenu; updated 2020-06-13T15:25:39Z
The man page has old minimal bandwidth requirements; let's update them to be consistent with the relay guide and the FAQ.
man page currently says: https://www.torproject.org/docs/tor-manual-dev.html.en#BandwidthRate
"
If you want to run a relay in the public network, this needs to be at the very least 75 KBytes for a relay (that is, 600 kbits) or 50 KBytes for a bridge (400 kbits) but of course, more is better; we recommend at least 250 KBytes (2 mbits) if possible.
"
Proposed version for the man page:
"
If you want to run a relay in the public network, this needs to be at the very least 8 MBit/s (Mbps) for a relay or 1 MBit/s for a bridge but of course, more is better; we recommend at least 16 MBit/s if possible.
"
FAQ values:
https://www.torproject.org/docs/faq.html.en#HowDoIDecide
relay guide:
https://trac.torproject.org/projects/tor/wiki/TorRelayGuide#BandwidthandConnections
context:
https://lists.torproject.org/pipermail/tor-dev/2018-February/012916.html

Milestone: Tor: unspecified.

#25170: explicitly mention email address to contact for rejected relays
https://gitlab.torproject.org/legacy/trac/-/issues/25170
Reported by cypherpunks; updated 2020-06-13T15:21:43Z

From the discussion on bad-relays@:
When we reject relays the operator gets to see something like this in their logs:
```
[warn] http status 400 ("Fingerprint is marked rejected -- please contact us?") response from dirserver 'IP:Port'. Please correct.
```
Let's change this to:

"Fingerprint is marked rejected, if you think this is a mistake please set a ContactInfo and send an email to bad-relays@lists.torproject.org mentioning your fingerprint(s)"

Milestone: Tor: 0.3.2.x-final. Assignee: David Goulet <dgoulet@torproject.org>.

#25110: Warn operators who set MyFamily that they must also set ContactInfo
https://gitlab.torproject.org/legacy/trac/-/issues/25110
Reported by teor; updated 2020-06-13T15:21:26Z

We added this advice to the man page in ~~0.3.2.9~~ 0.3.2.10, but a config warning could help, too.
The logic is:
If an operator sets MyFamily, and does not set ContactInfo:
* warn the operator that they should set ContactInfo

Milestone: Tor: 0.4.1.x-final.

#24526: Make it clear that multi-relay operators are expected to set a working ContactInfo and proper MyFamily
https://gitlab.torproject.org/legacy/trac/-/issues/24526
Reported by cypherpunks; updated 2020-06-13T15:18:25Z

As per discussion with David on bad-relays@ I'm opening this ticket as he requested.
We want to make it clear to tor relay operators that setting a proper ContactInfo (working email address) and MyFamily (fully mutual configuration) is strongly encouraged (required?) for relay operators that run more than 3 (?) tor instances. Relays showing up without such configuration likely raise a red flag and might get rejected from the network.
places to update:
* manual page:
* ContactInfo
* MyFamily
* relay documentation (#24497)

Milestone: Tor: 0.3.3.x-final. Assignee: Nick Mathewson.

#24014: Make exits check DNS periodically, and disable exit traffic if it fails
https://gitlab.torproject.org/legacy/trac/-/issues/24014
Reported by teor; updated 2020-06-13T15:16:24Z

Currently exits check once at startup, which doesn't detect overloaded DNS servers once the exit receives significant traffic.
See #21394 for details.

Since this is a new feature, it is not going to make it into 0.3.2.

Milestone: Tor: unspecified.

#20055: Remove relays that fail to rotate onion keys from the consensus
https://gitlab.torproject.org/legacy/trac/-/issues/20055
Reported by teor; updated 2020-06-13T15:04:23Z

On #7164, a cypherpunks notes that ~40 relays fail to rotate their onion keys. This should be addressed by identifying these relays, and adding them to the DirAuths' AuthDirInvalid or AuthDirReject lists.
First, we need to update torspec/dir-spec.txt to say that relays SHOULD rotate their onion keys every 7 days, and MUST rotate them every N days. (I suggest 14 or 28.)
Then we can modify DocTor to check for relays in the consensus that have had the same onion key for N days. (I think DocTor is the right place for this check.)
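A DocTor-style check could be sketched like this (a hypothetical data layout, not DocTor's real model): keep a persistent record of when each relay's current onion key was first observed, and flag relays whose key is older than N days:

```python
from datetime import date, timedelta

def stale_onion_keys(current_keys, today, max_age_days=28):
    """current_keys: dict mapping fingerprint -> date its current onion
    key was first observed in the consensus (a persistent store the
    checker would update on every consensus). Returns fingerprints
    whose key has been in use for more than max_age_days."""
    cutoff = today - timedelta(days=max_age_days)
    return sorted(fp for fp, first_seen in current_keys.items()
                  if first_seen < cutoff)

relays = {
    "AAAA": date(2016, 1, 1),   # never rotated: flagged
    "BBBB": date(2016, 6, 20),  # rotated recently: fine
}
print(stale_onion_keys(relays, today=date(2016, 7, 1)))
```

As the ticket notes, this catches a key that never changes (the read-only key file case), but not a relay that cycles between repeated keys.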
This won't catch cases where relays repeat onion keys, but it will suffice to catch the most obvious misconfiguration: a read-only onion key file.

Milestone: Tor: unspecified.

#20969: Detect relays that don't update their onion keys every 7 days.
https://gitlab.torproject.org/legacy/trac/-/issues/20969
Reported by David Goulet <dgoulet@torproject.org>; updated 2020-06-13T15:04:23Z

This is related to #20055, which would be an important thing to monitor for the health and security of the network.
There are multiple things here that can be or should be checked.
The `onion-key` field is an RSA key so DocTor will need to keep a persistent database of those over time (only used for TAP handshake).
The `ntor-onion-key` field also can be monitored the same as the RSA key.
If the `ntor-onion-key-crosscert` field is present, you'll get a timestamp for free in the certificate which should have the `exp_field` set to the last published time + 7 days.
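The crosscert consistency check from the last point could be sketched as follows (a sketch assuming the certificate's expiration and the descriptor's published time have already been parsed; the exact comparison tor implementations use may differ):

```python
from datetime import datetime, timedelta

def crosscert_consistent(published, exp_field, rotation_days=7):
    """Check that an ntor-onion-key-crosscert expires no later than the
    descriptor's published time plus the rotation period, i.e. the key
    is not being kept alive past its expected lifetime."""
    return exp_field <= published + timedelta(days=rotation_days)

pub = datetime(2020, 6, 1, 12, 0)
assert crosscert_consistent(pub, pub + timedelta(days=7))
assert not crosscert_consistent(pub, pub + timedelta(days=10))
```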
In any case, a router SHOULD NOT keep either a TAP or ntor onion key for _more_ than 7 days, as this is hardcoded in Tor. If they do, it could be another implementation, but finding them would be good so we can warn them and ask them to fix it. Or better, we could also detect bugs in tor implementations that keep those keys for longer.

Assignee: Georg Koppen.

#19610: IPv6-only clients fetch microdescriptors from a small number of IPv6 fallbacks
https://gitlab.torproject.org/legacy/trac/-/issues/19610
Reported by teor; updated 2020-06-13T14:59:22Z

When an IPv6-only client bootstraps using microdescriptors (#19608), it fetches the microdescriptor consensus from an IPv6 fallback, but the microdescriptor consensus has no IPv6 addresses.
So it falls back to the fallback directories, fetching ~7500/500 = 15 sets of descriptors from 15 of the 25 IPv6 fallbacks.
We should improve this behaviour somehow, to avoid overloading the fallbacks. One simple way of doing this is selecting 200 fallbacks for 0.2.9 in #18828.
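The arithmetic behind the estimate above, parameterized with the ticket's numbers (~7500 microdescriptors, 500 per request, 25 IPv6 fallbacks):

```python
import math

def bootstrap_fetch_load(total_mds=7500, mds_per_request=500,
                         ipv6_fallbacks=25):
    """Estimate how many descriptor fetches a bootstrapping IPv6-only
    client makes, and how many of the IPv6 fallbacks it touches."""
    fetches = math.ceil(total_mds / mds_per_request)
    return fetches, min(fetches, ipv6_fallbacks)

fetches, fallbacks_used = bootstrap_fetch_load()
assert (fetches, fallbacks_used) == (15, 15)
```

With 200 fallbacks instead of 25, the same 15 fetches would be spread across a much larger pool, so each fallback sees a smaller share of bootstrap load.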
It's worth noting that this extra load only happens on bootstrap, when there are no cached microdescriptors. If an IPv6-only client has any IPv6 microdescriptors that match the current consensus, it will use those relays instead.

Milestone: Tor: 0.3.3.x-final. Assignee: teor.

#19068: Write and run a clique reachability test.
https://gitlab.torproject.org/legacy/trac/-/issues/19068
Reported by Yawning Angel; updated 2020-06-13T14:57:29Z

It would be useful to know what the full inter-relay connectivity graph looks like, and how far it differs from the "every relay can always reach every other relay" ideal.
https://www.sba-research.org/wp-content/uploads/publications/NavigaTor_preprint.pdf
This should be something doable with stem, and ideally we can run it periodically/automatically, and use it to do things like reject relays that have extremely poor connectivity.

Milestone: Tor: unspecified.
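The analysis step of such a test might be sketched as follows (assuming pairwise probe results have already been collected, e.g. with stem; the data layout is hypothetical): compute each relay's reachability fraction and flag poorly connected ones.

```python
def poorly_connected(reachable, threshold=0.9):
    """reachable: dict mapping relay fingerprint -> set of fingerprints
    it was able to build a connection to. Returns relays that reached
    fewer than `threshold` of the other relays."""
    relays = set(reachable)
    flagged = []
    for fp in sorted(relays):
        others = relays - {fp}
        if not others:
            continue
        frac = len(reachable[fp] & others) / len(others)
        if frac < threshold:
            flagged.append(fp)
    return flagged

probes = {
    "A": {"B", "C"},   # reaches everyone
    "B": {"A", "C"},
    "C": set(),        # reaches no one: flagged
}
assert poorly_connected(probes) == ["C"]
```

The hard part is collecting `reachable` honestly (a relay may be reachable from some vantage points and not others), which is what the NavigaTor paper above explores.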