Trac issues: https://gitlab.torproject.org/legacy/trac/-/issues

https://gitlab.torproject.org/legacy/trac/-/issues/1102
Queuing v3 signature for next consensus, an hour later?
Roger Dingledine, 2020-06-13T14:02:42Z

On moria1, which I started at
Sep 21 01:51:04.434 (after some parts of the consensus generation were
supposed to start)
Sep 21 01:51:47.809 [notice] Uploaded a vote to dirserver 128.31.0.34:9031
Sep 21 01:51:47.832 [notice] Uploaded a vote to dirserver 216.224.124.114:9030
Sep 21 01:51:47.833 [notice] Uploaded a vote to dirserver 208.83.223.34:443
Sep 21 01:51:48.045 [notice] Uploaded a vote to dirserver 86.59.21.38:80
Sep 21 01:51:48.311 [notice] Uploaded a vote to dirserver 194.109.206.212:80
Sep 21 01:51:49.618 [notice] Uploaded a vote to dirserver 213.73.91.31:80
Sep 21 01:51:49.662 [notice] Uploaded a vote to dirserver 80.190.246.100:80
...
Sep 21 01:52:31.466 [notice] Time to fetch any votes that we're missing.
Sep 21 01:52:31.466 [notice] We're missing votes from 6 authorities. Asking every other authority for a copy.
...
Sep 21 01:55:01.379 [notice] Time to compute a consensus.
Sep 21 01:55:01.586 [notice] Consensus computed; uploading signature(s)
Sep 21 01:55:01.587 [notice] Signature(s) posted.
Sep 21 01:55:01.611 [notice] Got a signature from 128.31.0.34. Adding it to the pending consensus.
Sep 21 01:55:01.612 [notice] Uploaded signature(s) to dirserver 128.31.0.34:9031
Sep 21 01:55:01.763 [notice] Uploaded signature(s) to dirserver 216.224.124.114:9030
Sep 21 01:55:01.770 [notice] Uploaded signature(s) to dirserver 208.83.223.34:443
Sep 21 01:55:01.846 [notice] Uploaded signature(s) to dirserver 86.59.21.38:80
Sep 21 01:55:01.854 [notice] Got a signature from 86.59.21.38. Adding it to the pending consensus.
Sep 21 01:55:01.930 [notice] Uploaded signature(s) to dirserver 194.109.206.212:80
Sep 21 01:55:01.934 [notice] Got a signature from 194.109.206.212. Adding it to the pending consensus.
Sep 21 01:55:02.827 [notice] Got a signature from 208.83.223.34. Adding it to the pending consensus.
Sep 21 01:55:02.869 [notice] Got a signature from 216.224.124.114. Adding it to the pending consensus.
Sep 21 01:55:05.121 [notice] Got a signature from 213.73.91.31. Adding it to the pending consensus.
Sep 21 01:55:05.675 [notice] Uploaded signature(s) to dirserver 213.73.91.31:80
Sep 21 01:55:08.879 [notice] Got a signature from 80.190.246.100. Adding it to the pending consensus.
Sep 21 01:55:09.307 [notice] Uploaded signature(s) to dirserver 80.190.246.100:80
Sep 21 01:57:31.840 [notice] Time to fetch any signatures that we're missing.
Sep 21 02:00:01.204 [notice] Time to publish the consensus and discard old votes
Sep 21 02:00:01.231 [notice] Choosing expected valid-after time as 2009-09-21 07:00:00: consensus_set=1, interval=3600
Sep 21 02:00:01.300 [notice] Consensus published.
Sep 21 02:00:01.301 [notice] Choosing expected valid-after time as 2009-09-21 07:00:00: consensus_set=1, interval=3600
Sep 21 02:00:09.474 [notice] Got a signature from 38.229.70.2. Queuing it for the next consensus.
It's that last line that concerns me. Queuing for the next consensus that's 59 minutes
and 50 seconds from now? Shouldn't we either be adding it to the current consensus even
though it's late, or discarding it because it's late?
(Note that this isn't from an authority that moria1 recognizes.)
[Automatically added by flyspray2trac: Operating System: All]
Tor: unspecified

https://gitlab.torproject.org/legacy/trac/-/issues/1116
'Stable' flag assignment inconsistent
Tom Lowenthal, 2020-06-13T14:02:47Z

Looking at a consensus document [though I used torstatus.all.de for ease of sorting data], it seems that the 'Stable' flag
is not being consistently assigned.
According to the v3 directory specification at https://git.torproject.org/checkout/tor/master/doc/spec/dir-spec.txt ,
routers with a weighted MTBF more than either the median or seven days should be marked stable, and MTBF data more
than a month old shouldn't be that relevant when assigning the flag. Since the median uptime is about 3 days, one should
roughly expect any router with more than 30 days of uptime (and which is still valid) to have the Stable flag.
However, when relays are sorted in order of uptime, several apparently long-running routers do not have the flag.
Since this data is liable to change as relays go up and down, here are some routers noted as not 'Stable' at the time of
writing. These routers have uptimes of more than a month, so their (correctly) weighted MTBF should certainly be more than
a week, and more than the median of about three days.
wie6ud6be - 148d
anonymde - 112d
torpfaffenederorg - 110d
rentalsponge - 70d
xhyG5r96QGlRqL - 57d
niugnip - 56d
oeiwuqej - 49d
gremlin - 42d
editingconfigishard - 39d
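The rule quoted from dir-spec above can be sketched in a few lines. This is an illustrative Python sketch of the stated criterion (weighted MTBF at least the median, or at least seven days), not Tor's actual implementation, and `assign_stable` is a hypothetical helper name:

```python
# Sketch of the Stable-flag rule described in the ticket: a router is
# Stable if its weighted MTBF is >= the median across routers, or >= 7 days.
WEEK = 7 * 24 * 3600

def assign_stable(mtbfs):
    """mtbfs: dict of nickname -> weighted MTBF in seconds.
    Returns dict of nickname -> whether it should get the Stable flag."""
    ordered = sorted(mtbfs.values())
    median = ordered[len(ordered) // 2]
    return {nick: (mtbf >= median or mtbf >= WEEK)
            for nick, mtbf in mtbfs.items()}
```

Under this reading, a router with months of uptime (hence a weighted MTBF well over a week) should always come out Stable, which is why the list above looks like a bug.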
[Automatically added by flyspray2trac: Operating System: All]
Tor: unspecified

https://gitlab.torproject.org/legacy/trac/-/issues/1238
Exit flag can be assigned to nodes that don't really exit
Sebastian Hahn, 2020-06-13T14:03:42Z

The router b0red is flagged as Exit, even though its exit policy doesn't allow any exits.
Discovered by "dun" on #tor.
This is currently part of the consensus:
```
r b0red WCi6nB/t0u9ZtGBcrrWFgpXdjlg w+3Dl7l2fnUc0JhSMLchCL7RcjU 2010-02-02 00:21:48 80.190.250.90 443 80
s Exit Fast Guard HSDir Named Running Stable V2Dir Valid
v Tor 0.2.1.20
w Bandwidth=621
p reject 1-65535
```
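The "p reject 1-65535" line above is a policy summary that rejects every port, so an Exit-style check over it should fail. The sketch below assumes the commonly cited dir-spec criterion (exits allowed to at least two of ports 80, 443, and 6667; the address-space condition is ignored here) and is illustrative only, with hypothetical helper names, not Tor's code:

```python
# Parse a consensus "p" policy-summary line like "p reject 1-65535"
# or "p accept 80,443" into (kind, list of (lo, hi) port ranges).
def parse_policy_summary(line):
    _, kind, ports = line.split(None, 2)
    ranges = []
    for part in ports.split(","):
        lo, _, hi = part.partition("-")
        ranges.append((int(lo), int(hi or lo)))
    return kind, ranges

def allows_port(summary, port):
    kind, ranges = summary
    matched = any(lo <= port <= hi for lo, hi in ranges)
    return matched if kind == "accept" else not matched

def looks_like_exit(summary):
    # Assumed rule: exit allowed to at least two of 80, 443, 6667.
    return sum(allows_port(summary, p) for p in (80, 443, 6667)) >= 2
```

Against b0red's summary this check returns False, so the Exit flag in the consensus entry above contradicts the policy.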
descriptor:
```
@downloaded-at 2010-01-31 23:16:54
@source "194.109.206.212"
router b0red 80.190.250.90 443 0 80
platform Tor 0.2.1.20 on Linux i686
opt protocols Link 1 2 Circuit 1
published 2010-01-31 12:20:43
opt fingerprint 5828 BA9C 1FED D2EF 59B4 605C AEB5 8582 95DD 8E58
uptime 5097747
bandwidth 5242880 10485760 261098
opt extra-info-digest 535CE872B386F71E9DEA356B10E63E9D83789F57
onion-key
-----BEGIN RSA PUBLIC KEY-----
MIGJAoGBAM2wCZqUMEgPDdEsVrW1XfHrvqmOT1KYDMupz7h+DA5b56VMPOIyOG57
hKGliyW5gE7B/Qtt5EtasScqAFM+kV9BVXWVshFEF4tu2kWdFS8E4XKVks0NbTUU
2H/l0W/H2KdMy1bUuWyd7s1ftcuodb04Na3U/DS0t26Ta1kADWLZAgMBAAE=
-----END RSA PUBLIC KEY-----
signing-key
-----BEGIN RSA PUBLIC KEY-----
MIGJAoGBANB7P5x+7SON1dd2RkuqjNZaPsSPKoGKIOuq1IwSNDJR8+Y7T7jijgWe
ZKzvieP82XK1eDxKTdXCJbWR1X+V5a5XExt8RNszeslK02bC+Q4wTUtlM7n3319Q
UQrLTp++dVLa0LuNvlbux39tqAqriyn0hWI2JVEbkrp32N4l28SFAgMBAAE=
-----END RSA PUBLIC KEY-----
opt hidden-service-dir
opt allow-single-hop-exits
contact xxoes <xxoes at b0red.de>
reject 0.0.0.0/8:*
reject 169.254.0.0/16:*
reject 127.0.0.0/8:*
reject 192.168.0.0/16:*
reject 10.0.0.0/8:*
reject 172.16.0.0/12:*
reject 80.190.250.90:*
reject *:1-65534
reject *:65535
accept *:*
router-signature
-----BEGIN SIGNATURE-----
SVmtJeKcTUVyaZO8PfKtd0E1yQUR+TffgNo5AAgPOGLdjqmbIpFA2RqsfFqXK2Re
PQ34TxbgMKGxfZKDVXAfeQFVVQgFny8KqAlzDfytFUxOGvdcthHsfg/FJwbPneNU
eiNdn4E+ug8JjOcAKJ7EdfhmIKaWRXAg2NKZKWbNnRQ=
-----END SIGNATURE-----
```

Tor: unspecified

https://gitlab.torproject.org/legacy/trac/-/issues/1291
Relays that aren't Valid never get Running
Roger Dingledine, 2020-06-13T14:04:01Z

We use router_is_active() for too many checks when directory authorities are
deciding how to handle relays that don't have the Valid flag.
Once upon a time, you could be missing a Valid flag and still get the Running
flag. That would cause clients to avoid using you except in circuit positions
specified in their 'AllowInvalidRelays' config option.
At present if we take away your Valid flag, we also necessarily take away your
Running flag.
We should sort out what we want to do. I think there is still a role for having
"dangerous" relays -- meaning you don't use them at the beginning or the end of your
path.
Maybe this means we should do away with the 'Valid' flag, and add a !badguard along
with !badexit?
[Automatically added by flyspray2trac: Operating System: All]
Tor: unspecified

https://gitlab.torproject.org/legacy/trac/-/issues/1690
Consensus Bandwidth Lacks Indication of Type
Damian Johnson, 2020-06-13T14:05:08Z

On the client side there currently isn't a way of telling what type of measurement was used for the bandwidth value. For instance if it reads "w Bandwidth=65700" there's no way to definitively tell if this is observed, measured, or weighted measured.

Tor: 0.2.4.x-final

https://gitlab.torproject.org/legacy/trac/-/issues/2282
Publish router descriptors rejected by the authorities or omitted from the consensus
Robert Ransom, 2020-06-13T14:07:47Z

Right now, if a relay is dropped from the consensus, or its descriptor is rejected outright by the directory authorities, we won't find out that it has happened unless someone notices that their relay isn't working and tells us, and we can't find out why it happened unless we read the directory authorities' log files.
The directory authorities should:
* archive _all_ descriptors that are published to them, even if they are rejected or not included in the consensus;
* if a descriptor is rejected, record the reason in that archive; and
* if a relay is omitted from the consensus, record the reason in the archive.
The directory authority operators should:
* examine a sample of the descriptors that are not included in the consensus, for whatever reason;
* if the descriptors in the sample do not contain particularly sensitive information, begin publishing these otherwise unpublished descriptors.
Having this information available would make it easier to find relays that were disabled by #2204 and inform their operators that they need to upgrade Tor, for example.

Tor: unspecified

https://gitlab.torproject.org/legacy/trac/-/issues/2473
Develop a design to support multiple bridge authorities
Roger Dingledine, 2020-06-13T14:08:30Z

The main thing blocking multiple bridge directory authorities right now is that we don't have a design for how it would work. For the normal directory authority design, we want all of them to know about all relays. But for bridge authorities, that would defeat the purpose. So we want some algorithms for distributing bridges over authorities, such that bridge users know where to go to look up a given bridge (probably as a function of its identity fingerprint). Perhaps the algorithm should provide stable answers even as we change the set of bridge authorities, and for clients and bridges running a variety of Tor versions. More generally, we need to figure out what functionality we want and what security properties we should shoot for.
Somebody should start with a proposal, and go from there.

Tor: unspecified

https://gitlab.torproject.org/legacy/trac/-/issues/2664
DoS and failure resistance improvements
Mike Perry, 2020-06-13T14:09:04Z

We just had a near-catastrophe today when an IPv6 relay descriptor took out all of the Tor directory authorities. It took us ~10hrs to correct this issue. The maximum we had before the network breaks for everyone is 28hrs. We need to consider implementing some procedures to both reduce the amount of turnaround time it takes to diagnose and fix cases like this, and also enhance the network's ability to function if we can't bring the authorities back online within 28hrs.
This ticket is the parent ticket for a series of child tickets that have been created to remind us to create actual proposals and procedures.

Tor: unspecified

https://gitlab.torproject.org/legacy/trac/-/issues/2665
Create a dirauth DoS response procedure
Mike Perry, 2020-06-13T14:22:29Z

We have the technical ability right now to rapidly rotate up to n-1 of the directory authorities to new IP addresses and new intermediate keys, simply by updating the torrc files of dirauths. So long as at least one directory authority remains listening on its old IP address and is aware of the other directory authorities' new locations, it should still be possible to both produce a consensus and distribute it to new clients.
We should clearly document this procedure so we can execute it quickly if a majority of the Tor directory authorities fall victim to a DoS or compromise.
We should also consider altering client bundles to ship with a reduced consensus or descriptor set of ultra-high-uptime directory mirrors, so that in the future we can rotate all n directory authorities without issue.

Tor: unspecified

https://gitlab.torproject.org/legacy/trac/-/issues/2693
Design and implement improved algorithm for choosing consensus method
Nick Mathewson, 2020-06-13T14:09:14Z

Our current algorithm for picking a consensus method is, "Pick the highest method supported by more than 2/3 of the authorities currently voting." This can sometimes result in an insufficiently signed consensus. Instead, it should be something like, "Pick the highest method supported by more than 2/3 of the authorities currently voting, UNLESS the number of authorities supporting that method is less than the threshold needed to sign a valid consensus. In that case, pick the highest method supported by enough authorities to sign a valid consensus."
Alternatively, the algorithm could be something like, "Pick the highest method supported by enough authorities to sign a valid consensus", which I believe is mathematically identical to the above (more obviously safe) formulation.
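The proposed rule can be sketched directly from the wording above. This is an illustrative Python sketch, not Tor's implementation; `supported_by` (a map from method number to how many voting authorities support it), `n_voting`, and `threshold` (the number of signatures needed for a valid consensus) are assumed inputs:

```python
def pick_method(supported_by, n_voting, threshold):
    """Pick the highest method supported by > 2/3 of voters, UNLESS
    fewer than `threshold` voters support it; in that case fall back
    to the highest method that `threshold` voters do support."""
    two_thirds = [m for m, n in supported_by.items() if 3 * n > 2 * n_voting]
    signable = [m for m, n in supported_by.items() if n >= threshold]
    if two_thirds and max(two_thirds) in signable:
        return max(two_thirds)
    return max(signable) if signable else None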
This change would make some attacks harder for a hostile authority, and some attacks easier. It needs a design proposal and some analysis.

Tor: unspecified

https://gitlab.torproject.org/legacy/trac/-/issues/2715
Is rephist-calculated uptime the right metric for HSDir assignment?
Roger Dingledine, 2020-06-13T14:09:22Z

In #2709 we changed the HSDir flag to be based on each authority's opinion of the relay's uptime, rather than the relay's own opinion of its uptime.
Nick then asked if perhaps WFU would be a better measure. We should consider if there are smarter parameters to consider.
See also #2714.

Tor: unspecified

https://gitlab.torproject.org/legacy/trac/-/issues/3023
Tor directory authorities should not act as regular relays/hsdirs
Sebastian Hahn, 2020-06-13T14:10:06Z

In the past, it made sense to use directory authorities for all other network functions too, because they provided a significant contribution to the network's available bandwidth. Now that this isn't so anymore, and we're starting to see more and more bugs where the dirauths also act as relays, we should change that so the dirauths can focus on providing a consensus and bootstrapping functionality.

Tor: unspecified

https://gitlab.torproject.org/legacy/trac/-/issues/3029
We should save received documents before parsing them
Nick Mathewson, 2020-06-13T14:10:08Z

We should have an option to make Tor save every document it receives from the network before it tries to parse it. That way, if we crash while we're handling the document, we can know what crashed us.
Also, everything that stores an unparseable/unreadable thingy should be able to save more than one of them.

Tor: unspecified

https://gitlab.torproject.org/legacy/trac/-/issues/3241
Seeing lots of "crypto error while reading public key from string" on DA
Linus Nordberg <linus@torproject.org>, 2020-06-13T14:10:45Z

I have about 200 of these (in 20 hours) on my DA:
May 18 21:06:05.183 [warn] crypto error while reading public key from string: too long (in asn1 encoding routines:ASN1_get_object)
May 18 21:06:05.183 [warn] crypto error while reading public key from string: bad object header (in asn1 encoding routines:ASN1_CHECK_TLEN)
May 18 21:06:05.183 [warn] crypto error while reading public key from string: nested asn1 error (in asn1 encoding routines:ASN1_D2I_EX_PRIMITIVE)
May 18 21:06:05.183 [warn] crypto error while reading public key from string: nested asn1 error (in asn1 encoding routines:ASN1_TEMPLATE_NOEXP_D2I)
May 18 21:06:05.183 [warn] crypto error while reading public key from string: ASN1 lib (in PEM routines:PEM_ASN1_read_bio)
May 18 21:06:05.183 [warn] parse error: Couldn't parse public key.
May 18 21:06:05.183 [warn] Error tokenizing router descriptor.
May 18 21:06:05.183 [warn] Error reading extra-info: signature does not match.

Tor: unspecified

https://gitlab.torproject.org/legacy/trac/-/issues/4363
Dirauths should save a copy of a consensus that didn't get enough signatures
Sebastian Hahn, 2020-06-13T15:27:07Z

Basically, right now when a dirauth doesn't get the consensus it generated signed, we don't know what kind of consensus that dirauth wanted, because it isn't valid (not enough signatures). We could save a copy so we can investigate.

Tor: unspecified

https://gitlab.torproject.org/legacy/trac/-/issues/4477
Relays that are not directory authorities shouldn't load the approved-routers file
Linus Nordberg <linus@torproject.org>, 2020-06-13T14:14:40Z

dirserv_load_fingerprint_file() is called from do_hup() and from init_keys().
In do_hup() it's called if
```
authdir_mode_handles_descs(options, -1) != 0
```
In init_keys() it's called if
```
authdir_mode(options) != 0
```
This is inconsistent and at least one of them is wrong. I'm not quite sure exactly who needs the fingerprints.

Tor: unspecified

https://gitlab.torproject.org/legacy/trac/-/issues/4539
Make dir auths write to disk digests that don't match
Linus Nordberg <linus@torproject.org>, 2020-06-13T15:29:24Z

maatuska told me this the other day:
```
Nov 05 12:55:02.739 [warn] Unable to store signatures posted by 128.31.0.34: Mismatched digest.
```
And Sebastian had the idea that we should teach directory authorities to save mismatched digests to disk so that we can investigate them.
But before that, there was this log entry:
```
Nov 05 12:55:02.737 [warn] http status 400 ("Mismatched digest.") response after uploading signatures to dirserver '128.31.0.34:9131'. Please correct.
```
This makes me think that this might not be some local trouble on
maatuska but perhaps related to the communication between the
authorities. Broken TCP connection perhaps?
Adding this option should be easy enough for it to be worth it even if
we'll only find half a digest there or something so I say let's do it.
BTW, #1890 saw quite a few mismatched digests too.

Tor: unspecified

https://gitlab.torproject.org/legacy/trac/-/issues/4581
Dir auths should defend themselves from too many begindir requests per address
Roger Dingledine, 2020-06-13T14:15:17Z

#4580 would not have been so bad if we'd had a "you already sent me 5 begindir cells and I haven't even learned what you wanted to request on them yet. I am going to refuse the sixth one." feature.
Alas, the bug causes us to make requests over time, and that will cause us to have multiple OR conns open, so the defense cannot simply be "look at how many other streams we have open on this circuit". I guess some sort of map from IP address to count would do it?
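The "map from IP address to count" idea could look something like this. An illustrative sketch only, with hypothetical names; not Tor's code:

```python
MAX_PENDING = 5  # matches the "refuse the sixth one" example above

class BegindirLimiter:
    """Track begindir requests per IP whose payload we haven't seen yet."""

    def __init__(self, max_pending=MAX_PENDING):
        self.max_pending = max_pending
        self.pending = {}  # IP address -> count of open begindir requests

    def allow_begindir(self, addr):
        if self.pending.get(addr, 0) >= self.max_pending:
            return False  # too many outstanding requests: refuse
        self.pending[addr] = self.pending.get(addr, 0) + 1
        return True

    def request_completed(self, addr):
        # Called once we've learned what the request wanted (or it closed).
        if self.pending.get(addr, 0) > 0:
            self.pending[addr] -= 1
```

Because the map is keyed by address rather than by circuit, it would cover the multiple-OR-conn case described above.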
I put this as an 0.2.2 milestone, but if the patch is complex I'll probably not be excited about backporting it.

Tor: unspecified

https://gitlab.torproject.org/legacy/trac/-/issues/4626
Very high cpu usage for gabelmoo running with renegotiation-limiting code
Sebastian Hahn, 2020-06-13T14:15:36Z

Hey there,
gabelmoo is seeing almost full cpu utilization lately. I'm running openssl1 and libevent master. Traffic is at around 200KB/s, so not very much. Here's a profile for everything over 0.5%:
```
samples % image name app name symbol name
397332 26.8226 libc.so.6 libc.so.6 /home/karsten/debug/libc.so.6
210739 14.2263 libpthread.so.0 libpthread.so.0 __pthread_mutex_unlock_usercnt
157849 10.6559 libpthread.so.0 libpthread.so.0 pthread_mutex_lock
62969 4.2508 tor tor connection_handle_write
56998 3.8477 tor tor _openssl_locking_cb
44452 3.0008 tor tor assert_connection_ok
38146 2.5751 tor tor connection_bucket_write_limit
37917 2.5597 [vdso] (tgid:17627 range:0x7fffb85ff000-0x7fffb8600000) tor [vdso] (tgid:17627 range:0x7fffb85ff000-0x7fffb8600000)
32683 2.2063 tor tor flush_buf_tls
29224 1.9728 tor tor connection_is_rate_limited
28245 1.9067 tor tor connection_bucket_round_robin
25259 1.7052 tor tor tor_tls_get_error
22309 1.5060 tor tor tor_tls_write
21562 1.4556 tor tor assert_buf_ok
20642 1.3935 tor tor get_options_mutable
19521 1.3178 tor tor approx_time
19272 1.3010 tor tor _check_no_tls_errors
19108 1.2899 tor tor conn_write_callback
18312 1.2362 tor tor tor_addr_is_internal
14932 1.0080 tor tor tor_tls_get_forced_write_size
14237 0.9611 tor tor tor_gettimeofday_cache_clear
12501 0.8439 librt.so.1 librt.so.1 /home/karsten/debug/librt.so.1
11918 0.8045 tor tor tor_mutex_acquire
11907 0.8038 tor tor tor_mutex_release
11376 0.7680 tor tor connection_bucket_refill
9770 0.6595 tor tor connection_is_listener
9582 0.6468 tor tor connection_is_reading
9493 0.6408 tor tor tor_tls_state_changed_callback
9087 0.6134 tor tor connection_is_writing
8689 0.5866 tor tor TO_OR_CONN
7890 0.5326 tor tor connection_state_is_connecting
```

Tor: unspecified
Assignee: George Kadianakis

https://gitlab.torproject.org/legacy/trac/-/issues/4631
Idea to make consensus voting more resistant
Sebastian Hahn, 2020-06-13T15:51:31Z

This is an idea for how to improve the current situation, where sometimes a directory authority is slow to get its vote out to the other dirauths, and so the dirauths don't all have the same sets of votes. To simplify, I'm illustrating with an example of three dirauths:
At :50, all dirauths make their vote and start uploading. auth1 and auth2 get their vote to all auths, but auth3 doesn't: it cannot publish a vote to auth1 at all, and it takes more than 2.5 minutes to publish its vote to auth2. At :52:30, all auths try fetching the votes they're missing from the other auths, so auth1 asks auth2 for auth3's vote, and auth2 asks auth1 for auth3's vote. auth3 asks nobody, and nobody asks auth3. At this point, neither auth1 nor auth2 has auth3's vote. auth3 now (at, for example, :53:30) succeeds in publishing to auth2, so auth1 has votes from auth1 and auth2, while auth2 and auth3 have votes from auth1, auth2, and auth3. At :55 the auths try to make a consensus, but auth1 will end up with a different consensus than auth2 and auth3.
My idea to make this less of a problem would be that we only accept a vote that gets pushed to us for two minutes, and anything we get later than that is considered "too late" and will be dropped. At :52:30, we still go ahead and try to fetch all votes from all the other authorities, and if they have a vote we will accept it. We repeat that fetching of all votes that we don't have at :53:00, :53:30, :54:00 and :54:30. That way, a delayed publication of the original vote will not cause this kind of split, where the dirauths have different opinions on who has voted; only the dirauth that took more than 2 minutes to publish its vote to any of the other dirauths will be affected. There's still a race condition here, which is when a dirauth (within two minutes) only publishes to one other dirauth, and then that dirauth gets so slow it cannot get the vote to any of the other dirauths. But since it was fast enough to get the vote the first time, hopefully that's rather rare.
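The proposed timing could be sketched like this, with all times expressed as seconds after the :50 start of the voting period. Illustrative only; the names and the exact fetch schedule are taken from the proposal above, not from Tor's code:

```python
PUSH_WINDOW = 120  # pushed votes accepted for two minutes after :50
# Active fetches of missing votes at :52:30, :53:00, :53:30, :54:00, :54:30.
FETCH_TIMES = [150, 180, 210, 240, 270]

def accept_pushed_vote(t):
    """Accept a vote that another authority pushes to us at time t?"""
    return t <= PUSH_WINDOW

def is_fetch_time(t):
    """Do we actively fetch missing votes from the other auths at time t?"""
    return t in FETCH_TIMES
```

In the three-dirauth example, auth3's push at :53:30 (t = 210) would be dropped, but auth2's copy of the vote, obtained before the window closed, would still propagate via the fetches.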
Does this all sound viable? Am I overlooking something?
Update: This bug was introduced in Tor 0.2.0.5-alpha, with the v3 authority voting code.

Tor: 0.4.4.x-final
Assignee: teor

https://gitlab.torproject.org/legacy/trac/-/issues/4644
tor_addr_is_internal shows up in profiles -- can we fix that?
Robert Ransom, 2020-06-13T14:15:41Z

`tor_addr_is_internal` is in each of the three profiles on #4626. It shouldn't be there.

Tor: unspecified

https://gitlab.torproject.org/legacy/trac/-/issues/4826
Write proposal for improved consensus voting schedules
Roger Dingledine, 2020-06-13T14:16:30Z

Sebastian suggests that we revise the schedule for consensus voting such that there's a cutoff after which we discard votes from the original authority. So phase 1a is to publish your vote to every authority, phase 1b is to ask every authority for votes you're missing, and during phase 1b we won't accept phase 1a votes.
The goal here is to avoid consensus failures that occur when an authority uploads a vote during phase 1b, and some authorities end up thinking everybody knows it, yet some don't know it.

Tor: unspecified

https://gitlab.torproject.org/legacy/trac/-/issues/4872
When the valid consensus is old but still barely valid, are the descriptors referenced in it still valid?
Sebastian Hahn, 2020-06-13T14:16:43Z

If the answer is "no", then that might explain why the dirauths are now getting bombarded with traffic while we haven't had a valid consensus in > 18h. Also relevant to ideas for extending the validity of consensuses.

Tor: unspecified

https://gitlab.torproject.org/legacy/trac/-/issues/5992
META: Decentralize directory authorities as far as safely possible
Andrew Lewman, 2020-06-13T14:19:57Z

We currently rely on a handful of directory authorities and their operators to generate and maintain the consensus of the Tor network. They're also the default place to go for clients to bootstrap into the network. Some research has been started into replacing the individual directory authorities with anonymity-preserving distributed hash table (DHT) models.
Further this work, using simulators and/or private Tor networks, for handling future growth and expansion of the public Tor network.

Tor: very long term

https://gitlab.torproject.org/legacy/trac/-/issues/6716
Reject routerstatuses with orport==0
Nick Mathewson, 2020-06-13T14:22:06Z

These could only arrive in error, and might be an attempt to trigger #6690. Nobody legit can generate them now; we shouldn't start accepting them without more design, I think.

Tor: unspecified

https://gitlab.torproject.org/legacy/trac/-/issues/6773
DirServer lines should take more than one "orport="
Linus Nordberg <linus@torproject.org>, 2020-06-13T14:22:21Z

In order for clients and relays to be able to contact an authority
server over IPv6 we should expand the DirServer line to accept more
than one OR port.
Do we prefer "orport=ADDR0,ADDR1,..." or "orport=ADDR0 orport=ADDR1 ..."?
Or perhaps something completely different?
Note that an ADDR will be IP-ADDRESS ":" PORT-NUMBER rather than
today's PORT-NUMBER.
We need to
- add field(s) to trusted_dir_server_t
- fix the parsing in parse_dir_server_line(), probably by calling tor_addr_port_lookup().
We also need #6772 (Fall back to alternative OR port if the current
fails) to be implemented for this to be useful, f.ex. in
directory_post_to_dirservers().Tor: 0.2.8.x-finalhttps://gitlab.torproject.org/legacy/trac/-/issues/6777add config option to not rate limit authority dir conns2020-06-13T14:22:23ZRoger Dingledineadd config option to not rate limit authority dir connsDuring today's consensus fiasco, several authorities were hitting their configured bandwidth rates. In moria1's case, we were using the default 5MB/10MB, and we were basically sustaining 5MB/s of directory output for 6+ hours. Most things weren't finishing getting written -- including votes.
weasel suggested a feature where we allow dir conns to/from authorities to go above our bandwidth limits.
I was thinking we would implement it just by making connection_is_rate_limited() say "no" for them.
but weasel suggested that we count the bytes, and reduce them from our totals, but not limit the conns. That sounds worthwhile but more complex.
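weasel's "count the bytes but don't limit the conns" variant could be sketched roughly as below. Everything here is a simplified stand-in — `conn_t`, `bucket_t`, `is_authority_conn` and both helpers are invented for illustration, not Tor's actual `connection_t` or `connection_is_rate_limited()` code — and clamping the bucket at zero for authority traffic is one possible policy choice, not a statement of what Tor does.

```c
#include <assert.h>

typedef struct conn_t {
  int is_authority_conn;   /* dir conn to/from a directory authority? */
} conn_t;

typedef struct bucket_t {
  long tokens;             /* bytes we may still write this period */
} bucket_t;

/* How many bytes may this connection write right now?  Authority
 * connections are never throttled; everyone else is limited by the
 * remaining tokens. */
static long
bytes_allowed(const conn_t *conn, const bucket_t *bucket, long wanted)
{
  if (conn->is_authority_conn)
    return wanted;
  return wanted < bucket->tokens ? wanted : bucket->tokens;
}

/* Account for bytes written: authority bytes are still deducted from
 * the totals, but are not allowed to drive the bucket negative. */
static void
note_bytes_written(const conn_t *conn, bucket_t *bucket, long n)
{
  bucket->tokens -= n;
  if (conn->is_authority_conn && bucket->tokens < 0)
    bucket->tokens = 0;
}
```

The simpler approach discussed above would amount to just the `is_authority_conn` early return, without the accounting in `note_bytes_written()`.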
On the theory that we want this hack in rather than waiting forever for the elegant solution, I convinced weasel that he should be ok with the simpler approach.
Heck, maybe rather than making it a config option, we should just make it standard behavior for authorities.Tor: unspecifiedhttps://gitlab.torproject.org/legacy/trac/-/issues/6790Write proposal draft for directory mirrors to accept, aggregate and hand off descriptors to dirauths2020-06-13T14:22:28ZMike PerryWrite proposal draft for directory mirrors to accept, aggregate and hand off descriptors to dirauthsIn the event of DoS or braindead client behavior, directory authorities may need to rate limit or restrict connections. See #2665.
Under these conditions, it would be useful if directory mirrors could also accept relay descriptor data, aggregate it, and hand it off to the authorities after eliminating duplicates. This coupled with #572 should allow the dirauths to better handle sudden traffic spikes by rate limiting or firewalling, without degrading the network.
https://gitweb.torproject.org/torspec.git/blob/HEAD:/proposals/147-prevoting-opinions.txt has a related idea, but we may want a push method rather than a pull?Tor: unspecifiedhttps://gitlab.torproject.org/legacy/trac/-/issues/7126Multipath consensus integrity verification2020-06-13T14:23:39ZMike PerryMultipath consensus integrity verificationWe want to allow clients to use old consensuses safely without the directory authorities producing new ones. One of the problems with this is ensuring that directory mirrors don't game this time period to feed clients their favorite stale consensus that is still acceptable.
A related problem is "Can we do anything to mitigate malicious targeted consensus delivery in the event that a majority of dirauth signing keys are compromised?"
The common approach for this type of problem is multipath Perspectives-style key authentication. There are several ways we could authenticate the consensus documents in this model. For example, an append-only data structure such as a signed git repo could be created to store consensus hashes for all time. Tor clients could also be modified to store their own chain of observed consensus hashes in a file. In this way, potentially targeted users could drop their consensus hash history onto a USB key, mail it, relocate or otherwise bootstrap an alternate path to the git repo, and verify their connection was not compromised.
A more streamlined Tor-based solution is to extend current Tor directory protocols to allow the set of directory mirrors from #572 to be queried about the latest consensus time they have seen, and for the hash for that consensus time. Clients could then query k of these mirrors, determine the most recent consensus hash that all k mirrors agree on, and request that consensus document from the mirrors that have it. Such requests would be authenticated by the dir mirror identity keys, which would be stored in the Tor source code as part of #572.
This would require additional directory commands ("Tell me the timestamp on your most recent consensus" and "Tell me the hash of that consensus"), as well as some client logic.
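The k-mirror agreement step could look roughly like this sketch. The `mirror_reply_t` type and both helpers are inventions for illustration; real replies would carry actual consensus digests and be authenticated by the #572 mirror identity keys.

```c
#include <assert.h>
#include <string.h>

#define MAX_HIST 8

/* One mirror's answer to "what consensuses do you have?", newest first. */
typedef struct mirror_reply_t {
  int n;                        /* entries in the history */
  long ts[MAX_HIST];            /* consensus valid-after times */
  const char *hash[MAX_HIST];   /* consensus digests, hex */
} mirror_reply_t;

/* Does this mirror claim to hold <ts, hash>? */
static int
mirror_has(const mirror_reply_t *m, long ts, const char *hash)
{
  for (int i = 0; i < m->n; i++)
    if (m->ts[i] == ts && !strcmp(m->hash[i], hash))
      return 1;
  return 0;
}

/* Index into replies[0]'s history of the newest consensus that all k
 * queried mirrors agree on, or -1 if they agree on nothing. */
static int
newest_agreed(const mirror_reply_t *replies, int k)
{
  for (int i = 0; i < replies[0].n; i++) {
    int all = 1;
    for (int j = 1; j < k; j++) {
      if (!mirror_has(&replies[j], replies[0].ts[i], replies[0].hash[i])) {
        all = 0;
        break;
      }
    }
    if (all)
      return i;
  }
  return -1;
}
```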
The client logic is likely to be the complicated part. It's possible that the dirport commands could be added earlier, allowing us to experiment with various client approaches on the longer term.Tor: unspecifiedhttps://gitlab.torproject.org/legacy/trac/-/issues/7148Even better parameter voting protocol2020-06-13T14:23:46ZNick MathewsonEven better parameter voting protocolOur current parameter voting protocol is backwards in how many voters need to exist for a parameter before we can vote for it. Right now we accept the parameter into the consensus if it has a majority of all authorities, or at least 3 authorities. But that fails when most authorities are abstaining: 3 rogue authorities could force the value of an unset parameter to whatever they want.
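The acceptance rule described above — keep a parameter if a majority of all authorities, or at least 3 of them, voted a value for it — can be sketched as follows. This is a simplified model, not Tor's voting code, and the low-median value selection here is an assumption of the sketch. The "3 rogue authorities" problem is visible directly: 3 votes suffice even when 6 authorities abstain.

```c
#include <assert.h>
#include <stdlib.h>

static int
cmp_int(const void *a, const void *b)
{
  return *(const int *)a - *(const int *)b;
}

/* votes: the values voted for one parameter; n_votes of n_auths total
 * authorities voted.  Returns 1 and sets *out if the parameter makes it
 * into the consensus, 0 if it is left unset. */
static int
param_consensus(int *votes, int n_votes, int n_auths, int *out)
{
  /* accept iff n_votes >= 3, or n_votes is a majority of n_auths */
  if (n_votes < 3 && n_votes * 2 <= n_auths)
    return 0;
  qsort(votes, n_votes, sizeof(int), cmp_int);
  *out = votes[(n_votes - 1) / 2];   /* low-median of the votes */
  return 1;
}
```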
A stopgap solution (for which roger is writing a ticket) is for all authorities to vote on all parameters, and to have most/all authorities begin voting on any new parameter before we release software that looks for it.
But surely we can do better than that.
We need to write a little proposal for this before the little-proposal deadline to implement it in 0.2.4.Tor: unspecifiedhttps://gitlab.torproject.org/legacy/trac/-/issues/7193Tor's sybil protection doesn't consider IPv62020-06-13T14:24:05ZGeorge KadianakisTor's sybil protection doesn't consider IPv6Some bugs:
`get_possible_sybil_list()` doesn't consider IPv6 addresses at all.
~~`clear_status_flags_on_sybil()` doesn't clear `ipv6_addr` (and maybe more flags).~~ Obsoleted by consensus method 24, because it requires the Running flag for a router to be in the consensus.
Also, maybe we could add a `log_notice` or `log_info` reporting which relays, if any, were found to be part of a Sybil attack.
~~Finally (and this is a minor bug), in `get_possible_sybil_list()` we assume that `max_with_same_addr < max_with_same_addr_on_authority`, which is true in the current tor network, but maybe it shouldn't be an inherent property of the source code.~~ Obsoleted by #20960: max_with_same_addr_on_authority has been removed.Tor: 0.4.4.x-finalhttps://gitlab.torproject.org/legacy/trac/-/issues/7534AUTHDIR_NEWDESCS example2017-05-25T18:09:58ZDamian JohnsonAUTHDIR_NEWDESCS exampleHi directory authority operators. As mentioned on irc yesterday I need an example or two of AUTHDIR_NEWDESCS events for stem's unit tests (... and also to figure out how I should approach parsing them - #7533).
Would an authority operator mind providing me with a couple examples of AUTHDIR_NEWDESCS from their authority? Ideally it would be nice to have an example of the ACCEPTED, DROPPED, and REJECTED actions for the unit tests, though not important.
Thanks! -Damian
PS. Sorry about the vague component assignment. We don't have one specifically for the authority operators...Tor: unspecifiedDamian JohnsonDamian Johnsonhttps://gitlab.torproject.org/legacy/trac/-/issues/8163It is no longer deterministic which Sybils we omit2020-06-13T14:27:09ZRoger DingledineIt is no longer deterministic which Sybils we omitIt seems that each dir auth is voting for its favorite two relays, in the case of Sybils. The result is that none of them get listed in the consensus (as opposed to the "two of them" that our design says do).
I think the issue is in compare_routerinfo_by_ip_and_bw_().
Maybe it's here:
```
node_first = node_get_by_id(first->cache_info.identity_digest);
node_second = node_get_by_id(second->cache_info.identity_digest);
first_is_running = node_first && node_first->is_running;
second_is_running = node_second && node_second->is_running;
```
The "is_running" part here is suspicious -- if we cleared flags from some of them last time through the loop, does that change the order of picking them this time through the loop?
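One deterministic alternative, sketched here with invented simplified types (not Tor's `routerinfo_t`): compare only inputs that cannot change between passes — address, bandwidth, and, as a last resort, the identity digest — and leave `is_running` out entirely, so clearing flags on one pass through the loop cannot reorder the next pass.

```c
#include <assert.h>
#include <string.h>

#define DIGEST_LEN 20

typedef struct ri_t {
  unsigned addr;               /* IPv4 address, host order */
  unsigned bandwidth;
  char digest[DIGEST_LEN];     /* identity digest */
} ri_t;

/* qsort-style comparator over stable fields only */
static int
compare_ri_stable(const void *a_, const void *b_)
{
  const ri_t *a = a_, *b = b_;
  if (a->addr != b->addr)                  /* group by address */
    return a->addr < b->addr ? -1 : 1;
  if (a->bandwidth != b->bandwidth)        /* higher bandwidth sorts first */
    return a->bandwidth > b->bandwidth ? -1 : 1;
  return memcmp(a->digest, b->digest, DIGEST_LEN);  /* deterministic tie-break */
}
```

With a total order like this, every authority that sees the same descriptors picks the same "favorite two" per address, regardless of its local reachability view.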
(Also, the comments for the function don't mention comparing identity digests as the last resort. They probably should.)Tor: unspecifiedhttps://gitlab.torproject.org/legacy/trac/-/issues/8494Document MaxAdvertisedBandwidth in the bandwidth list spec2020-06-13T14:28:17ZTracDocument MaxAdvertisedBandwidth in the bandwidth list specI've set MaxAdvertisedBandwidth to 100 KB (though RelayBandwidthRate and RelayBandwidthBurst are set to 128 KB and 153 KB, respectively). Accordingly, the relay does not advertise any bandwidth higher than 100 KB. However, the consensus is reporting greater bandwidth:
`valid-after 2013-03-17 01:00:00`
`r PrivateJoker hWF85kNElIsLrCPNTiIkX39mwcg r5DGWEd9ufF4TFXJITuOhw+by6I 2013-03-16 19:18:56 107.197.196.79 443 80`
`s Fast Named Running Stable V2Dir Valid`
`v Tor 0.2.4.11-alpha`
`w Bandwidth=108`
`p reject 1-65535`
The bandwidth line from the descriptor looks like:
```
bandwidth 102400 156672 155648
```
My understanding is that clients use the consensus bandwidth measurement to weigh which paths to choose (correct me if I'm wrong).  If this is true, then the consensus should not report bandwidth greater than MaxAdvertisedBandwidth.  Perhaps the consensus should never show bandwidth greater than a relay's chosen RelayBandwidthRate?
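The clamping the reporter is asking for could be sketched like this. These helpers are hypothetical and the units are simplified; this is a description of the requested behavior, not of what Tor currently does. The inputs correspond to the descriptor's `bandwidth <rate> <burst> <observed>` line, where `<rate>` is already capped by MaxAdvertisedBandwidth.

```c
#include <assert.h>

static long
min_l(long a, long b)
{
  return a < b ? a : b;
}

/* The bandwidth a relay itself advertises: the smaller of its
 * (MaxAdvertisedBandwidth-capped) rate and its observed bandwidth. */
static long
advertised_bandwidth(long rate, long observed)
{
  return min_l(rate, observed);
}

/* Clamp a measured (bwauth) value so the consensus never credits a
 * relay with more than it advertises. */
static long
clamped_consensus_bw(long measured, long rate, long observed)
{
  return min_l(measured, advertised_bandwidth(rate, observed));
}
```

For the descriptor above (`bandwidth 102400 156672 155648`), any measured value above 102400 bytes/s would be clamped back down to the advertised 100 KB.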
**Trac**:
**Username**: alphawolfTor: 0.3.5.x-finaljugajugahttps://gitlab.torproject.org/legacy/trac/-/issues/8684bwauth files don't include opinions about Authorities2020-06-13T16:19:09ZRoger Dingledinebwauth files don't include opinions about AuthoritiesIt appears that moria1's bwauth doesn't provide an opinion about moria1, or any authority for that matter.
And no authorities provide Measured lines for turtles.
I suspect there's code in the bwauth to skip measuring authorities.
That's going to be bad now that we've turned on #8435.
The right fix might be to change the bwauths. But for now I'm filing as a Tor bug until we figure out where to fix it.https://gitlab.torproject.org/legacy/trac/-/issues/8688bwauths need to upgrade (to start measuring even non-Fast relays)2020-06-13T16:19:10ZRoger Dingledinebwauths need to upgrade (to start measuring even non-Fast relays)With the recent commit of #8435 to Tor, directory authorities will leave off the Fast flag from any non-measured relay.
I believe bwauths currently don't measure relays that don't have the Fast flag.
Bad cycle we're about to have here.
See https://trac.torproject.org/projects/tor/ticket/8273#comment:6 and comments below it for context. Aaron has a proposed patch.Aaron GibsonAaron Gibsonhttps://gitlab.torproject.org/legacy/trac/-/issues/9062Authorities should describe their bwauth version in their votes2020-06-13T14:29:43ZNick MathewsonAuthorities should describe their bwauth version in their votesRight now, there's not a great way to tell which authorities have upgraded their bwauths, which creates trouble as in the case of #8688. If we have future bwauth software report its version, and we have future Tor authorities check that version and report it in their networkstatus votes, then we'll not stumble into that situation again.Tor: unspecifiedhttps://gitlab.torproject.org/legacy/trac/-/issues/9290Use something other than "known relay" to decide on rate in connection_or_update_token_buckets_helper() on authorities2020-06-13T14:30:35ZNick MathewsonUse something other than "known relay" to decide on rate in connection_or_update_token_buckets_helper() on authoritiesOn #tor-dev, Beeps says:
```
13:18 < Beeps> connection_or_update_token_buckets_helper() will not limit speed
if relay knows desc. You can upldoad desc to any auth. Before
limit speed you need protect all auths or limit speed for them.
5 of them are victims for cheaters for now.
```
In other words, anybody can get the higher limit from an authority by uploading a descriptor with their ID, whether they're really a relay or not. That's annoying.
One fix would be to change the behavior of connection_or_digest_is_known_relay to require that the relay be present in the consensus. (Would this hurt bandwidth measurement?)Tor: unspecifiedhttps://gitlab.torproject.org/legacy/trac/-/issues/9368Turn static throttling on in the live network2020-06-13T16:19:12ZRoger DingledineTurn static throttling on in the live networkThe feature is all implemented, and it works as far as we know. Some static throttling of super-loud clients would help free up the network for the rest of the users.
There are three parts to this ticket:
A) Get the directory authorities to add the right consensus params. And also decide what numbers to use. I think "perconnbwrate=50000 perconnbwburst=10000000" (i.e. burst of 10MB and rate of 50KB/s) would do it.
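Item A's numbers can be sanity-checked with a toy per-connection token bucket. The struct and helpers here are inventions for illustration, not Tor's actual bucket code: seed the bucket with `perconnbwburst` tokens and refill at `perconnbwrate` tokens per second, so with the proposed values a fresh connection may write 10MB up front and then trickles at 50KB/s.

```c
#include <assert.h>

typedef struct perconn_bucket_t {
  long tokens;   /* bytes currently spendable */
  long rate;     /* refill rate, bytes/sec (perconnbwrate) */
  long burst;    /* bucket capacity, bytes (perconnbwburst) */
} perconn_bucket_t;

static void
bucket_init(perconn_bucket_t *b, long rate, long burst)
{
  b->rate = rate;
  b->burst = burst;
  b->tokens = burst;           /* new connections start with a full burst */
}

static void
bucket_refill(perconn_bucket_t *b, long elapsed_sec)
{
  b->tokens += b->rate * elapsed_sec;
  if (b->tokens > b->burst)    /* never exceed the burst cap */
    b->tokens = b->burst;
}

/* Try to spend n bytes; returns how many bytes are actually permitted. */
static long
bucket_spend(perconn_bucket_t *b, long n)
{
  long ok = n < b->tokens ? n : b->tokens;
  if (ok < 0)
    ok = 0;
  b->tokens -= ok;
  return ok;
}
```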
B) Before it can go live, we need to do something about the bwauthorities -- they suck down 64MB files from the fastest relays, and step A will throttle them, leading to confused results. The simplest hack I've thought of is to make them relays, and then they don't get throttled. (#9369)
C) Some way to measure if it's going right (general performance improves) or wrong (it's harming normal users). Ordinarily I'd be a big fan of getting all this infrastructure set up and doing an experiment, but that's going to take a year or more at this rate, and we could make a difference right now.Tor: unspecifiedhttps://gitlab.torproject.org/legacy/trac/-/issues/9476Completely drop support for Tor 0.2.2.x2020-07-31T12:47:30ZNick MathewsonCompletely drop support for Tor 0.2.2.xWe should remove 0.2.2.x from the recommended version list.
We should stop accepting Tor 0.2.2.x nodes in the network: that release series is completely unsupported.
Finally dropping 0.2.2.x will let us start deprecating things that we'd like to throw away, like the renegotiation-based handshake.Tor: unspecifiedhttps://gitlab.torproject.org/legacy/trac/-/issues/9775Authorities should report when they don't vote Running but some addresses are...2020-06-13T14:32:05ZRoger DingledineAuthorities should report when they don't vote Running but some addresses are still reachableWe withhold the Running flag if *any* of the relay's addresses is unreachable.
I just spent a while debugging 'trouble', where his IPv4 address was set up correctly, and moria1 kept logging
```
Sep 19 02:22:45.986 [info] dirserv_orconn_tls_done(): Found router
$67DE1CFEC8957833EAEE623F561BF57EB2D9CF2B~trouble at 5.9.125.198 to be
reachable at 5.9.125.198:443. Yay.
```
but his IPv6 address was port forwarding incorrectly.
The result was that 2 authorities voted Running (I assume they're the ones that don't have IPv6 support), and the rest voted not Running.
One option is that we (e.g. consensus-health) should paw through the votes each hour to look for relays that have that same pattern of Running votes so we can contact them.
Another option would be for the authority votes to add an annotation somewhere, like in their votes, saying "partially reachable" or some such. That approach has the benefit that we could automatically have a record over time of how big an issue this is.
(If it's a big issue, it might argue for working harder to put partially reachable relays into the consensus somehow.)Tor: unspecifiedhttps://gitlab.torproject.org/legacy/trac/-/issues/9954Replace broadcast voting protocol with something more robust2020-06-13T14:32:31ZNick MathewsonReplace broadcast voting protocol with something more robustWhile discussing #8244, Aniket Kate had some comments about our voting protocol:
>The only modification I would like to suggest is to replace your broadcast protocol in the voting round. It is not secure against what is called "dangerous chain of failures" in distributed computing research ; i.e., if one authority crash in per sub-phase (1A, 1B, ...), then at least one working (or correct) authority might have more votes than others.
>
>To explain it, I am attaching Lorenzo Alvisi's (UT-Austin) notes along with email. I thought those will be easy to understand than a research paper. In these notes,
> * Dangerous chain is explained on page 7
> * Two protocols that overcome this (possibly extremely unlikely situation) problem are available on page 12
> * I would encourage you to incorporate the early stopping protocol as, in absence of any failure, it completes in the exactly same manner as your current protocol. I think it will not add too much to your current broadcast code, but at the same time take care of gradual failures of directory authorities.
> * The protocol description does not mention signatures as they are defined for non-malicious setting. Nevertheless, it will be easy for me to include signatures to the description at appropriate places if you choose to use it.
I replied with:
>I'll check this out, but I'm not sure whether the change is worth it in this case. If I understand correctly, the failure mode here is no consensus is generated if crashes happen at exactly the wrong times, or sends votes to others at exactly the wrong times. But our protocol can tolerate up to 24/48 hours worth of non-generated consensuses. (Our usual approach when this happens has been "Just debug it".)
>
>I'll check out the complexity of the stepping protocol, though.
Still, more minds should think on this.
I'm investigating whether I have permission to post Lorenzo Alvisi's slides, or whether they're already online.Tor: unspecifiedhttps://gitlab.torproject.org/legacy/trac/-/issues/9982Use a better password-based KDF for controller passwords, authority identity ...2020-06-13T14:32:41ZNick MathewsonUse a better password-based KDF for controller passwords, authority identity key encryption, and moreWith the ed25519 key transition, we'll want to start bringing offline identity keys to regular relay operators (and ideally hidden service operators too somehow, if we can figure out a non-stupid way for it to interact with #8106).
As we do this, we'll want a better password-based KDF. Right now we have the very silly "NID_pbe_WithSHA1And3_Key_TripleDES_CBC" for protecting authority keys, and the very silly OpenPGP KDF for hashing controller passwords. Let's do something from the 21st century.
This is a bikeshed discussion. I nominate: "Derive keys with scrypt-jane, with salsa20/8 and SHA512."Tor: unspecifiedhttps://gitlab.torproject.org/legacy/trac/-/issues/10221Implement BGP malicious route checks before publishing descriptor in consensus2020-06-13T15:25:47ZTracImplement BGP malicious route checks before publishing descriptor in consensusAlternatively, treat as normal and simply flag the BGP route as malicious or not for the listed endpoints in a consensus.
This is in response to observed, repeated, malicious route jacking attacks for specific address ranges through monkey-in-the-middle attackers.
"Malicious route jacking" is explicitly mentioned here as distinct from anomalous route changes or advertisement behavior; it does not encompass benign incompetence affecting widespread route behavior of an indiscriminate nature.
See also:
http://www.renesys.com/2013/11/mitm-internet-hijacking/
http://www.renesys.com/2010/11/chinas-18-minute-mystery/
**Trac**:
**Username**: anonTor: unspecifiedhttps://gitlab.torproject.org/legacy/trac/-/issues/10542Bug when certificate expired: Generated a networkstatus consensus we couldn't...2020-06-13T14:33:41ZRoger DingledineBug when certificate expired: Generated a networkstatus consensus we couldn't parse.```
Jan 02 17:55:01.575 [notice] Time to compute a consensus.
Jan 02 17:55:01.592 [info] networkstatus_compute_consensus(): Generating consens
us using method 17.
Jan 02 17:55:01.784 [notice] Computed bandwidth weights for Case 3be (E scarce,
Wee=1, Wmd == Wgd) with v10: G=6270726 M=1646308 E=887962 D=4408384 T=13213380
Jan 02 17:55:01.846 [warn] ID on signature on network-status vote does not match
any declared directory source.
Jan 02 17:55:01.879 [info] dump_desc(): Unable to parse descriptor of type v3 ne
tworkstatus. See file unparseable-desc in data directory for details.
Jan 02 17:55:01.880 [err] networkstatus_compute_consensus(): Bug: Generated a ne
tworkstatus consensus we couldn't parse.
Jan 02 17:55:01.884 [warn] Couldn't generate a ns consensus at all!
Jan 02 17:55:01.885 [info] networkstatus_compute_consensus(): Generating consens
us using method 17.
Jan 02 17:55:02.078 [notice] Computed bandwidth weights for Case 3be (E scarce,
Wee=1, Wmd == Wgd) with v10: G=6270726 M=1646308 E=887962 D=4408384 T=13213380
Jan 02 17:55:02.140 [warn] ID on signature on network-status vote does not match
any declared directory source.
Jan 02 17:55:02.140 [err] networkstatus_compute_consensus(): Bug: Generated a ne
tworkstatus consensus we couldn't parse.
Jan 02 17:55:02.145 [warn] Couldn't generate a microdesc consensus at all!
Jan 02 17:55:02.145 [warn] Couldn't generate any consensus flavors at all.
```
Happens when my authority cert has expired.
Bug 1 is that it's logged as a "Bug:" yet it happens anyway. Bug 2 is that it's logged at [err] severity but Tor doesn't die.Tor: unspecifiedhttps://gitlab.torproject.org/legacy/trac/-/issues/10968Authorities should use past consensuses to decide how to vote on relay flags2020-06-13T14:34:16ZGeorge KadianakisAuthorities should use past consensuses to decide how to vote on relay flagsAt the moment, each authority decides what flags to assign to each node based on its own memory. This means that authorities that have been started recently have a different impression -- compared to more long-lived authorities -- about some relays.
Something that might make more sense is if authorities used past consensuses to get a better idea about the stability and speed of relays.
Since authorities don't keep past consensuses around, a way to do the above might be a script that each authority runs, which downloads the past consensuses, calculates statistics about all nodes, and writes those statistics to a file. The authority then loads that file and uses it to update its knowledge base.
There might be better approaches.Tor: unspecifiedhttps://gitlab.torproject.org/legacy/trac/-/issues/11121Revocation process for authority keys2020-06-13T14:34:34ZNick MathewsonRevocation process for authority keysRight now, we don't have a proposal that explains how to do revocation on an authority's signing keys. We should write one, and eventually implement it.Tor: unspecifiedhttps://gitlab.torproject.org/legacy/trac/-/issues/11158Write a proposal for a "couldn't reach consensus" statement2020-06-13T14:34:44ZNick MathewsonWrite a proposal for a "couldn't reach consensus" statementWhen authorities can't reach a consensus, it would be good if every authority who couldn't reach consensus would sign an "I couldn't reach consensus" statement, so that it's easier to distinguish between "there's a consensus that I can't find" and "There is possibly/maybe no consensus this period."
This may be good for other stuff too; I don't remember all the details from the dev mtg discussion.
Somebody should write a proposal for this.Tor: unspecifiedhttps://gitlab.torproject.org/legacy/trac/-/issues/11207Sybil selection should be trickier to game2020-06-13T14:34:46ZNick MathewsonSybil selection should be trickier to gameIn response to some of the hidden service attack papers from 2013, we made it harder to use sybil-based tricks to move around the HSDir hash ring. But really, we should come up with a better way to shut down sybil-based tricks in general, in case there are more that we don't know about.
One place to start would be with the question: how often does the sybil code actually get invoked for legit nodes not run by security researchers? If the answer is "infrequently", then perhaps we could move to an even simpler, blunter approach of "Call all nodes on an IP down for as long as there are too many verified-connectable nodes on that IP."
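The blunter rule could be sketched as below. The `rs_t` type is an invented simplification of a status entry, and the real code would also need the "verified-connectable" check; this just shows the per-IP cap itself.

```c
#include <assert.h>

#define MAX_PER_IP 2

typedef struct rs_t {
  unsigned addr;     /* IPv4 address, host order */
  int is_running;
} rs_t;

/* Mark everything beyond the first MAX_PER_IP entries on each address
 * as down.  Assumes the array is already sorted by addr and, within an
 * address, in whatever preference order the caller wants to keep. */
static void
cap_sybils(rs_t *rs, int n)
{
  int run = 0;   /* entries seen so far on the current address */
  for (int i = 0; i < n; i++) {
    run = (i > 0 && rs[i].addr == rs[i - 1].addr) ? run + 1 : 1;
    if (run > MAX_PER_IP)
      rs[i].is_running = 0;
  }
}
```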
Or we might take another approach to selecting which nodes to list. #8710 isn't right, but perhaps something else might be.Tor: unspecifiedhttps://gitlab.torproject.org/legacy/trac/-/issues/11264Relay has Exit flag but short policy says reject *?2020-06-13T14:40:45ZRoger DingledineRelay has Exit flag but short policy says reject *?https://atlas.torproject.org/#details/65C35C03571307D7546D6978605A6B11B473F6EE
its short exit policy is reject *:*
but check out its actual exit policy
and it has the Exit flag
This seems like a contradiction, yes?Tor: unspecifiedhttps://gitlab.torproject.org/legacy/trac/-/issues/11327Dir auths should choose Fast and Guard flags by consensus weight if they don'...2020-06-13T14:35:06ZRoger DingledineDir auths should choose Fast and Guard flags by consensus weight if they don't measureIn #8435 we made directory-authorities-that-run-bwauths stop voting Fast or Guard for relays they hadn't measured yet.
But as I pointed out in https://trac.torproject.org/projects/tor/ticket/8435#comment:13, since only a minority of dir auths run bwauths, the majority of dir auths are still voting Fast and Guard based on descriptor bandwidths.
So while the title of ticket #8435 says "Ignore advertised bandwidths for flags once we have enough measured bandwidths", the ChangeLog entry is more accurate:
```
- Directory authorities that have more than a threshold number
of relays with measured bandwidths now treat relays with unmeasured
bandwidths as having bandwidth 0. Resolves ticket 8435.
```
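The original goal of #8435 — Fast for roughly the top 7/8 of relays by consensus weight, Guard for the top 1/2 (before the other Guard constraints) — reduces to a rank threshold, sketched here with invented helpers rather than Tor's actual flag-assignment code.

```c
#include <assert.h>
#include <stdlib.h>

/* sort consensus weights in descending order */
static int
cmp_desc(const void *a, const void *b)
{
  long x = *(const long *)a, y = *(const long *)b;
  return x < y ? 1 : x > y ? -1 : 0;
}

/* The minimum weight a relay must have to be in the top num/den
 * fraction of the n relays.  Note: sorts `weights` in place. */
static long
weight_threshold(long *weights, int n, int num, int den)
{
  qsort(weights, n, sizeof(long), cmp_desc);
  int cutoff = (n * num) / den;   /* number of relays granted the flag */
  if (cutoff < 1)
    cutoff = 1;
  return weights[cutoff - 1];
}
```

An authority could then vote Fast for relays at or above `weight_threshold(w, n, 7, 8)` and consider Guard for those at or above `weight_threshold(w, n, 1, 2)`.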
We should at some point actually do the original goal, which is to give Fast to the 7/8s of relays whose consensus weights are highest, and Guard to the 1/2 of relays whose consensus weights are highest and who match the other guard constraints.Tor: unspecifiedTvdWTvdWhttps://gitlab.torproject.org/legacy/trac/-/issues/11328Dir auths should compute Guard WFU using the consensus, not private history2020-06-13T14:35:07ZRoger DingledineDir auths should compute Guard WFU using the consensus, not private historyCurrently directory authorities track the presence of each relay and keep notes about their view locally. Then when it comes time to vote about Guard, they look at their notes and decide what fraction of the past interval the relay was up for.
But it doesn't matter anymore to clients whether the directory authority could reach the relay for that time. The question as of the v3 directory design is whether the relay was in the consensus.
So it seems like the directory authorities should be basing their measurements off "is it in the consensus this hour".

Tor: unspecified

https://gitlab.torproject.org/legacy/trac/-/issues/11448
Dirauths must support multiple relay identity keys at once
2020-06-13T14:35:22Z Robert Ransom

As discussed on [https://blog.torproject.org/blog/openssl-bug-cve-2014-0160], directory authorities must rotate their relay identity keys in order to recover from possible exposure due to the ‘Heartbleed’ bug. (A dirauth's relay identity key could be used by a MITM attacker to feed clients an outdated consensus, for example.)
There are two requirements in order to do this without causing a network meltdown:
* A dirauth must be able to sign relay descriptors using multiple relay identity keys at once.
* A dirauth must be able to operate multiple ORPorts at once, with (possibly) different relay identity keys.

Tor: unspecified

https://gitlab.torproject.org/legacy/trac/-/issues/11458
A newer signing cert should inoculate us against older ones?
2020-06-13T14:35:25Z Roger Dingledine

Sometime in the past year or two somebody might have stolen 7 of the 9 active directory signing keys. They don't expire for several months or more.
If the existing directory authorities rotate to new signing keys, that doesn't really change the fact that older ones remain valid.
If we change Tor to look at its cached-certs and refuse to believe in a signing key if it's convinced there's a newer one, then we can invalidate older ones by generating newer ones.
That approach wouldn't protect users who are bootstrapping for the first time, but it would protect them if they'd already bootstrapped. Is this a worthwhile improvement?
Note that we'd have to sort out edge cases like #11457 -- basically in this case it would mean that if you ever generate a signing key too far in the future and then also want to go back to an earlier one, you're fucked. But has anybody ever needed to do that?
To tolerate rotation better, we'd want the logic to be something like the suggested fix in #11454: only disbelieve a cert if a) we have a newer one and b) the one we're disbelieving is sufficiently older than now.
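That two-part rule might look like this sketch (hypothetical names and Unix-timestamp arguments, not the actual tor implementation):

```python
def should_disbelieve_cert(cert_published, newest_published, now, min_age):
    """Sketch of the #11454-style rule: distrust a signing cert only if
    (a) we know a strictly newer one, and (b) the cert we're distrusting
    was published at least `min_age` seconds before the present time.
    All arguments are Unix timestamps, except min_age (seconds)."""
    have_newer = newest_published > cert_published
    sufficiently_old = now - cert_published >= min_age
    return have_newer and sufficiently_old
```

Condition (b) is what keeps ordinary, recent rotation from invalidating a cert that is still legitimately in use.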
We could also think about shipping with a cached-certs file to keep raising the bar as users upgrade.

Tor: unspecified

https://gitlab.torproject.org/legacy/trac/-/issues/11624
Malicious relays may be able to be assigned Exit flag without exiting anywhere
2020-06-13T14:35:49Z Tom Ritter (tom@ritter.vg)

The IANA registry for multicast addresses indicates there are many /8's that are not yet allocated[0], such as 232.0.0.0-232.255.255.255.
The current voting mechanism in exit_policy_is_general_exit_helper allows an Exit flag to be assigned if it supports exiting to at least one /8 for 2 out of 3 ports of [80, 443, 6667]. exit_policy_is_general_exit_helper calls tor_addr_is_internal, which only looks for the following IPv4 spaces: 10/8, 0/8, 127/8, 169.254/16, 172.16/12, 192.168/16.
A relay could put one of the unallocated IPv4 blocks in its exit policy and fool the Directory Authorities. Of course, if such a relay really wanted to do this, they could also set their relay up to exit to an uninteresting /8 no one would ever visit, such as one of the many military/DoD /8's.
Zack Weinberg's thread on tor-relays seems to have a good collection of addresses[1]. Other sources are the exclude list from massscan[2] and the IANA registry[3].
This is probably doubly true for IPv6, where the check only looks for fc00/7, fe80/10, fec0/10, but right now exit_policy_is_general_exit_helper ignores IPv6.
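A simplified stand-in for the internal-space check described above (the network list is the one from this ticket; the helper name is hypothetical) shows why an unallocated block like 232/8 slips through:

```python
import ipaddress

# The IPv4 spaces tor_addr_is_internal recognizes, per this ticket.
INTERNAL_NETS = [ipaddress.ip_network(n) for n in
                 ("10.0.0.0/8", "0.0.0.0/8", "127.0.0.0/8",
                  "169.254.0.0/16", "172.16.0.0/12", "192.168.0.0/16")]

def is_internal(addr):
    """Return True iff `addr` falls in one of the recognized internal spaces."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in INTERNAL_NETS)
```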
[0] http://www.iana.org/assignments/multicast-addresses/multicast-addresses.xhtml
[1] https://lists.torproject.org/pipermail/tor-relays/2014-April/004431.html
[2] https://github.com/robertdavidgraham/masscan/blob/master/data/exclude.conf
[3] http://www.iana.org/assignments/ipv4-address-space/ipv4-address-space.xhtml

Tor: unspecified

https://gitlab.torproject.org/legacy/trac/-/issues/11742
Remove /tor/dbg-stability.txt URL served by directory authorities
2020-06-13T14:36:02Z Karsten Loesing

Tor 0.2.1.6-alpha added a /tor/dbg-stability.txt URL on directory authorities to help debug WFU and MTBF calculations. But nobody is using it, and directory authorities shouldn't expose any more data than necessary. Also, the better approach to debugging how relays are included in votes is proposal 164. We should remove this URL.

Tor: 0.2.5.x-final

https://gitlab.torproject.org/legacy/trac/-/issues/13059
Create bad-relays file
2020-06-13T14:38:18Z Sebastian Hahn

In the wake of #12899, it became apparent that redoing the approved-routers file is a good idea. It'll be replaced by a torrc-style file called bad-relays.

Tor: unspecified

https://gitlab.torproject.org/legacy/trac/-/issues/13078
Add a ROUTERSET_ML config type, accept spaces in fingerprints?
2020-06-13T14:38:22Z Sebastian Hahn

In my work for #13059, I stumbled over the fact that CONFIG_TYPE_ROUTERSET only accepts a single line. This makes editing a large file cumbersome if we use the \ syntax to end lines.
Instead, I would propose adding a CONFIG_TYPE_ROUTERSET_ML option which unions the specified routersets.
Also, how about teaching routersets in general to ignore spaces in a fingerprint? arma notes that this might confuse people who think spaces should act as a separator, but this doesn't work currently and also goes against the manpage, so I wouldn't worry too much about it.
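The space-ignoring behavior proposed here could be as simple as normalizing before parsing (a hypothetical helper, not existing tor code; the optional `$` prefix handling is an assumption):

```python
def normalize_fingerprint(fp):
    """Hypothetical helper: drop surrounding whitespace, an optional
    leading '$', and any interior spaces before parsing a fingerprint,
    as the ticket proposes."""
    fp = fp.strip().lstrip("$")
    return fp.replace(" ", "").upper()
```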
What do you think? I really want to do the former; the latter I don't care as much about, though I believe it would be nice to have.

Tor: unspecified

https://gitlab.torproject.org/legacy/trac/-/issues/13167
Export dirauth files via directory protocol
2020-06-13T14:38:43Z Sebastian Hahn

Metrics downloads a few files (consensus, descriptors, extrainfo, v3 votes) from dirauths for further processing. It'd be good if all these files could be served by Tor directly, as this would alleviate the need for the dirauth ops to take special steps to make these files available.

Tor: unspecified

https://gitlab.torproject.org/legacy/trac/-/issues/13234
Consensus Algorithm Causes Flip-Flopping
2020-06-13T14:39:03Z Trac

I had a relay running on 94.23.214.156. It's an unmetered VPS that is NATed with other VPSes, so everyone ends up with the same IPv4 address, but on different ports with port forwarding. Everyone gets their own IPv6 address, but AFAIK, you can't run a relay without IPv4.
This was fine initially, as my relay just ran on a high-numbered port. [Currently, there are two other relays using the same IP](https://globe.torproject.org/#/search/query=94.23.214.156). This apparently causes the consensus algorithm to flip-flop, keeping any of the relays from becoming stable.
To mitigate this, I've disabled my relay, but this is a less than ideal situation, especially if someone else starts running a relay.
Relevant IRC discussion:
```
<Sebastian> well, this situation totally sucks.
<Sebastian> I think it is a Tor bug, too.
<Sebastian> because the dirauths disagree on who they think should go in the consensus
<Sebastian> so there's flopping
<pipeep> Ouch.
<Sebastian> so of the three relays doing potentially useful things, zero are useful atm
<pipeep> Sebastian, well, I can shut down my relay for now, so at least there won't be any flip-flopping.
<pipeep> And I can contact one of the two other relay operators, and we can decide based on who has the beefier box
* galex-713 has quit (Ping timeout: 480 seconds)
<pipeep> The other one didn't appear to put valid contact information
<Sebastian> that would be nice. You can also file a Tor bug with the information so other people can see that this is an issue
...
<pipeep> Sebastian, what's the issue exactly? That the consensus algorithm is unstable?
<Sebastian> that's one of the issues, the other issue is imo the restriction to two relays/IP itself
```
**Trac**:
**Username**: pipeep

Tor: unspecified

https://gitlab.torproject.org/legacy/trac/-/issues/13297
compute_weighted_bandwidths() broken for dirauths
2020-06-13T14:46:43Z George Kadianakis

I suspect that `compute_weighted_bandwidths()` is broken for dirauths. All the booleans `is_guard`, `is_exit`, etc. are populated according to the `node_t`.
However, `nodelist_set_consensus()` which creates those `node_t`s does not fill in those fields if we are a dirauth:
```
if (!authdir) {
node->is_valid = rs->is_valid;
node->is_running = rs->is_flagged_running;
node->is_fast = rs->is_fast;
node->is_stable = rs->is_stable;
node->is_possible_guard = rs->is_possible_guard;
...
```
I don't think this has any big implications, but dirauths are probably doing the wrong path selection. Maybe it's more important if someone is doing bwauth measurements using the dirauth code (if that even makes sense).

Tor: unspecified

https://gitlab.torproject.org/legacy/trac/-/issues/13928
Tor Authorities reachability testing is predictable and sequential
2020-06-13T14:40:54Z teor

In the tor network, all tor authorities test reachability in the same, predictable sequence. Each authority uses the same sequence, and, if started at similar times (a 10 second window every 1280 seconds), they will start at the same point. (This is a particular issue with test networks.)
I'd like to randomise the start point and progression of the sequence, while keeping the property that each 1280 second cycle tests all routers.

Tor: unspecified

https://gitlab.torproject.org/legacy/trac/-/issues/13929
Increase Authority reachability testing rate with low TestingAuthDirTimeToLearnReachability
2020-06-13T14:40:55Z teor

In a TestingTorNetwork, when TestingAuthDirTimeToLearnReachability is much lower than its normal value of 30 minutes, bootstrap will happen much more reliably if we test reachability at a proportionally faster rate.
I'd like to multiply the number of routers tested every 10 seconds by the proportion that TestingAuthDirTimeToLearnReachability is smaller than the expected 1280 second cycle length.

Tor: unspecified

https://gitlab.torproject.org/legacy/trac/-/issues/14034
Make TestingDirAuthVoteGuard/Exit/HSDir and AssumeReachable less essential in test networks
2020-06-13T14:41:22Z teor

Currently, we need to use `TestingDirAuthVoteGuard *`, `TestingDirAuthVoteExit *`, and `AssumeReachable 1` to get a test network to bootstrap in under a minute. With #8243, we may need to create a `TestingDirAuthVoteHSDir *` option as well.
These are rather blunt instruments to get bootstrap working.
The changes in #13718 and (probably) #13929 ensure that testing networks bootstrap in 30s, without using `TestingDirAuthVoteExit *` or `AssumeReachable 1`. This provides a comprehensive method of testing network / exit bootstrap.
But it would be great to be able to test Guard/HSDir bootstrap too - perhaps by tweaking some settings in the chutney `torrc_templates`, or perhaps by fixing the implementation of one or more of tor's `Testing...` options (i.e. speeding up Guard/HSDir flag assignment in test networks).

Tor: unspecified

https://gitlab.torproject.org/legacy/trac/-/issues/14150
Dirauths should expose the value of `MinUptimeHidServDirectoryV2` as a vote flag-threshold
2020-06-13T14:41:40Z George Kadianakis

I think it's important that `MinUptimeHidServDirectoryV2` is a public value for each directory authority, so that we can monitor which authorities have switched to the newest value (it used to be 26 hours, now it's 96 afaik).
I suggest that this gets exposed in the `flag-thresholds` line of the vote documents. If other people think this is reasonable, it will require an easy torspec patch and a tor patch.

Tor: unspecified

https://gitlab.torproject.org/legacy/trac/-/issues/14267
We should be smarter about fetching all missing votes
2020-06-13T14:41:59Z Nick Mathewson

If something has gone quite wrong, and we as an authority have no votes, we'll try to fetch every vote from every other authority. That's quite a lot of data! We ran into trouble with this as #14261, and increased the limit, but the base scenario here isn't so great.

Tor: unspecified

https://gitlab.torproject.org/legacy/trac/-/issues/14354
Improve torflow engineering quality and deployment procedure
2020-06-13T14:42:05Z George Kadianakis

**This ticket used to be about improving all dirauth scripts, but now it's specific to torflow.**
From talking to Sebastian and weasel, it seems to me that dirauth operators are having trouble sysadmining all these little dirauth scripts. Furthermore, many of the dirauth operators are not even running scripts like bw measurement, because of the pain of setting them up and supporting them.
With #9321 introducing another script, #8244 requiring yet another, and the peerflow system that might replace the bw auths, it seems that we will need to find a solution to this problem. Otherwise, only 1-2 dirauth ops (that are also Tor devs) will run each script, which is not good.
Unfortunately, I don't have a very good solution to propose here.
The obviously bad idea would be to bake all these scripts into little-t-tor. But this scales terribly, and we all have hopes for making Tor more modular and this will just be a step backwards.
Another idea that is still not very good but maybe more implementable, is to revisit all these scripts and make them work with minimal setup effort. Then make debian packages that auto-work for all of them (or just a big meta-package), and ask dirauth operators to install them. Then assign someone to be the **maintainer** of all those scripts so that they take care of them when they break or when dirauth ops need help. However, it's unclear how many of these scripts can just auto-work without manual setup or how much Debian hackery that would involve, or whether all dirauth ops use APT-based systems.
At the same time we could make it more clear which dirauths are running which scripts, so that we can incorporate it as part of consensus health and warn dirauths ops that are not running certain scripts or have not updated them. Also, the "make Tor architecture more modular" giga-project might help here, since we could define a custom interface for all these scripts, and make it easier to plug them in Tor without torrc hacks. Also, maybe simply having a nice wiki page with all the current scripts and **good INSTALL instructions** might actually be effective.
What else could we do here that would make dirauths happier?

Tor: unspecified

https://gitlab.torproject.org/legacy/trac/-/issues/14763
Stochastic Guard Flag
2020-06-13T14:42:33Z Trac

Stochastic Guard Flag symptom and the subsequent interruption of optimal contribution to the network.
Tracking issue with some relays in family randomly losing Guard in consensus, and experiencing other low bandwidth situations sporadically.
# Not affected (always Guard once Guard):
* Mozilla14 , 209.119.188.42_p80 , globe.torproject.org/#/relay/12259E0A607EE888B23FBFA613C2F99E32408445
* Mozilla4 , 209.119.188.39_p443 , globe.torproject.org/#/relay/629B222746E76B1D531969187EDB9397DEC00838
# Randomish Guard loss affected:
* Mozilla13 , 209.119.188.42_p9090 , globe.torproject.org/#/relay/95AC12EEFD2F89DBE4185E6B5B29ED0CAA5FFFE2
* Mozilla12 , 209.119.188.41_p9090 , globe.torproject.org/#/relay/4DECCBA05C87BF208EA77C81B0BB1278B063884E
* Mozilla11 , 209.119.188.41_p443 , globe.torproject.org/#/relay/07931503E96CBC4284EC04534D586FE63DB70992
* Mozilla10 , 209.119.188.38_p9090 , globe.torproject.org/#/relay/BB1936B7D4F092CE83AE8590CAA07F7B56A7DF1B
* Mozilla9 , 209.119.188.38_p443 , globe.torproject.org/#/relay/57791ADDC8A775A546A2AA8F327C1D2647990162
* Mozilla6 , 209.119.188.40_p443 , globe.torproject.org/#/relay/9B0481C293B26E02994711046798D3D76A126F2E
* Mozilla5 , 209.119.188.40_p9090 , globe.torproject.org/#/relay/C7E8746FE94A8318693F4EA81800149AA6A201C6
* Mozilla2 , 209.119.188.37_p9090 , globe.torproject.org/#/relay/FD3BC0BEA5F73680E6F9F3BAC762160231DC3DB5
Note about traffic graphs when not-Guard: the middle relays appear to be handling plenty of capacity; some delta between usage in Guard or not is based on this type difference. Consider mean consensus weight fraction along with guard / middle probabilities.
Will update once consensus history is reviewed in detail for the period in question...
**Trac**:
**Username**: anon

Tor: unspecified

https://gitlab.torproject.org/legacy/trac/-/issues/15237
Improve tooling and usability for approved-routers file and its allies
2020-06-13T14:44:13Z Nick Mathewson

I gather from the directory authority operators that the current approved-routers situation is a royal pain. Mistakes can cause authorities to exit; address ranges need to go in one place while fingerprints go in another; different flags are forced on and off in different ways; there is no 'lint' tool; and nothing is fun.
We should design an improved interface for this, lest we prevail too much upon their patience.

Tor: unspecified

https://gitlab.torproject.org/legacy/trac/-/issues/16182
Replacing an older pending vote from this directory (dannenberg.torauth.de)
2020-06-13T14:46:31Z Roger Dingledine

The directory authorities failed to produce a consensus earlier today. Here's our hint:
```
May 25 05:55:11.355 [notice] Replacing an older pending vote from this directory (dannenberg.torauth.de)
```
Why, at the :55:11 mark, was moria1 willing to receive a new vote from dannenberg? This was after I'd published and received signatures, right? So it is way way too late?

Tor: unspecified

https://gitlab.torproject.org/legacy/trac/-/issues/16255
Guardfraction on dirauths screws up bandwidth weights
2020-06-13T15:18:11Z George Kadianakis

It seems that dirauths stopped including bandwidth weights on the consensus after Guardfraction got enabled:
https://lists.torproject.org/pipermail/tor-dev/2015-June/008908.html
Looking at weasel's dirauth we get plenty of related error messages:
```
[warn] Bw Weights error 1 for Case 3be (E scarce, Wee=1, Wmd == Wgd) v10. G=14793673 M=8310679 E=3428814 D=7040272 T=26698303 Wmd=-512 Wme=0 Wmg=2192 Wed=11025 Wee=10000 Wgd=-512 Wgg=7808 Wme=0 Wmg=2192 weight_scale=10000
```
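One sanity check implied by the error above: every Wxy value must lie in [0, weight_scale], so the negative Wmd/Wgd values are immediately invalid. A minimal sketch (a toy check, not tor's bandwidth-weight code):

```python
def weights_sane(weights, weight_scale=10000):
    """Return True iff every bandwidth weight lies within
    [0, weight_scale]. Negative values, like the Wmd=-512 and Wgd=-512
    seen in the log above, fail this check."""
    return all(0 <= w <= weight_scale for w in weights.values())
```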
That is, it seems like the bandwidth calculation of the dirauths got screwed a bit. This might be the result of using the guardfraction data during bandwidth calculation in `compute_weighted_bandwidths()` that might combine badly with #13297.
The negative Wmd/Wgd values seem weird here as well.
Let's disable the GuardFraction feature for now, till we figure out this issue.

Tor: unspecified
George Kadianakis

https://gitlab.torproject.org/legacy/trac/-/issues/16538
Limit the impact of a malicious HSDir
2020-06-13T14:47:18Z Roger Dingledine

An adversary who can control all six hsdir points for an onion service can censor it. You can observe lookups of it even if you control only some of these six.
So we should raise the bar for getting the HSDir flag, to raise the cost to an adversary who tries to Sybil the network in order to control lots of HSDir points. We should also make it harder to target which onion service your relay becomes the HSDir for.
There's a contradiction here: the more restrictive we are about who gets the HSDir flag, the more valuable it becomes to get it. At the one extreme (our current choice), we give it to basically everybody, so you have to get a lot of them before your attack matters. At the other extreme, we could give it to our favorite 20 relays, and if we choose wisely then basically no adversaries will get the HSDir flag. I suspect there are no sweet spots in between.
This ticket is the parent ticket for all the components of making bad HSDirs less risky.

Tor: unspecified

https://gitlab.torproject.org/legacy/trac/-/issues/16558
Dir auths should vote about Invalid like they do about BadExit
2020-06-13T14:47:21Z Roger Dingledine

Right now only three dir auths put BadExit in their known-flags, so it takes any 2 of those 3 to give a relay the BadExit flag, which causes an exit relay to not be used by clients for exiting. This is a great convenience for the dir auth operators, since otherwise we'd have to get a majority of all nine (i.e. five) dir auth operators to declare that a relay shouldn't be used for exiting, and we'd be much less agile in response to detected bad behavior.
In comparison, all nine dir auths put Valid in their known-flags, so it takes a full 5 of the 9 to give a relay the Valid flag -- or said another way, it takes a full 5 of the 9 to take it away.
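The voting rule described here can be sketched as a simple majority over only the authorities that list a flag in their known-flags (a toy model, not tor's voting code):

```python
def flag_granted(votes_for, n_voting):
    """A flag is assigned when a majority of the authorities voting on
    that flag (i.e. listing it in known-flags) vote for it."""
    return votes_for > n_voting // 2
```

With BadExit voted on by 3 authorities, 2 votes suffice; with Valid voted on by all 9, 5 are needed.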
In the context of malicious HSDir roles, this lack of agility is hurting us. We should explore ways to make !invalid more like !badexit.

Tor: unspecified

https://gitlab.torproject.org/legacy/trac/-/issues/16696
BWauth no-consensus fallback logic may need revision
2020-06-13T14:47:44Z starlight

At present both 'longclaw' and 'maatuska' have dropped out of the BW consensus ('longclaw' is restarting with new version, not sure about 'maatuska').
This has caused the BW consensus logic to revert to using relay self-measurement for BW weightings, due to fewer than three BW authorities participating.

The 10000 cap placed on self-measure values is causing super-fast relays serious demotion, and slower relays corresponding promotion, in the consensus weighting.

Possibly this may result in network imbalance issues. Some adjustment of the logic seems in order.
Unsure of component and left it unset.

Tor: unspecified

https://gitlab.torproject.org/legacy/trac/-/issues/16849
clear_status_flags_on_sybil might want to clear more flags
2020-06-13T14:48:23Z teor

clear_status_flags_on_sybil contains a comment saying "it's easy to add a new flag but forget to add it to this clause."
It looks like we may have forgotten the following flags:
* is_hs_dir
* version_known?
* version_supports_extend2_cells?
* has_bandwidth
* has_exitsummary?
* bw_is_unmeasured? (set to 1?)
* bandwidth_kb
* has_guardfraction
* guardfraction_percentage
To deal with the root cause, should we instead zero out the entire `routerstatus_t`, then copy the fields we need back in?
(This would zero new fields on sybils by default.)
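The zero-then-copy-back idea could look like this sketch (a toy Python stand-in for `routerstatus_t` with hypothetical fields, not the C implementation):

```python
from dataclasses import dataclass

@dataclass
class RouterStatus:
    # Toy stand-in for routerstatus_t: identity plus a few flags.
    nickname: str = ""
    addr: str = ""
    is_exit: bool = False
    is_hs_dir: bool = False
    bandwidth_kb: int = 0

# Fields that survive a sybil reset; everything else defaults to zero.
KEEP_ON_SYBIL = ("nickname", "addr")

def clear_for_sybil(rs):
    """Sketch of the proposed root-cause fix: rebuild the status from
    all-default fields, copying back only the ones we explicitly keep,
    so any newly added flag is cleared on sybils by default."""
    kept = {f: getattr(rs, f) for f in KEEP_ON_SYBIL}
    return RouterStatus(**kept)
```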
We could also implement a unit test for clear_status_flags_on_sybil that checks that certain (important?) flags are cleared, or that all flags are cleared (?).

Tor: unspecified
ffmancera

https://gitlab.torproject.org/legacy/trac/-/issues/16978
Minority of hostile dirauths can influence consensus in dangerous ways
2020-06-13T14:48:46Z Sebastian Hahn

We like to claim that if a minority of dirauths is not honest, the worst they can do is manipulate the voting process in such a way that no consensus emerges, but not that a consensus emerges that is (at least partially) dictated by the bad actors. Unfortunately, this isn't the case for the opt-in features. If a majority of the dirauths opting in to features such as bad exit voting, bandwidth measurements, or voting for a specific parameter want to influence these values in the consensus, they don't require a majority of total dirauths to do that. This might not be so much of an issue with less important features like Naming, but since badexit and bandwidth weight directly influence path selection on the client, the authorities that opt in to those features have considerably more power over the consensus than those that do not.

Tor: unspecified

https://gitlab.torproject.org/legacy/trac/-/issues/17274
Some kind of append-only log for consensus documents and votes
2020-06-13T14:50:09Z Nick Mathewson
Our roadmap says this would be a good idea for January 2016, but it seems pretty huge. At least a completed design document would be nice.

Tor: unspecified
Linus Nordberg (linus@torproject.org)

https://gitlab.torproject.org/legacy/trac/-/issues/17275
Package directory authority scripts for debian in compliant packages
2020-06-13T16:13:36Z Nick Mathewson

In order to reduce the difficulty of being an authority, we should make (compliant) packages for the scripts that we hope authorities will run. This will (ideally) improve the code quality and usability of these scripts.
The scripts include:
* guard fraction
* bandwidth authority
* bad exit finder
* probably more!

Tor: unspecified

https://gitlab.torproject.org/legacy/trac/-/issues/17605
Stop HTTP caches storing or modifying X-Your-Address-Is from Tor Directory documents
2020-06-13T14:51:03Z teor

Some web caches (such as Farahavar's previous cache) pass on the X-Your-IP-Address-Is header from one directory document to multiple clients. This causes the clients to guess the wrong IP address as their address.
I think we should add one or more of the following headers to every directory response:
`Pragma: no-cache` tells HTTP 1.0 compliant caches to disable caching entirely. (This will also disable caching for HTTP 1.1 caches unless we provide a more generous Cache-Control header, like the one below.)
`Connection: close X-Your-IP-Address-Is` tells HTTP 1.1 caches to never send out the X-Your-IP-Address-Is header, even to the first client requesting the document.
`Cache-Control: no-cache="X-Your-IP-Address-Is"` tells HTTP 1.1 caches to not cache the header at all. Alternately, if the cache doesn't support the no-cache="<header-name>" feature, it tells the cache not to cache the entire document. (This also causes the cache to attempt to revalidate the header, which might not be what we want, as Tor doesn't support cache revalidation.)
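Taken together, a directory response carrying all three headers might look like the sketch below. This is illustrative only; which headers tor should actually emit is exactly what this ticket is asking, and the address is an example from the documentation range. (Note that in HTTP/1.1 the `Connection` token list is comma-separated.)

```
HTTP/1.0 200 OK
X-Your-IP-Address-Is: 203.0.113.7
Pragma: no-cache
Cache-Control: no-cache="X-Your-IP-Address-Is"
Connection: close, X-Your-IP-Address-Is
Content-Type: text/plain
```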
I don't know enough about how caches typically behave to recommend which ones.
See:
* #16205 - bogus IP address / clock change from authority server
* https://lists.torproject.org/pipermail/tor-relays/2015-November/008137.html

https://gitlab.torproject.org/legacy/trac/-/issues/18295
Make shared random rounds configurable in test networks (teor, 2020-06-13)
From #16943:
Replying to [dgoulet](#note_23):
> Replying to [teor](#note_22):
> > A hard-coded `SHARED_RANDOM_N_ROUNDS` is going to make it really hard to test hidden services quickly using chutney. (We'll always be testing them using the default initial shared random value.) Can we make this configurable in test networks?
> > {{{
> > #define SHARED_RANDOM_N_ROUNDS 12
> > }}}
>
> The part I do not like about changing this value for testing networks is that we do NOT get the real behavior of the protocol... I'm not against a testing value, but I would do that after merge in a separate ticket.

https://gitlab.torproject.org/legacy/trac/-/issues/18321
Exclude our own vote from the consensus if we think our own vote is invalid (teor, 2020-06-13)
We're creating a vote that is invalid, but try to make a consensus anyway like nothing's wrong. Then we fail doing that as described above.

https://gitlab.torproject.org/legacy/trac/-/issues/18641
Teach the OOM handler about uploaded descriptors on a dirauth. (Nick Mathewson, 2020-06-13)
The OOM handler should know to do something with the descriptors that a dirauth has received via upload.

https://gitlab.torproject.org/legacy/trac/-/issues/18938
Authorities should reject non-UTF-8 content in ExtraInfo descriptors (teor, 2020-06-13)
In #18656, we discovered that authorities don't validate that ExtraInfo descriptors are printable ASCII before accepting them.
Authorities (and HSDirs) should check every ~~directory~~ extrainfo document they receive consists only of ~~"printing ASCII"~~ UTF-8, as defined in ~~torspec...~~ prop285:
https://gitweb.torproject.org/torspec.git/tree/proposals/285-utf-8.txt
~~I've heard others say that the following lines allow non-ASCII content, but I'm not sure if that's actually the case, and if it is, how many relays this would affect:~~
* ~~the "platform" line in relay descriptors, which is a "human-readable string",~~
* ~~the contact "info" line in relay descriptors, which has an undefined format.~~
Edit: allowing users to spell their names correctly is important. That's why we'll use UTF-8 for relay descriptors, votes, and consensuses.
~~If it is, I'd recommend we make them all ASCII for consistency, and update torspec to clarify, and include it as a "major" change in an 0.2.x tor release.~~
~~(This means that some users will be unable to spell their names correctly. But there was never any guarantee that 8-bit characters in "info" would be interpreted as users intended. I think security is more important here.)~~
(Assignee: Neel Chauhan)

https://gitlab.torproject.org/legacy/trac/-/issues/19011
Use of maxunmeasuredbw and bwweightscale is broken in consensus (Nick Mathewson, 2020-06-13)
While refactoring, I noticed this code in dirvote.c:
```
if (params) {
if (strcmpstart(params, "bwweightscale=") == 0)
bw_weight_param = params;
else
bw_weight_param = strstr(params, " bwweightscale=");
}
if (bw_weight_param) {
int ok=0;
char *eq = strchr(bw_weight_param, '=');
if (eq) {
weight_scale = tor_parse_long(eq+1, 10, 1, INT32_MAX, &ok,
NULL);
if (!ok) {
log_warn(LD_DIR, "Bad element '%s' in bw weight param",
escaped(bw_weight_param));
weight_scale = BW_WEIGHT_SCALE;
}
} else {
log_warn(LD_DIR, "Bad element '%s' in bw weight param",
escaped(bw_weight_param));
weight_scale = BW_WEIGHT_SCALE;
}
}
```
Looking at the use of tor_parse_long(): since "next" is NULL, any unconverted characters should make it give an error, making us use the default value.

https://gitlab.torproject.org/legacy/trac/-/issues/19033
Fuzz out of bounds reads during nodelist processing (teor, 2020-06-13)
We want to make sure we fixed all the issues with nodelist processing in #19032.
arma says this could be SponsorS or SponsorU.

https://gitlab.torproject.org/legacy/trac/-/issues/19045
Keep trying to form a new shared random value during the next commit phase (teor, 2020-06-13)
Currently, the shared random system treats the first vote of the next commit phase specially: it's the only time it tries to agree on a new shared random value.
But we can try to agree 12 times on a new shared random value like this:
* for the first consensus in the new commit period:
* vote the calculated shared random value;
* for subsequent consensuses in the new commit period:
* if we have an agreed shared random value from a trusted, previous consensus in the period, vote that value;
* if not (that is, if the new shared random value is missing from the consensus, or there is no trusted consensus), continue to vote our calculated value.
This way, we try up to 12 times to agree on a shared random value. (But we never change an agreed value after we've agreed on it.)

https://gitlab.torproject.org/legacy/trac/-/issues/19162
Make it even harder to become HSDir (George Kadianakis, 2020-06-13)
In #8243 we started requiring the `Stable` flag for becoming HSDirs, but this is still not hard enough for motivated adversaries. Hence we need to make it even harder for a relay to become an HSDir, so that only relays that have been around for long get the flag. After prop224 gets deployed, there will be less incentive for adversaries to become HSDirs, since they won't be able to harvest onion addresses.
Until then, our current plan is to increase the bandwidth and uptime required to become an HSDir to something almost unreasonable. For example, requiring an uptime of over 6 months, or maybe requiring that the relay is in the top 1/4th of uptimes on the network.
(Assignee: Roger Dingledine)

https://gitlab.torproject.org/legacy/trac/-/issues/19179
Refactor functions that handle 'packages' in consensus/votes (George Kadianakis, 2020-06-13)
This is a side issue of #18840.
The code managing packages for consensuses and votes seems to be of particularly low quality.
See compute_consensus_package_lines() doing ad-hoc parsing. And see validate_recommended_package_line() doing more ad-hoc parsing and having wrong return value patterns. Fortunately, both of them are weakly tested in the unittests.
Maybe we should refactor and add more testing for these funcs?

https://gitlab.torproject.org/legacy/trac/-/issues/19304
Write a proposal for having dirauths push to fallbacks, rather than pull. (Nick Mathewson, 2020-06-13)
This will require some kind of chatter mechanism for fallbacks to circulate documents.

https://gitlab.torproject.org/legacy/trac/-/issues/19305
Write a proposal for separating "upload descriptors here" from the rest of what dirauths do. (Nick Mathewson, 2020-06-13)

https://gitlab.torproject.org/legacy/trac/-/issues/19306
Write a proposal for removing liveness-testing from dirauths. (Nick Mathewson, 2020-06-13)

https://gitlab.torproject.org/legacy/trac/-/issues/19507
tor and tor-gencert disagree on what a month is (weasel (Peter Palfrader), 2020-06-13)
If I create a new authority-signing-key on June 1st at 00:00 with a lifetime of 5 months using tor-gencert, then the new authority signing certificate will expire November 1st at 00:00.
If I create a new identity signing key on June 1st at 00:00 with a life time of 5 months using tor, then the new identity signing cert will expire October 31st at 06:00.
Obviously this disagreement is suboptimal.

https://gitlab.torproject.org/legacy/trac/-/issues/19570
Shared random round gets out of sync (teor, 2020-06-13)
I have two test directory authorities which disagree on which round it is. I think it's because one is voting every half hour (due to consensus failure), and the other is voting every hour. It could also be due to their start times.
I started this test directory authority at 00:54:30 UTC
(Log times UTC+10)
`Jul 05 12:00:01.000 [info] sr_state_update(): SR: State prepared for upcoming voting period (2016-07-05 03:00:00). Upcoming phase is commit (counters: 3 commit & 0 reveal rounds).`
I started another test directory authority at 01:47:52 UTC
(Log times are UTC)
`Jul 05 02:00:01.000 [info] sr_state_update(): SR: State prepared for upcoming voting period (2016-07-05 03:00:00). Upcoming phase is commit (counters: 2 commit & 0 reveal rounds).`
Is this just a logging / counting issue, or a serious bug that could affect the consensus?

https://gitlab.torproject.org/legacy/trac/-/issues/19656
Shared random state doesn't expire when clock changes? (teor, 2020-06-13)
My test directory authority Evelyn was offline / asleep for a day or so.
When it came back online, it seemed to have state from a previous shared random round:
(Log times are UTC+10)
```
Jul 09 20:52:31.000 [info] should_keep_commit(): SR: Received altered commit from 4CAEC248004A0DC6CE86EBD5F608C9B05500C70C in commit phase.
Jul 09 20:52:31.000 [info] should_keep_commit(): SR: Received altered commit from 5604E1632E4583D6A43C6A56C16412228E0AF12A in commit phase.
Jul 09 20:52:31.000 [info] should_keep_commit(): SR: Received altered commit from 587B958421F6952069C5853AF42F5A466DC6AD16 in commit phase.
Jul 09 20:52:31.000 [info] should_keep_commit(): SR: Received altered commit from B058CA27CFCF0BDF09119759463C7607EE0C1CC1 in commit phase.
Jul 09 20:52:31.000 [info] should_keep_commit(): SR: Received altered commit from E11AA24864B3CD504EF951090C351DBD96BE68F4 in commit phase.
Jul 09 20:52:31.000 [info] should_keep_commit(): SR: Received altered commit from ED8ABBE6A11336AE711926FD7C98948AB9FE96D7 in commit phase.
```

https://gitlab.torproject.org/legacy/trac/-/issues/19777
tor-gencert should warn nicely when PEM passphrases are too short (Isis Lovecruft, 2020-06-13)
If you do `$ ./src/tools/tor-gencert --create-identity-key` and then give a horribly insecure passphrase like "tor" as the passphrase to the PEM certificate, tor-gencert will give this rather cryptic error message:
```
Jul 28 18:46:45.709 [err] Couldn't write identity key to ./authority_identity_key
Jul 28 18:46:45.710 [err] crypto error while Writing identity key: problems getting password (in PEM routines:PEM_def_callback)
Jul 28 18:46:45.710 [err] crypto error while Writing identity key: read key (in PEM routines:DO_PK8PKEY)
```
It would be nice if instead it just said "I require a passphrase with a minimum of 8 characters!" or something like that.

https://gitlab.torproject.org/legacy/trac/-/issues/20055
Remove relays that fail to rotate onion keys from the consensus (teor, 2020-06-13)
On #7164, a cypherpunks notes that ~40 relays fail to rotate their onion keys. This should be addressed by identifying these relays, and adding them to the DirAuths' AuthDirInvalid or AuthDirReject lists.
First, we need to update torspec/dir-spec.txt to say that relays SHOULD rotate their onion keys every 7 days, and MUST rotate them every N days. (I suggest 14 or 28.)
Then we can modify DocTor to check for relays in the consensus that have had the same onion key for N days. (I think DocTor is the right place for this check.)
This won't catch cases where relays repeat onion keys, but it will suffice to catch the most obvious misconfiguration - a read-only onion key file.

https://gitlab.torproject.org/legacy/trac/-/issues/20165
When a relay advertises a new, unreachable address, OR reachability can succeed via the old address (teor, 2020-06-13)
If a relay has advertised a reachable address in the past, and continues listening on the old address, clients and relays will continue to contact Tor on that address for a few hours.
If the relay starts advertising a new, unreachable address, ORPort reachability will appear to succeed for that new address, because Tor doesn't (and probably can't) check the address clients are connecting to is the one it actually advertised.
And Tor doesn't do ongoing reachability checks, so it publishes its descriptor based on the mistaken reachability, and assumes everything is OK from then on.
Fortunately, the mandatory DirPort check catches this in 0.2.8 and later.

https://gitlab.torproject.org/legacy/trac/-/issues/20272
constraint broken in case 1 of consensus weight calculation (pastly, 2020-06-13)
[dir-spec](https://gitweb.torproject.org/torspec.git/tree/dir-spec.txt#n2648) specifies the constraint `Wmg == Wmd` in case 1, but also that
```
Wmg = (weight_scale*(2*G-E-M))/(3*G)
Wmd = weight_scale/3
```
This constraint is impossible to satisfy unless it just happens that `(2G-E-M)/G == 1`.
Indeed, in my testing of `networkstatus_compute_bw_weights_v10`, I found that `Wmg` and `Wmd` were calculated as above, but the constraint was ignored.
The easy solution is to change the spec, but that would ignore the logic that went into having that constraint in the first place. I do not know the logic that went into designing the consensus weight calculations, so I do not know if this solution is appropriate.

https://gitlab.torproject.org/legacy/trac/-/issues/20284
consensus weight case 2b3 does not follow dir-spec (pastly, 2020-07-31)
[dir-spec](https://gitweb.torproject.org/torspec.git/tree/dir-spec.txt#n2681) says the following.
```
If M > T/3, then the Wmd weight above will become negative. Set it to 0
in this case:
Wmd = 0
Wgd = weight_scale - Wed
```
The code dutifully sets `Wmd` to 0, but neglects `Wgd`.
I assume the spec is correct and the intended behavior. Branch incoming once I get a ticket number.
(Assignee: pastly)

https://gitlab.torproject.org/legacy/trac/-/issues/20285
can't create valid case 2b3 consens weight calculation (pastly, 2020-06-13)
Even if #20284 is fixed, I can still come up with values that produce a `Wed` that is too large. Maybe I'm not trying hard enough, but I can't get case 2b3 to execute successfully.
For example, let
```
M=80
E=20
G=30
D=10
T=M+E+G+D
```
In case 2b2, `Wed = (weight_scale*(D - 2*E + G + M))/(3*D) = 26667`. That's bigger than `weight_scale`. It (and `Wmd`) trigger case 2b3, which doesn't do anything about a too large `Wed` and thus `networkstatus_check_weights()` fails.
I admit I don't know how reasonable the values are that I came up with above. I am writing test cases so #14881 can be closed though, and just about any weird combination should be handled without failing. Right?
I don't know what the correct resolution is, so no patch/branch incoming at this time.