Trac issues
https://gitlab.torproject.org/legacy/trac/-/issues

https://gitlab.torproject.org/legacy/trac/-/issues/1102
Queuing v3 signature for next consensus, an hour later? (Roger Dingledine)

On moria1, which I started at
Sep 21 01:51:04.434 (after some parts of the consensus generation were
supposed to start)
Sep 21 01:51:47.809 [notice] Uploaded a vote to dirserver 128.31.0.34:9031
Sep 21 01:51:47.832 [notice] Uploaded a vote to dirserver 216.224.124.114:9030
Sep 21 01:51:47.833 [notice] Uploaded a vote to dirserver 208.83.223.34:443
Sep 21 01:51:48.045 [notice] Uploaded a vote to dirserver 86.59.21.38:80
Sep 21 01:51:48.311 [notice] Uploaded a vote to dirserver 194.109.206.212:80
Sep 21 01:51:49.618 [notice] Uploaded a vote to dirserver 213.73.91.31:80
Sep 21 01:51:49.662 [notice] Uploaded a vote to dirserver 80.190.246.100:80
...
Sep 21 01:52:31.466 [notice] Time to fetch any votes that we're missing.
Sep 21 01:52:31.466 [notice] We're missing votes from 6 authorities. Asking every other authority for a copy.
...
Sep 21 01:55:01.379 [notice] Time to compute a consensus.
Sep 21 01:55:01.586 [notice] Consensus computed; uploading signature(s)
Sep 21 01:55:01.587 [notice] Signature(s) posted.
Sep 21 01:55:01.611 [notice] Got a signature from 128.31.0.34. Adding it to the pending consensus.
Sep 21 01:55:01.612 [notice] Uploaded signature(s) to dirserver 128.31.0.34:9031
Sep 21 01:55:01.763 [notice] Uploaded signature(s) to dirserver 216.224.124.114:9030
Sep 21 01:55:01.770 [notice] Uploaded signature(s) to dirserver 208.83.223.34:443
Sep 21 01:55:01.846 [notice] Uploaded signature(s) to dirserver 86.59.21.38:80
Sep 21 01:55:01.854 [notice] Got a signature from 86.59.21.38. Adding it to the pending consensus.
Sep 21 01:55:01.930 [notice] Uploaded signature(s) to dirserver 194.109.206.212:80
Sep 21 01:55:01.934 [notice] Got a signature from 194.109.206.212. Adding it to the pending consensus.
Sep 21 01:55:02.827 [notice] Got a signature from 208.83.223.34. Adding it to the pending consensus.
Sep 21 01:55:02.869 [notice] Got a signature from 216.224.124.114. Adding it to the pending consensus.
Sep 21 01:55:05.121 [notice] Got a signature from 213.73.91.31. Adding it to the pending consensus.
Sep 21 01:55:05.675 [notice] Uploaded signature(s) to dirserver 213.73.91.31:80
Sep 21 01:55:08.879 [notice] Got a signature from 80.190.246.100. Adding it to the pending consensus.
Sep 21 01:55:09.307 [notice] Uploaded signature(s) to dirserver 80.190.246.100:80
Sep 21 01:57:31.840 [notice] Time to fetch any signatures that we're missing.
Sep 21 02:00:01.204 [notice] Time to publish the consensus and discard old votes
Sep 21 02:00:01.231 [notice] Choosing expected valid-after time as 2009-09-21 07:00:00: consensus_set=1, interval=3600
Sep 21 02:00:01.300 [notice] Consensus published.
Sep 21 02:00:01.301 [notice] Choosing expected valid-after time as 2009-09-21 07:00:00: consensus_set=1, interval=3600
Sep 21 02:00:09.474 [notice] Got a signature from 38.229.70.2. Queuing it for the next consensus.
It's that last line that concerns me. Queuing for the next consensus that's 59 minutes
and 50 seconds from now? Shouldn't we either be adding it to the current consensus even
though it's late, or discarding it because it's late?
(Note that this isn't from an authority that moria1 recognizes)
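As a sanity check on the arithmetic (a sketch using the log timestamps above, truncated to whole seconds, which is why it shows 59:51 where the fractional-second log line gives roughly 59 minutes 50 seconds):

```python
from datetime import datetime

# The stray signature arrived just after the 02:00 consensus was
# published, so the "next consensus" it is queued for is the 03:00 one.
got_signature = datetime(2009, 9, 21, 2, 0, 9)
next_consensus = datetime(2009, 9, 21, 3, 0, 0)
wait = next_consensus - got_signature
print(wait)  # 0:59:51 -- nearly the full one-hour voting interval
```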
[Automatically added by flyspray2trac: Operating System: All]
Tor: unspecified

https://gitlab.torproject.org/legacy/trac/-/issues/1116
'Stable' flag assignment inconsistent (Tom Lowenthal)

Looking at a consensus document [though I used torstatus.all.de for ease of sorting data], it seems that the 'Stable' flag
is not being consistently assigned.
According to the v3 directory specification at https://git.torproject.org/checkout/tor/master/doc/spec/dir-spec.txt ,
routers with a weighted MTBF more than either the median or seven days should be marked stable, and MTBF data more
than a month old shouldn't be that relevant when assigning the flag. Since the median uptime is about 3 days, one would
roughly expect any router with more than 30 days of uptime (and which is still valid) to have the Stable flag.
However, when relays are sorted by uptime, several apparently long-running routers do not have the flag.
Since this data is liable to change as relays go up and down, here are some routers noted as not 'Stable' at the time of
writing. These routers have uptimes of more than a month, so their (correctly) weighted MTBF should certainly be more than
a week, and more than the median of about three days.
wie6ud6be - 148d
anonymde - 112d
torpfaffenederorg - 110d
rentalsponge - 70d
xhyG5r96QGlRqL - 57d
niugnip - 56d
oeiwuqej - 49d
gremlin - 42d
editingconfigishard - 39d
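A minimal sketch of the spec's rule as described above (hypothetical helper assuming weighted MTBFs in seconds are already computed; not Tor's actual implementation, which also decays old observations):

```python
SEVEN_DAYS = 7 * 24 * 3600

def median(values):
    ordered = sorted(values)
    return ordered[len(ordered) // 2]

def assign_stable(weighted_mtbf):
    """weighted_mtbf: {router nickname: weighted MTBF in seconds}.
    Per the rule above, a router gets Stable if its weighted MTBF is
    at least the median, or at least seven days."""
    med = median(weighted_mtbf.values())
    return {name for name, mtbf in weighted_mtbf.items()
            if mtbf >= med or mtbf >= SEVEN_DAYS}

# A 148-day router should certainly qualify when the median is ~3 days:
flags = assign_stable({"wie6ud6be": 148 * 86400,
                       "median-ish": 3 * 86400,
                       "young": 3600})
```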
[Automatically added by flyspray2trac: Operating System: All]
Tor: unspecified

https://gitlab.torproject.org/legacy/trac/-/issues/1238
Exit flag can be assigned to nodes that don't really exit (Sebastian Hahn)

The router b0red is flagged as Exit, even though its exit policy doesn't allow any exits.
Discovered by "dun" on #tor.
This is currently part of the consensus:
```
r b0red WCi6nB/t0u9ZtGBcrrWFgpXdjlg w+3Dl7l2fnUc0JhSMLchCL7RcjU 2010-02-02 00:21:48 80.190.250.90 443 80
s Exit Fast Guard HSDir Named Running Stable V2Dir Valid
v Tor 0.2.1.20
w Bandwidth=621
p reject 1-65535
```
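The "p" line is the telltale: a summary of "reject 1-65535" permits no exit at all. Here is a sketch of the check an authority could apply before granting the Exit flag (illustrative Python, not Tor's actual flag logic):

```python
def parse_ports(spec):
    """Expand a port list like '80,443' or '1-65535' into a set."""
    ports = set()
    for part in spec.split(","):
        if "-" in part:
            low, high = map(int, part.split("-"))
            ports.update(range(low, high + 1))
        else:
            ports.add(int(part))
    return ports

def policy_allows_exits(p_line):
    """Given a consensus policy summary line, report whether exiting
    to at least one port is permitted."""
    _, keyword, spec = p_line.split()
    ports = parse_ports(spec)
    if keyword == "accept":
        return len(ports) > 0
    return len(ports) < 65535  # 'reject': some port must remain open

print(policy_allows_exits("p reject 1-65535"))  # False: b0red exits nowhere
print(policy_allows_exits("p accept 80,443"))   # True
```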
descriptor:
```
@downloaded-at 2010-01-31 23:16:54
@source "194.109.206.212"
router b0red 80.190.250.90 443 0 80
platform Tor 0.2.1.20 on Linux i686
opt protocols Link 1 2 Circuit 1
published 2010-01-31 12:20:43
opt fingerprint 5828 BA9C 1FED D2EF 59B4 605C AEB5 8582 95DD 8E58
uptime 5097747
bandwidth 5242880 10485760 261098
opt extra-info-digest 535CE872B386F71E9DEA356B10E63E9D83789F57
onion-key
-----BEGIN RSA PUBLIC KEY-----
MIGJAoGBAM2wCZqUMEgPDdEsVrW1XfHrvqmOT1KYDMupz7h+DA5b56VMPOIyOG57
hKGliyW5gE7B/Qtt5EtasScqAFM+kV9BVXWVshFEF4tu2kWdFS8E4XKVks0NbTUU
2H/l0W/H2KdMy1bUuWyd7s1ftcuodb04Na3U/DS0t26Ta1kADWLZAgMBAAE=
-----END RSA PUBLIC KEY-----
signing-key
-----BEGIN RSA PUBLIC KEY-----
MIGJAoGBANB7P5x+7SON1dd2RkuqjNZaPsSPKoGKIOuq1IwSNDJR8+Y7T7jijgWe
ZKzvieP82XK1eDxKTdXCJbWR1X+V5a5XExt8RNszeslK02bC+Q4wTUtlM7n3319Q
UQrLTp++dVLa0LuNvlbux39tqAqriyn0hWI2JVEbkrp32N4l28SFAgMBAAE=
-----END RSA PUBLIC KEY-----
opt hidden-service-dir
opt allow-single-hop-exits
contact xxoes <xxoes at b0red.de>
reject 0.0.0.0/8:*
reject 169.254.0.0/16:*
reject 127.0.0.0/8:*
reject 192.168.0.0/16:*
reject 10.0.0.0/8:*
reject 172.16.0.0/12:*
reject 80.190.250.90:*
reject *:1-65534
reject *:65535
accept *:*
router-signature
-----BEGIN SIGNATURE-----
SVmtJeKcTUVyaZO8PfKtd0E1yQUR+TffgNo5AAgPOGLdjqmbIpFA2RqsfFqXK2Re
PQ34TxbgMKGxfZKDVXAfeQFVVQgFny8KqAlzDfytFUxOGvdcthHsfg/FJwbPneNU
eiNdn4E+ug8JjOcAKJ7EdfhmIKaWRXAg2NKZKWbNnRQ=
-----END SIGNATURE-----
```

Tor: unspecified

https://gitlab.torproject.org/legacy/trac/-/issues/1291
Relays that aren't Valid never get Running (Roger Dingledine)

We use router_is_active() for too many checks when directory authorities are
deciding how to handle relays that don't have the Valid flag.
Once upon a time, you could be missing a Valid flag and still get the Running
flag. That would cause clients to avoid using you except in circuit positions
specified in their 'AllowInvalidRelays' config option.
At present if we take away your Valid flag, we also necessarily take away your
Running flag.
We should sort out what we want to do. I think there is still a role for having
"dangerous" relays -- meaning you don't use them at the beginning or the end of your
path.
Maybe this means we should do away with the 'Valid' flag, and add a !badguard along
with !badexit?
[Automatically added by flyspray2trac: Operating System: All]
Tor: unspecified

https://gitlab.torproject.org/legacy/trac/-/issues/1690
Consensus Bandwidth Lacks Indication of Type (Damian Johnson)

On the client side there currently isn't a way of telling what type of measurement was used for the bandwidth value. For instance, if it reads "w Bandwidth=65700" there's no way to definitively tell if this is observed, measured, or weighted measured.

Tor: 0.2.4.x-final

https://gitlab.torproject.org/legacy/trac/-/issues/2282
Publish router descriptors rejected by the authorities or omitted from the consensus (Robert Ransom)

Right now, if a relay is dropped from the consensus, or its descriptor is rejected outright by the directory authorities, we won't find out that it has happened unless someone notices that their relay isn't working and tells us, and we can't find out why it happened unless we read the directory authorities' log files.
The directory authorities should:
* archive _all_ descriptors that are published to them, even if they are rejected or not included in the consensus;
* if a descriptor is rejected, record the reason in that archive; and
* if a relay is omitted from the consensus, record the reason in the archive.
The directory authority operators should:
* examine a sample of the descriptors that are not included in the consensus, for whatever reason;
* if the descriptors in the sample do not contain particularly sensitive information, begin publishing these otherwise unpublished descriptors.
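A sketch of the kind of archive record the first list asks for (field names and the reason string are hypothetical; this is not an existing Tor data structure):

```python
import time

def archive_descriptor(archive, fingerprint, descriptor,
                       status, reason=None):
    """Record every uploaded descriptor, including rejected ones and
    ones omitted from the consensus, together with the reason."""
    archive.setdefault(fingerprint, []).append({
        "received_at": int(time.time()),
        "descriptor": descriptor,
        "status": status,   # "accepted" / "rejected" / "omitted"
        "reason": reason,
    })

archive = {}
archive_descriptor(archive, "EXAMPLEFINGERPRINT", "<descriptor body>",
                   "rejected", reason="version disabled by #2204")
```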
Having this information available would make it easier to find relays that were disabled by #2204 and inform their operators that they need to upgrade Tor, for example.

Tor: unspecified

https://gitlab.torproject.org/legacy/trac/-/issues/2473
Develop a design to support multiple bridge authorities (Roger Dingledine)

The main thing blocking multiple bridge directory authorities right now is that we don't have a design for how it would work. For the normal directory authority design, we want all of them to know about all relays. But for bridge authorities, that would defeat the purpose. So we want some algorithms for distributing bridges over authorities, such that bridge users know where to go to look up a given bridge (probably as a function of its identity fingerprint). Perhaps the algorithm should provide stable answers even as we change the set of bridge authorities, and for clients and bridges running a variety of Tor versions. More generally, we need to figure out what functionality we want and what security properties we should shoot for.
Somebody should start with a proposal, and go from there.

Tor: unspecified

https://gitlab.torproject.org/legacy/trac/-/issues/2664
DoS and failure resistance improvements (Mike Perry)

We just had a near-catastrophe today when an IPv6 relay descriptor took out all of the Tor directory authorities. It took us ~10hrs to correct this issue. The maximum we had before the network breaks for everyone is 28hrs. We need to consider implementing some procedures to both reduce the amount of turnaround time it takes to diagnose and fix cases like this, and also enhance the network's ability to function if we can't bring the authorities back online within 28hrs.
This ticket is the parent ticket for a series of child tickets that have been created to remind us to create actual proposals and procedures.

Tor: unspecified

https://gitlab.torproject.org/legacy/trac/-/issues/2665
Create a dirauth DoS response procedure (Mike Perry)

We have the technical ability right now to rapidly rotate up to n-1 of the directory authorities to new IP addresses and new intermediate keys, simply by updating the torrc files of the dirauths. So long as at least one directory authority remains listening on its old IP address and is aware of the other directory authorities' new locations, it should still be possible to both produce a consensus and distribute it to new clients.
We should clearly document this procedure so we can execute it quickly if a majority of the Tor directory authorities fall victim to a DoS or compromise.
We should also consider altering client bundles to ship with a reduced consensus or descriptor set of ultra-high-uptime directory mirrors, so that in the future we can rotate all n directory authorities without issue.

Tor: unspecified

https://gitlab.torproject.org/legacy/trac/-/issues/2693
Design and implement improved algorithm for choosing consensus method (Nick Mathewson)

Our current algorithm for picking a consensus method is, "Pick the highest method supported by more than 2/3 of the authorities currently voting." This can sometimes result in an insufficiently signed consensus. Instead, it should be something like, "Pick the highest method supported by more than 2/3 of the authorities currently voting, UNLESS the number of authorities supporting that method is less than the threshold needed to sign a valid consensus. In that case, pick the highest method supported by enough authorities to sign a valid consensus."
Alternatively, the algorithm could be something like, "Pick the highest method supported by enough authorities to sign a valid consensus", which I believe is mathematically identical to the above (more obviously safe) formulation.
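The alternative rule is easy to state in code. A sketch, assuming `threshold` is the number of signatures a valid consensus needs (illustrative, not the actual dirvote implementation):

```python
def choose_consensus_method(supported, threshold):
    """supported: one set of method numbers per currently voting
    authority. Pick the highest method backed by at least `threshold`
    authorities, so the result can always be validly signed."""
    counts = {}
    for methods in supported:
        for m in methods:
            counts[m] = counts.get(m, 0) + 1
    viable = [m for m, count in counts.items() if count >= threshold]
    return max(viable) if viable else None

# Five voters: method 3 has only 2 supporters, method 2 has 4, method 1
# has all 5.
votes = [{1, 2, 3}, {1, 2, 3}, {1, 2}, {1, 2}, {1}]
print(choose_consensus_method(votes, threshold=3))  # 2
print(choose_consensus_method(votes, threshold=5))  # 1
```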
This change would make some attacks harder for a hostile authority, and some attacks easier. It needs a design proposal and some analysis.

Tor: unspecified

https://gitlab.torproject.org/legacy/trac/-/issues/2715
Is rephist-calculated uptime the right metric for HSDir assignment? (Roger Dingledine)

In #2709 we changed the HSDir flag to be based on each authority's opinion of the relay's uptime, rather than the relay's own opinion of its uptime.
Nick then asked if perhaps WFU would be a better measure. We should consider if there are smarter parameters to consider.
See also #2714.

Tor: unspecified

https://gitlab.torproject.org/legacy/trac/-/issues/3023
Tor directory authorities should not act as regular relays/hsdirs (Sebastian Hahn)

In the past, it made sense to use directory authorities for all other network functions too, because they provided a significant contribution to the network's available bandwidth. Now that this is no longer the case, and we're starting to see more and more bugs where the dirauths also act as relays, we should change that so the dirauths can focus on providing a consensus and bootstrapping functionality.

Tor: unspecified

https://gitlab.torproject.org/legacy/trac/-/issues/3029
We should save received documents before parsing them (Nick Mathewson)

We should have an option to make Tor save every document it receives from the network before it tries to parse it. That way, if we crash while we're handling the document, we can know what crashed us.
Also, everything that stores an unparseable/unreadable thingy should be able to save more than one of them.

Tor: unspecified

https://gitlab.torproject.org/legacy/trac/-/issues/3241
Seeing lots of "crypto error while reading public key from string" on DA (Linus Nordberg, linus@torproject.org)

I have about 200 of these (in 20 hours) on my DA:
May 18 21:06:05.183 [warn] crypto error while reading public key from string: too long (in asn1 encoding routines:ASN1_get_object)
May 18 21:06:05.183 [warn] crypto error while reading public key from string: bad object header (in asn1 encoding routines:ASN1_CHECK_TLEN)
May 18 21:06:05.183 [warn] crypto error while reading public key from string: nested asn1 error (in asn1 encoding routines:ASN1_D2I_EX_PRIMITIVE)
May 18 21:06:05.183 [warn] crypto error while reading public key from string: nested asn1 error (in asn1 encoding routines:ASN1_TEMPLATE_NOEXP_D2I)
May 18 21:06:05.183 [warn] crypto error while reading public key from string: ASN1 lib (in PEM routines:PEM_ASN1_read_bio)
May 18 21:06:05.183 [warn] parse error: Couldn't parse public key.
May 18 21:06:05.183 [warn] Error tokenizing router descriptor.
May 18 21:06:05.183 [warn] Error reading extra-info: signature does not match.

Tor: unspecified

https://gitlab.torproject.org/legacy/trac/-/issues/4363
Dirauths should save a copy of a consensus that didn't get enough signatures (Sebastian Hahn)

Basically, right now when a dirauth doesn't get the consensus it generated signed, we don't know what kind of consensus that dirauth wanted, because it isn't valid (not enough signatures). We could save a copy so we can investigate.

Tor: unspecified

https://gitlab.torproject.org/legacy/trac/-/issues/4477
Relays that are not directory authorities shouldn't load the approved-routers file (Linus Nordberg)

dirserv_load_fingerprint_file() is called from do_hup() and from init_keys().
In do_hup() it's called if
```
authdir_mode_handles_descs(options, -1) != 0
```
In init_keys() it's called if
```
authdir_mode(options) != 0
```
This is inconsistent, and at least one of them is wrong. I'm not quite
sure exactly who needs the fingerprints.

Tor: unspecified

https://gitlab.torproject.org/legacy/trac/-/issues/4539
Make dir auths write to disk digests that don't match (Linus Nordberg)

maatuska told me this the other day:
```
Nov 05 12:55:02.739 [warn] Unable to store signatures posted by 128.31.0.34: Mismatched digest.
```
And Sebastian had the idea that we should teach directory authorities to save mismatched digests to disk so that we can investigate them.
But before that, there was this log entry:
```
Nov 05 12:55:02.737 [warn] http status 400 ("Mismatched digest.") response after uploading signatures to dirserver '128.31.0.34:9131'. Please correct.
```
This makes me think that this might not be some local trouble on
maatuska but perhaps related to the communication between the
authorities. Broken TCP connection perhaps?
Adding this option should be easy enough for it to be worth it, even if
we'll only find half a digest there or something, so I say let's do it.
BTW, #1890 saw quite a few mismatched digests too.

Tor: unspecified

https://gitlab.torproject.org/legacy/trac/-/issues/4581
Dir auths should defend themselves from too many begindir requests per address (Roger Dingledine)

#4580 would not have been so bad if we'd had a "you already sent me 5 begindir cells and I haven't even learned what you wanted to request on them yet. I am going to refuse the sixth one." feature.
Alas, the bug causes us to make requests over time, and that will cause us to have multiple OR conns open, so the defense cannot simply be "look at how many other streams we have open on this circuit". I guess some sort of map from IP address to count would do it?
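The per-address map could be as simple as this sketch (hypothetical names; the cap of 5 echoes the example in the ticket):

```python
from collections import defaultdict

class BegindirLimiter:
    """Refuse a new begindir cell from an address that already has too
    many begindir cells open with no request seen on them yet."""

    def __init__(self, cap=5):
        self.cap = cap
        self.pending = defaultdict(int)  # address -> outstanding cells

    def allow_begindir(self, addr):
        if self.pending[addr] >= self.cap:
            return False
        self.pending[addr] += 1
        return True

    def request_received(self, addr):
        # Once we learn what the client wanted, free up a slot.
        if self.pending[addr] > 0:
            self.pending[addr] -= 1

limiter = BegindirLimiter()
results = [limiter.allow_begindir("198.51.100.7") for _ in range(6)]
print(results)  # five accepted, the sixth refused
```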
I put this as an 0.2.2 milestone, but if the patch is complex I'll probably not be excited about backporting it.

Tor: unspecified

https://gitlab.torproject.org/legacy/trac/-/issues/4626
Very high cpu usage for gabelmoo running with renegotiation-limiting code (Sebastian Hahn)

Hey there,
gabelmoo is seeing almost full cpu utilization lately. I'm running openssl1 and libevent master. Traffic is at around 200KB/s, so not very much. Here's a profile for everything over 0.5%:
```
samples % image name app name symbol name
397332 26.8226 libc.so.6 libc.so.6 /home/karsten/debug/libc.so.6
210739 14.2263 libpthread.so.0 libpthread.so.0 __pthread_mutex_unlock_usercnt
157849 10.6559 libpthread.so.0 libpthread.so.0 pthread_mutex_lock
62969 4.2508 tor tor connection_handle_write
56998 3.8477 tor tor _openssl_locking_cb
44452 3.0008 tor tor assert_connection_ok
38146 2.5751 tor tor connection_bucket_write_limit
37917 2.5597 [vdso] (tgid:17627 range:0x7fffb85ff000-0x7fffb8600000) tor [vdso] (tgid:17627 range:0x7fffb85ff000-0x7fffb8600000)
32683 2.2063 tor tor flush_buf_tls
29224 1.9728 tor tor connection_is_rate_limited
28245 1.9067 tor tor connection_bucket_round_robin
25259 1.7052 tor tor tor_tls_get_error
22309 1.5060 tor tor tor_tls_write
21562 1.4556 tor tor assert_buf_ok
20642 1.3935 tor tor get_options_mutable
19521 1.3178 tor tor approx_time
19272 1.3010 tor tor _check_no_tls_errors
19108 1.2899 tor tor conn_write_callback
18312 1.2362 tor tor tor_addr_is_internal
14932 1.0080 tor tor tor_tls_get_forced_write_size
14237 0.9611 tor tor tor_gettimeofday_cache_clear
12501 0.8439 librt.so.1 librt.so.1 /home/karsten/debug/librt.so.1
11918 0.8045 tor tor tor_mutex_acquire
11907 0.8038 tor tor tor_mutex_release
11376 0.7680 tor tor connection_bucket_refill
9770 0.6595 tor tor connection_is_listener
9582 0.6468 tor tor connection_is_reading
9493 0.6408 tor tor tor_tls_state_changed_callback
9087 0.6134 tor tor connection_is_writing
8689 0.5866 tor tor TO_OR_CONN
7890 0.5326 tor tor connection_state_is_connecting
```

Tor: unspecified
George Kadianakis

https://gitlab.torproject.org/legacy/trac/-/issues/4631
Idea to make consensus voting more resistant (Sebastian Hahn)

This is an idea for how to improve the current situation, where sometimes a directory authority is slow to get its vote out to the other dirauths, and so the dirauths don't all have the same sets of votes. To simplify, I'm illustrating with an example of three dirauths:
at :50, all dirauths make their vote and start uploading. auth1 and auth2 get their vote to all auths, but auth3 doesn't. it cannot publish a vote to auth1 at all, and it takes more than 2.5 minutes to publish its vote to auth2. at :52:30, all auths try fetching the votes they're missing from the other auths, so auth1 asks auth2 for auth3's vote, and auth2 asks auth1 for auth3's vote. auth3 asks nobody, and nobody asks auth3. At this point, neither auth1 nor auth2 have auth3's vote. auth3 now (at, for example, :53:30) succeeds publishing to auth2, so auth1 now has a vote from auth1 and auth2, auth2 and auth3 have a vote from auth1, auth2, and auth3. At :55 the auths try to make a consensus, but auth1 will end up with a different consensus than auth2 and auth3.
My idea to make this less of a problem would be that only for two minutes will we accept a vote that gets pushed to us; anything we get later than that is considered "too late" and will be dropped. At :52:30, we still go ahead and try to fetch all votes from all the other authorities, and if they have a vote we will accept it. We repeat that fetching of all votes that we don't have at :53:00, :53:30, :54:00 and :54:30. That way, a delayed publication of the original vote will not cause this kind of split, where the dirauths have different opinions on who has voted; only the dirauth that took more than 2 minutes to publish its vote to any of the other dirauths will be affected. There's still a race condition here, which is when a dirauth (within two minutes) only publishes to one other dirauth, and then that dirauth gets so slow it cannot get the vote to any of the other dirauths. But since it was fast enough to get the vote out the first time, hopefully that's rather rare.
Does this all sound viable? Am I overlooking something?
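To make the proposed schedule concrete, here is a toy model of it (times in seconds after :50; an illustrative sketch of the proposal, not Tor code):

```python
PUSH_DEADLINE = 120                      # accept pushed votes until :52
FETCH_TIMES = (150, 180, 210, 240, 270)  # :52:30 through :54:30

def vote_counted(pushed_at=None, peer_has_it_at=None):
    """A vote counts if it was pushed to us before the deadline, or if
    some authority that already holds it can serve one of our later
    fetch rounds."""
    if pushed_at is not None and pushed_at <= PUSH_DEADLINE:
        return True
    if peer_has_it_at is not None:
        return any(t >= peer_has_it_at for t in FETCH_TIMES)
    return False

# auth3's push at :53:30 (210s) is dropped, but if auth2 got the vote
# in time, everyone else can still pick it up at a later fetch round:
print(vote_counted(pushed_at=210))                      # False
print(vote_counted(pushed_at=210, peer_has_it_at=110))  # True
```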
Update: This bug was introduced in Tor 0.2.0.5-alpha, with the v3 authority voting code.

Tor: 0.4.4.x-final
teor