Trac issues: https://gitlab.torproject.org/legacy/trac/-/issues

**#3199 Refactor periodic events**
https://gitlab.torproject.org/legacy/trac/-/issues/3199
Reported by Nick Mathewson. Updated 2020-06-13. Milestone: Tor: 0.2.8.x-final.

Right now we invoke a truly huge array of things from second_elapsed_callback() and run_scheduled_events(). There are at least 23 separate static time_t values that we compare against every time a second elapses.
This is a really goofy way to handle periodic events. Let's refactor this to use libevent's timing code instead. In addition, we could make the timers first-class, so as to allow better introspection of Tor's status with respect to each timer.
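As a sketch of that direction: a first-class periodic event built on libevent 2's persistent timers. The `periodic_event_t` name and fields here are assumptions for illustration, not the eventual Tor API.

```c
#include <event2/event.h>
#include <time.h>

typedef struct periodic_event_t {
  struct event *ev;         /* libevent timer; EV_PERSIST makes it repeat */
  struct timeval interval;  /* how often to fire */
  void (*fn)(time_t now);   /* the work to perform */
  const char *name;         /* for logging/introspection */
} periodic_event_t;

static void
periodic_event_cb(evutil_socket_t fd, short what, void *arg)
{
  (void) fd; (void) what;
  periodic_event_t *pe = arg;
  pe->fn(time(NULL));
}

static void
periodic_event_launch(struct event_base *base, periodic_event_t *pe)
{
  /* fd = -1: pure timer. With EV_PERSIST, event_add()'s timeout re-arms
   * automatically after each callback. */
  pe->ev = event_new(base, -1, EV_PERSIST, periodic_event_cb, pe);
  event_add(pe->ev, &pe->interval);
}
```

Keeping a list of these structs would give each timer a name and interval to report, instead of 23 anonymous time_t comparisons per second.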
**#6027 Directory authorities on IPv6**
https://gitlab.torproject.org/legacy/trac/-/issues/6027
Reported by Linus Nordberg. Updated 2020-06-13. Milestone: Tor: 0.2.8.x-final.

Directory authorities don't know enough about IPv6. There are a lot of issues here, two of which are mentioned in #4847:
- init_keys()
- dirserv_generate_networkstatus_vote_obj()

**#6800 An attacker can flood network with new relays to make us stop using bwauth weights**
https://gitlab.torproject.org/legacy/trac/-/issues/6800
Reported by Roger Dingledine. Updated 2020-06-13. Milestone: Tor: 0.2.8.x-final.

The bwauths don't write out any opinions if they have stats on less than some fraction (60%) of the relays.
So an attacker could induce this result by signing up n new relays to go with the n current relays, causing all the bwauths to stop outputting opinions.
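To make the arithmetic concrete, here is a sketch of the cutoff test with the 60% figure from this ticket. `bwauth_should_publish` is a made-up name; the real check lives in the bwauth code, not in tor itself.

```c
#include <stdbool.h>

/* Hypothetical version of the threshold described above: publish an
 * opinions file only if at least 60% of known relays are measured. */
static bool
bwauth_should_publish(unsigned n_measured, unsigned n_total)
{
  return n_measured * 100 >= n_total * 60;
}
/* With n established (measured) relays plus n new attacker (unmeasured)
 * relays, the measured fraction is n / (2n) = 50% < 60%, so
 * bwauth_should_publish(n, 2*n) is false for every n > 0. */
```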
In the current case that means we default to using the values in the relay descriptors. Inefficient but not so bad.
In the future case (once we merge #2286), it means we default to capping all new relays to a low number until the bwauths catch up again.
Authorities are willing to use the last published opinions file for 3 days before they give up on it.
Is this a stable enough defense? During the flood the already-established relays would continue to have the most recent bwauth weights, and the bwauths have 3 days to catch up. Sounds plausible, but I'd like a few more opinions.

**#8195 tor and capabilities**
https://gitlab.torproject.org/legacy/trac/-/issues/8195
Reported by weasel (Peter Palfrader). Updated 2020-06-13. Milestone: Tor: 0.2.8.x-final.

We should figure out what it takes to keep the CAP_NET_BIND_SERVICE capability when changing the user away from root, so that we can re-open low listening ports later again.
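For reference, the usual Linux recipe combines prctl(PR_SET_KEEPCAPS) with libcap. A minimal sketch (error handling abbreviated, link with -lcap), not necessarily the code tor would ship:

```c
#include <sys/prctl.h>
#include <sys/capability.h>
#include <sys/types.h>
#include <unistd.h>

static int
drop_to_user_keep_bind_cap(uid_t uid, gid_t gid)
{
  cap_value_t keep = CAP_NET_BIND_SERVICE;
  /* Keep permitted capabilities across the UID change. */
  if (prctl(PR_SET_KEEPCAPS, 1, 0, 0, 0) < 0)
    return -1;
  if (setgid(gid) < 0 || setuid(uid) < 0)
    return -1;
  /* setuid() cleared the *effective* set; re-raise just the one cap we
   * need (and implicitly drop all other permitted caps). */
  cap_t caps = cap_init();
  cap_set_flag(caps, CAP_PERMITTED, 1, &keep, CAP_SET);
  cap_set_flag(caps, CAP_EFFECTIVE, 1, &keep, CAP_SET);
  int rc = cap_set_proc(caps);
  cap_free(caps);
  return rc;
}
```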
**#9971 for_discovery option in add_an_entry_guard() is confusingly named**
https://gitlab.torproject.org/legacy/trac/-/issues/9971
Reported by Roger Dingledine. Updated 2020-06-13. Milestone: Tor: 0.2.8.x-final.

In #9946 I added a new argument "for_discovery" to add_an_entry_guard(). Nick prefers "provisional" or "probationary".
In parallel, I think we should probably rename the made_contact field in entry_guard_t to record *why* we're remembering that we've made contact, rather than simply that we have.

And lastly, we should do something about the godawful number of int arguments that add_an_entry_guard() now takes.
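One conventional fix for the argument pile-up is a flags bitfield. A sketch with invented flag names (entry_guard_t and node_t are Tor's existing types; the exact parameter list here is illustrative, not the current signature):

```c
/* Hypothetical flags replacing the separate int arguments. */
#define EG_PROVISIONAL   (1u << 0)  /* was "for_discovery" */
#define EG_RESET_STATUS  (1u << 1)
#define EG_PREPEND       (1u << 2)

entry_guard_t *add_an_entry_guard(const node_t *chosen, unsigned flags);

/* Call sites then read as intent, e.g.:
 *   add_an_entry_guard(node, EG_PROVISIONAL | EG_PREPEND);
 * instead of a row of bare 0s and 1s. */
```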
**#12538 Make all relays automatically be dir caches**
https://gitlab.torproject.org/legacy/trac/-/issues/12538
Reported by cypherpunks. Updated 2020-06-13. Milestone: Tor: 0.2.8.x-final.

During the entry guard discussions, we decided that it's a good idea to make all relays directory servers. We mainly needed the entry guards to be directories, but it seems easier and more elegant to just turn all relays into directory servers.
This is easier nowadays than in the past because `BEGIN_DIR` makes it so that directory servers don't need to have a separate DirPort open. (However, maybe relays get the `V2Dir` flag only if they have a DirPort open?)
Also, since all relays have all the directory documents anyway, it doesn't further bloat their disk to become directory servers.

**#13192 Collect aggregate stats of total hidden service usage vs total exit usage in Tor network**
https://gitlab.torproject.org/legacy/trac/-/issues/13192
Reported by Roger Dingledine. Assigned to teor. Updated 2020-06-13. Milestone: Tor: 0.2.8.x-final.

I'd like to know what fraction of total Tor usage is hidden service usage, so we have a sense of whether hidden services matter now, and so we can track trends into the future.
For example, it would have been nice in August 2013 to have some metric of hidden service fraction that told us the spike in load and users had to do with hidden services.
Such statistics would also be useful to counter (or who knows, confirm) the analysts who make statements like "97% of Tor use is silk road".

**#13206 Write up walkthrough of control port events when accessing a hidden service**
https://gitlab.torproject.org/legacy/trac/-/issues/13206
Reported by Roger Dingledine. Updated 2020-06-13. Milestone: Tor: 0.2.8.x-final.

I've been helping some other SponsorR folks get up to speed on reading controller events when accessing a pile of hidden services. In theory the controller events should help you understand how far we got at reaching a hidden service when the connection fails. In practice it's a bit overwhelming.
I sat down in person to walk through the control port output, but I should write it up as e.g. a wiki file so my explanation is usable by more people later too.
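For context, the kind of session such a walkthrough would explain: attach to the ControlPort, authenticate, and subscribe to the relevant events (event names per control-spec.txt). The closing lines below summarize the asynchronous replies rather than reproduce them verbatim:

```
AUTHENTICATE
250 OK
SETEVENTS CIRC STREAM HS_DESC
250 OK
  ... tor now pushes asynchronous 650-prefixed lines (CIRC events for
  circuit building, HS_DESC events for descriptor fetches, STREAM events
  for the application connection) as the client works toward rendezvous ...
```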
**#13339 Prop140: Complete Consensus diffs / Merge GSoC project**
https://gitlab.torproject.org/legacy/trac/-/issues/13339
Reported by mvdan. Assigned to Nick Mathewson. Updated 2020-06-13. Milestone: Tor: 0.3.1.x-final.

Google Summer of Code finished over a month ago, and during this time I've been tidying up my code a bit and readying it for the merge. You will find it on github:
https://github.com/mvdan/tor
This ticket is for the sole purpose of following the merge process and its progress. But as always I'm on IRC and mail if you want to contact me directly.
I just rebased against master this morning. Nick and Sebastian have been reviewing my code over the summer, but of course more sets of eyes are needed.
The test coverage for the diff generation and application is fine (see test_consdiff.c), but there aren't any tests for the stuff I wrote to wire it into serving and fetching consensus diffs. Not really sure how to go about that, can't really promise I'd have the time to dive into it.
And regarding commit messages and changelog entries, I pretty much went with my instinct. Chances are they can be improved - the commit messages for future reference and the changelog entries for future release changelogs - so criticism is welcome.

**#14881 incorrect defaults when producing bandwidth-weights line in directory footer**
https://gitlab.torproject.org/legacy/trac/-/issues/14881
Reported by Rob Jansen. Assigned to pastly. Updated 2020-07-31. Milestone: Tor: 0.3.0.x-final.

When running Tor in small testing networks, much of the time the bandwidth-weights line does not appear in the directory-footer in the consensus files. The log file shows messages like this:
```
Consensus with empty bandwidth: G=852123 M=0 E=0 D=569253 T=1421376
```
The code that counts up these bandwidth values is in `networkstatus_compute_consensus` in `dirvote.c`, specifically around [line 1590 in Tor master as of now](https://gitweb.torproject.org/tor.git/tree/src/or/dirvote.c#n1590).
The code that prints this error is in `networkstatus_compute_bw_weights_v10` in `dirvote.c`.
I believe that it is an error not to produce bandwidth-weights when we have no knowledge of bandwidth for a given position. For example, if D is zero because there are no nodes that serve as exits+guards, shouldn't we just adjust the weights accordingly? We may still have functional guards and functional exits even though we have no node that serves as both.
Since this is for weighting purposes, why are T, D, E, G, and M all initialized to 0 instead of 1? I think the default weight should be 1, meaning all positions are selected equally, and any bandwidth above 1 should be used to increase the weight. Does this sound right?
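A minimal sketch of the initialize-to-1 option, with illustrative types and names (the real accumulation lives in `networkstatus_compute_consensus` in `dirvote.c`):

```c
#include <stddef.h>
#include <stdint.h>

typedef struct { int is_exit, is_guard; int64_t bandwidth; } relay_info_t;

static int64_t
compute_totals(const relay_info_t *relays, size_t n,
               int64_t *G, int64_t *M, int64_t *E, int64_t *D)
{
  *G = *M = *E = *D = 1;  /* proposed default of 1 instead of 0 */
  for (size_t i = 0; i < n; ++i) {
    const relay_info_t *r = &relays[i];
    if (r->is_exit && r->is_guard) *D += r->bandwidth;
    else if (r->is_exit)           *E += r->bandwidth;
    else if (r->is_guard)          *G += r->bandwidth;
    else                           *M += r->bandwidth;
  }
  /* T can no longer be zero, so the bandwidth-weights line can always
   * be computed even when a position is empty. */
  return *G + *M + *E + *D;
}
```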
If that is not desired, then I request that we at least initialize these values to one for testing networks. One patch is attached for each of these options.

**#15545 Document TOR_PT_EXIT_ON_STDIN_CLOSE in the pt-spec**
https://gitlab.torproject.org/legacy/trac/-/issues/15545
Reported by Yawning Angel. Assigned to Yawning Angel. Updated 2021-11-15. Milestone: Tor: 0.2.8.x-final.

This is the ticket for the documentation side of the `TOR_PT_EXIT_ON_STDIN_CLOSE` and associated behavior that was implemented as part of #15435.
`pt-spec.txt` needs to document that if the env var is set to `1`, then PTs should assume that tor will keep stdin open, and treat stdin being closed the same as a `SIGTERM` (graceful shutdown immediately).
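A sketch of the transport-side behavior being specified, assuming a PT written in C (the cleanup step is a placeholder):

```c
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* If tor set TOR_PT_EXIT_ON_STDIN_CLOSE=1, it promises to hold our stdin
 * open for our whole lifetime, so EOF on stdin means tor has gone away. */
static void
exit_when_stdin_closes(void)
{
  const char *v = getenv("TOR_PT_EXIT_ON_STDIN_CLOSE");
  if (!v || strcmp(v, "1"))
    return;             /* behavior not requested; older tor */
  char buf[64];
  while (read(STDIN_FILENO, buf, sizeof(buf)) > 0)
    ;                   /* ignore any input; we only care about EOF */
  /* ...run the same cleanup a SIGTERM handler would, then: */
  exit(0);
}
```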
**#16382 man page has misleading info about the min bw rate**
https://gitlab.torproject.org/legacy/trac/-/issues/16382
Reported by Nima Fatemi. Updated 2020-06-13. Milestone: Tor: 0.2.8.x-final.

```
GENERAL OPTIONS
BandwidthRate N bytes|KBytes|MBytes|GBytes|KBits|MBits|GBits
A token bucket limits the average incoming bandwidth usage on this node to the
specified number of bytes per second, and the average outgoing bandwidth usage to that
same value. If you want to run a relay in the public network, this needs to be at the
very least 30 KBytes (that is, 30720 bytes). (Default: 1 GByte)
```
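Since the option is defined in terms of a token bucket, here is a generic sketch of that mechanism (not tor's actual connection-bucket code; the burst field corresponds to the companion BandwidthBurst option):

```c
#include <stddef.h>
#include <time.h>

typedef struct token_bucket_t {
  long rate;          /* tokens (bytes) added per second: BandwidthRate */
  long burst;         /* bucket capacity: BandwidthBurst */
  long tokens;        /* tokens currently available */
  time_t last_refill;
} token_bucket_t;

/* Refill at `rate` bytes/sec, capped at `burst`. */
static void
bucket_refill(token_bucket_t *b, time_t now)
{
  long elapsed = (long)(now - b->last_refill);
  if (elapsed <= 0)
    return;
  b->tokens += b->rate * elapsed;
  if (b->tokens > b->burst)
    b->tokens = b->burst;
  b->last_refill = now;
}

/* A write of `len` bytes is allowed only while tokens remain, so the
 * long-run average throughput is bounded by `rate`. */
static int
bucket_can_spend(const token_bucket_t *b, size_t len)
{
  return b->tokens >= (long)len;
}
```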
The number has been lifted to 250KBytes in our [online documentation](https://www.torproject.org/docs/tor-relay-debian) and this one should probably get fixed too. Anything below 250KBytes (each direction) is probably hurting the network.

**#16651 Tor fails to build on OpenBSD 5.8 due to libevent config options**
https://gitlab.torproject.org/legacy/trac/-/issues/16651
Reported by teor. Updated 2020-06-16. Milestone: Tor: 0.2.7.x-final.

Can we apply the patch in this thread?
http://lists.nycbug.org/pipermail/tor-bsd/2015-July/000328.html
**#16861 Pad Tor connections to collapse netflow records**
https://gitlab.torproject.org/legacy/trac/-/issues/16861
Reported by Mike Perry. Assigned to Mike Perry. Updated 2020-06-13. Milestone: Tor: 0.3.1.x-final.

The collection of traffic statistics from routers is quite common. Recently, there was a minor scandal when a University network administrator upstream of UtahStateExits (and UtahStateMeekBridge) posted to boingboing that they had collected over 360G of netflow records:
https://lists.torproject.org/pipermail/tor-relays/2015-August/007575.html
Unfortunately, the comment has since disappeared, but the tor-relays archives preserve it.
This interested me, so I asked some questions about the defaults and record resolution, and did some additional searching. It turns out that Cisco IOS routers have an "inactive flow timeout" that by default is 15 seconds, and it can't be set lower than 10 seconds. What this timeout does is cause the router to emit a new netflow "record" for a connection that is idle for that long, even if it stays open. Several other routers have similar settings. The Fortinet default is also 15 seconds for this. For Juniper, it is 30 seconds (but Juniper routers can set it as low as 4 seconds).
With this information, I decided to write a patch that sends padding on a client's Tor connection bidirectionally at a random interval that we can control from the consensus, with a default of 4s-14s. It only sends padding if the connection is idle. It does not pad connections that are used only for tunneled directory traffic.
It also gives us the ability to control how long we keep said connections open. Since the default netflow settings for Cisco also generate a record for active flows after 30 minutes, it doesn't make a whole lot of sense to pad beyond that point.
This should mean that the total overhead for this defense is very low, especially since we have recently moved to only one guard. Well under 50 bytes/second for at most 30 minutes.
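A rough sketch of the mechanism (illustrative names and helpers, not the actual patch): each client connection runs a timer with a random 4-14s period, and padding is written only when the connection has been idle, so upstream routers never observe an idle gap long enough to split the flow into a new record.

```c
#include <stdlib.h>
#include <time.h>

#define PAD_LOW_MS   4000   /* assumed consensus-controlled bounds */
#define PAD_HIGH_MS 14000

typedef struct conn_t {    /* stand-in for tor's connection object */
  time_t last_activity;    /* last real read or write */
  int dir_only;            /* used only for tunneled dir traffic? */
} conn_t;

void send_padding_cell(conn_t *conn);       /* hypothetical helper */
void reschedule_ms(conn_t *conn, long ms);  /* hypothetical helper */

/* Uniform draw in [PAD_LOW_MS, PAD_HIGH_MS]; real code would use tor's
 * CSPRNG rather than rand(). */
static long
next_padding_delay_ms(void)
{
  return PAD_LOW_MS + rand() % (PAD_HIGH_MS - PAD_LOW_MS + 1);
}

/* Per-connection timer callback: pad only if idle for >= 1s, skip
 * dir-only connections, then re-arm with a fresh random delay. */
static void
padding_timeout_cb(conn_t *conn, time_t now)
{
  if (now > conn->last_activity && !conn->dir_only)
    send_padding_cell(conn);
  reschedule_ms(conn, next_padding_delay_ms());
}
```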
I still have a few questions, though, which is why I put so many people in Cc on this ticket. I will put my questions in the first comment.

**#17003 Improve test coverage on src/or/directory.c**
https://gitlab.torproject.org/legacy/trac/-/issues/17003
Reported by Trac. Updated 2020-06-13. Milestone: Tor: 0.2.8.x-final.

Related branch on github (https://github.com/twstrike/tor/tree/directory-tests)

I believe it's related to #16805

**Trac**:
**Username**: rjunior
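For orientation, new coverage of this kind usually lands as tinytest cases in src/test/ (compare test_consdiff.c). A minimal sketch of the shape, with a placeholder assertion rather than a real directory.c call:

```c
#include "test.h"   /* tor's tinytest wrappers: tt_int_op, OP_EQ, ... */

static void
test_dir_example(void *arg)
{
  (void) arg;
  int r = 2 + 2;            /* call the directory.c helper under test */
  tt_int_op(r, OP_EQ, 4);   /* jumps to done: on failure */
 done:
  ;
}

struct testcase_t dir_tests[] = {
  { "example", test_dir_example, 0, NULL, NULL },
  END_OF_TESTCASES
};
```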