Trac issues
https://gitlab.torproject.org/legacy/trac/-/issues
Updated: 2020-06-13T15:05:15Z

https://gitlab.torproject.org/legacy/trac/-/issues/4244
Tor changes default value of DirReqStatistics, then wants to SAVECONF the new default
Reported by: Robert Ransom
Updated: 2020-06-13T15:05:15Z

#4237 was caused by [ticket:4237#comment:2 an underlying Tor bug]. We should fix it, and someday we should redesign the configuration-handling code to make this class of bugs go away.

Milestone: Tor 0.2.6.x-final

https://gitlab.torproject.org/legacy/trac/-/issues/4600
Spec doesn't mention password quotes
Reported by: Damian Johnson
Updated: 2020-06-13T14:43:31Z

Section 5.1 of the control-spec [1] provides a nice description of authentication, but doesn't mention how to handle quotes in the password. Unsurprisingly, controllers are expected to provide escaped quotes...
<pre>
atagar@morrigan:~$ tor --hash-password "this has a \" in it"
16:E6DC1BCEDF55EDCA607ADDB8781795772E42AAC75F7B7630B6227232E4
atagar@morrigan:~$ telnet localhost 9051
Connected to localhost.
AUTHENTICATE "this has a \" in it"
250 OK
</pre>
I'm gonna guess that only quotes should be escaped by controllers.
I've been finding it a little frustrating to figure out when and what escaping is expected, so I'm generally working from the assumption that I should ignore escaping unless it's specifically called out by the spec (as it is for authentication cookie paths, though that wasn't enough to work from alone [2]).
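For what it's worth, here's the escaping I'd expect a controller to apply (a sketch; the doubled-backslash rule is my assumption from the spec's quoted-string syntax, beyond what the experiment above demonstrates):

```python
def quote_password(password):
    # Wrap the password in double quotes, escaping backslashes first
    # and then quotes (assumed QuotedString-style rules; the backslash
    # handling is a guess beyond the experiment above).
    escaped = password.replace("\\", "\\\\").replace('"', '\\"')
    return '"' + escaped + '"'
```

Applied to the example above, quote_password('this has a " in it') reproduces the AUTHENTICATE argument that worked in the transcript.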
Cheers! -Damian
[1] https://gitweb.torproject.org/torspec.git/blob/HEAD:/control-spec.txt#l1924
[2] https://gitweb.torproject.org/stem.git/blob/HEAD:/stem/socket.py#l54

Assignee: Nick Mathewson

https://gitlab.torproject.org/legacy/trac/-/issues/5462
Clients should alert the user if many guards are unreachable
Reported by: Mike Perry
Updated: 2020-06-13T14:18:30Z

If the user is behind a restrictive firewall, in a censored location, or is otherwise restricted in the number of guards they can use, the Tor client should inform them of this fact.
Depending upon the rate of guard failure, tor should emit either a notice or a warn.
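Something like the following decision rule is what I have in mind (a hypothetical sketch; the function name and the 0.5/0.9 thresholds are invented for illustration):

```python
def guard_alert_severity(failed_guards, total_guards):
    # Hypothetical rule: warn when nearly all guards are failing,
    # notice when a majority are, and stay quiet otherwise.
    rate = failed_guards / total_guards
    if rate >= 0.9:
        return "warn"
    if rate >= 0.5:
        return "notice"
    return None  # not enough failures to bother the user
```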
We should probably also perform a quick check to see if all guards are on a small subset of non-default ports, or perhaps just 80 or 443.

Milestone: Tor 0.3.0.x-final

https://gitlab.torproject.org/legacy/trac/-/issues/6027
Directory authorities on IPv6
Reported by: Linus Nordberg <linus@torproject.org>
Updated: 2020-06-13T14:29:48Z

Directory authorities don't know enough about IPv6. There are a lot of issues here, two of which are mentioned in #4847:
- init_keys()
- dirserv_generate_networkstatus_vote_obj()

Milestone: Tor 0.2.8.x-final

https://gitlab.torproject.org/legacy/trac/-/issues/6088
Gather data about possible transition to 2048bit RSA/DHE
Reported by: Jacob Appelbaum
Updated: 2020-06-13T14:40:17Z

While prop 198 and others cover some of the crypto changes we need to make, I don't think they will be made quickly enough. I propose that we jump to 2048-bit RSA and 2048-bit DHE as soon as possible. We should do this before 0.2.4.x (which nick says will enable TLS-ECDHE by default), as we have a long way to go before 0.2.4.x is even remotely available.
The first thing is that nick says:
<nickm> I want to know performance impact and fingerprintability.
This ticket should gather data on performance (RSA/DHE/etc) for servers and on the issue of fingerprintability (mitm filter/block/etc) where people use 2048bit DHE.
I've put this as a 0.2.3.x-final milestone but it's likely this will change.

Milestone: Tor 0.2.6.x-final
Assignee: Jacob Appelbaum

https://gitlab.torproject.org/legacy/trac/-/issues/6456
Merge parse_client_transport_line() and parse_server_transport_line()
Reported by: George Kadianakis
Updated: 2020-06-13T14:21:26Z

There is too much code duplication between `parse_client_transport_line()` and `parse_server_transport_line()`. We should probably merge them into one function during 0.2.4.x.

Milestone: Tor 0.2.6.x-final
Assignee: Andrea Shepard

https://gitlab.torproject.org/legacy/trac/-/issues/6852
bridges (especially unpublished ones) should include usage info in their heartbeats
Reported by: Roger Dingledine
Updated: 2020-06-13T14:22:49Z

As part of SponsorJ task 2, we're running some fast unpublished bridges. Since they're unpublished, we have no way to learn how much usage they see. We should at least log usage so the operators can send us the log snippets over time.
As a side benefit, logging will help the bridge operators who don't use Vidalia/arm and wonder whether their bridge is being helpful.

Milestone: Tor 0.2.6.x-final

https://gitlab.torproject.org/legacy/trac/-/issues/6938
Log early log messages to log files
Reported by: shamrock
Updated: 2020-06-13T14:23:05Z

Issue:
When configuring Tor as a bridge that also advertises a DirPort, the resulting warning is displayed on the console, but is not logged to the Tor log file.

Environment:
Debian squeeze amd64, latest patches
Tor version 0.2.4.2-alpha (git-0537dc6364594474)
How to reproduce:
Set "BridgeRelay 1"
Set "DirPort 80"
Start Tor.
The following warning will be displayed on the console:
"Starting tor daemon...<date will be here> [warn] Can't set a DirPort on a bridge relay; disabling DirPort
done."
No corresponding warning is logged to the Tor log file.

Milestone: Tor 0.2.6.x-final

https://gitlab.torproject.org/legacy/trac/-/issues/8239
Hidden services should try harder to reuse their old intro points
Reported by: Roger Dingledine
Updated: 2020-06-13T14:46:24Z

The current hidden service behavior is that when Tor loses its intro circuits, it chooses new intro points and makes new circuits to them -- which means anybody who has the old hidden service descriptor is going to be introducing herself to the wrong intro points.
If our intro circuits close, but it was because our network failed and not because the intro points failed, we should reestablish new intro circuits to the *old* intro points.
Nathan wants this for running hidden services on Orbot, since Orbot users change networks (and thus lose existing circuits) quite often.
I expect the main tricky point to be distinguishing "network failed" from "intro point failed". I wonder if some of our CBT work is reusable here.

Milestone: Tor 0.2.7.x-final

https://gitlab.torproject.org/legacy/trac/-/issues/8243
Getting the HSDir flag should require the Stable flag
Reported by: Roger Dingledine
Updated: 2020-06-13T14:57:52Z

When we invented the HSDir flag, our goal was to only use nodes for storing hidden service descriptors if they're likely enough to be around later. The question was solely around robustness: pick all but the nodes that have a good chance of going away while your hidden service descriptor is valid. We picked "has 25 hours of uptime" as what we hoped was an adequate threshold to stand in for the real question, which is "will likely remain online for the next hour".
But actually, there are security implications here too: an adversary who can control all six hsdir points for a hidden service can censor it (or, less bad, observe how many anonymous people access it).
So we should raise the bar for getting the HSDir flag, to raise the cost to an adversary who tries to Sybil the network in order to control lots of HSDir points.
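To make the Sybil cost concrete (my own back-of-the-envelope, not anything in the code): if an adversary controls a fraction f of the HSDir-flagged relays, and the six responsible HSDirs were chosen roughly uniformly and independently, the chance they cover all six is about f^6.

```python
def p_control_all_hsdirs(f, n_hsdirs=6):
    # Probability an adversary owning a fraction f of HSDir-flagged
    # relays covers all n_hsdirs responsible directories, under the
    # (simplifying) assumption of uniform independent placement.
    return f ** n_hsdirs

# Owning 20% of the HSDir set gives only a 0.2**6 = 0.0064% chance
# per descriptor, so raising the bar mainly matters against large Sybils.
```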
That said, there's a contradiction here: the more restrictive we are about who gets the HSDir flag, the more valuable it becomes to get it. At the one extreme (our current choice), we give it to basically everybody, so you have to get a lot of them before your attack matters. At the other extreme, we could give it to our favorite 20 relays, and if we choose wisely then basically no adversaries will get the HSDir flag. What are the sweet spots in between?
(This ticket is inspired by rpw's upcoming Oakland paper)

Milestone: Tor 0.2.7.x-final

https://gitlab.torproject.org/legacy/trac/-/issues/8402
Tor should help its transport proxy use a proxy, if needed.
Reported by: George Kadianakis
Updated: 2020-06-13T18:34:56Z

If a censored user wants to use both a (normal) proxy and a pluggable transport proxy, Tor should pass the credentials of the (normal) proxy to the pluggable transport proxy.
The proxy chain should look like this:
`Tor (Client) -> Transport Proxy (Client) -> SOCKS/HTTP Proxy -> Internet -> Transport Proxy (Server) -> Tor (Bridge)`
Arturo prepared a related proposal in:
https://lists.torproject.org/pipermail/tor-dev/2012-February/003318.html
We should clean it up, see if anything is missing, and start implementing it in Tor.

Milestone: Tor 0.2.6.x-final

https://gitlab.torproject.org/legacy/trac/-/issues/8546
Make a copy-able connection-config type to limit copy burden of isolation flags, etc
Reported by: Nick Mathewson
Updated: 2020-06-13T14:28:23Z

Right now, an increasingly large number of fields and flags are duplicated between port_cfg_t, listener_connection_t, and (say) entry_connection_t. Every field we add here needs to be added to every one of those types, and needs to be explicitly copied from each to the next during construction time.
It would make this code much more maintainable if there were a type that we just copied from object to object here.

Milestone: Tor 0.2.6.x-final

https://gitlab.torproject.org/legacy/trac/-/issues/8742
Byte history leaks information about local usage/hidden services
Reported by: Trac
Updated: 2020-06-13T14:28:47Z

Not sure if this is related to #516.
When acting as a relay, Tor seems to collect and report on *all* incoming and outgoing bandwidth. This data is then published publicly on Atlas and torstatus, and is available for download.
As an example, if you look at the monthly graph, it's pretty clear this relay become "something more than a relay" around the 7th of April:
https://atlas.torproject.org/#details/85617CE64344948B0BAC23CD4E22245F7F66C1C8
An attacker could use this data to determine if a relay hosts a hidden service (generally more bytes written than read), or if a user was actively browsing/downloading (more bytes read, generally) during a certain period of time. An active attacker could then create a large amount of traffic to a hidden service, perhaps creating a known pattern of high traffic followed by a period of little traffic, then review the byte history again and look for any relays that displayed a difference of read/write similar to the generated traffic. Having narrowed down the candidates, a DDOS of the relay would provide confirmation. Exposing clients would of course be far more difficult, as most probably do not run as a relay.
Possible solutions:
* By default, don't count any traffic to/from a hidden service. Could be enabled optionally in torrc... if someone really wanted it.
* By default, don't count any traffic beginning at tor's SOCKS port. I can't think of any reason someone would want to enable this... but if there is a good argument for it, perhaps provide an option in torrc for this too.
* Most drastically... let a user opt out of reporting byte history completely. I'm guessing this is a "no go", since the stats are needed to help improve network performance.
**Trac**:
**Username**: alphawolf

Milestone: unspecified

https://gitlab.torproject.org/legacy/trac/-/issues/9273
Brainstorm tradeoffs from moving to 2 (or even 1) guards
Reported by: Roger Dingledine
Updated: 2020-06-13T14:30:32Z

There are now many conflicting issues to consider when changing the default number of guards. I'd like to write a proposal suggesting we move to 2 (or even 1), but I don't think I'm ready to write the analysis section yet.
Here's a start:
Pro 1: Reduces chance of using an adversary's guard. This argues for 1, but 2 would still be a lot better. See Tariq's WPES 2012 paper for details.
Pro 2: Reduces impact from guard fingerprinting: if the adversary learns that you have the following n guards, and later sees an anonymous user with the same guards, how likely is it to be you? Said another way, a trio of guards produces a cubic, whereas a duo of guards produces a quadratic. Somebody should do the math to sort out the chance of having all possible trios of guards, followed by the expected uniqueness of a trio. I expect moving to 2 gives the majority of the benefit here.
Con 1: Increases the variance of performance. The more guards you have, the closer to average performance you'll be. Whereas if you have just one guard, your performance will be impacted a lot by that choice. It would seem that we need to raise the bar on getting the Guard flag if we move people to having just one guard.
Con 2: Moving to 1 guard will rule out a Conflux-style design. But 2 guards would still work fine.
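To put rough numbers on Pro 2's cubic-vs-quadratic point (my own sketch; the relay count is hypothetical):

```python
from math import comb

n = 2000  # hypothetical number of Guard-flagged relays

# Fewer possible guard sets means each observed set is shared by more
# clients, so a duo is far less identifying than a trio.
trios = comb(n, 3)    # 1,331,334,000 possible trios
duos = comb(n, 2)     # 1,999,000 possible duos
singles = comb(n, 1)  # 2,000 possible single guards
```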
What did I miss?

Milestone: Tor 0.2.6.x-final

https://gitlab.torproject.org/legacy/trac/-/issues/9286
ordb1 uses milliseconds in its descriptor, spec says it can't
Reported by: Roger Dingledine
Updated: 2020-06-13T14:34:53Z

```
router ordb1 213.246.53.127 8002 0 0
platform Tor 0.2.3.25 on Linux x86_64
opt protocols Link 1 2 Circuit 1
published 2013-07-17 13:38:46.992
```
But dir-spec.txt says
```
"published" YYYY-MM-DD HH:MM:SS NL
[Exactly once]
The time, in UTC, when this descriptor (and its corresponding
extra-info document if any) was generated.
```
It looks like it's violating the spec. Should we (i.e. the directory authorities) have validated and refused the descriptor?
Is it our Tor implementation that does this on a weird edge case, or did somebody mess with something?
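For reference, the spec's format is easy to check mechanically; a sketch of the validation the authorities could have applied (my illustration, not Tor's actual parsing code):

```python
from datetime import datetime

def published_is_valid(value):
    # dir-spec mandates exactly "YYYY-MM-DD HH:MM:SS"; strptime with
    # this format raises ValueError on trailing fractional seconds.
    try:
        datetime.strptime(value, "%Y-%m-%d %H:%M:%S")
        return True
    except ValueError:
        return False

# The descriptor above would be refused:
# published_is_valid("2013-07-17 13:38:46.992") -> False
# published_is_valid("2013-07-17 13:38:46")     -> True
```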
(Noticed because contrib/exitlist can't handle it.)

Milestone: Tor 0.2.6.x-final

https://gitlab.torproject.org/legacy/trac/-/issues/9321
Load balance right when we have higher guard rotation periods
Reported by: Roger Dingledine
Updated: 2020-06-13T16:48:33Z

Here's our plan:
1) Directory authorities need to track how much of the past n months each relay was around and had the Guard flag.
2) They vote a percentage for each relay in their vote, and the consensus has a new keyword on the w line so clients can learn how Guardy each relay has been.
3) Clients change their load balancing algorithm to consider how Guardy you've been, rather than just treating Guard status as binary (#8453).
4) Raise the guard rotation period a lot (#8240).

Milestone: Tor 0.2.6.x-final

https://gitlab.torproject.org/legacy/trac/-/issues/9495
Must we still disable threads on *-*-solaris*?
Reported by: Nick Mathewson
Updated: 2020-06-13T14:43:41Z

Back in 2005, in 8753e7ef6530c14a6d35c477a11ff203008bde50 (svn:r4383), we disabled threading on Solaris, in order to prevent some lockup bug or other. Unfortunately, back in 2005 we weren't so good at tracking bugs, so I can't easily find who reported it or how we diagnosed it.
But this is eight years later. If there was really a platform bug, surely it's gotten better by now?
We could contact one of the two or three operators whose nodes report being "on SunOS", and ask them if their nodes still work after an explicit --enable-threads, I guess.

Milestone: Tor 0.2.6.x-final

https://gitlab.torproject.org/legacy/trac/-/issues/9580
Tor should accept combined pluggable transport names
Reported by: George Kadianakis
Updated: 2020-06-13T18:34:31Z

The plan for #7167 is to have flashproxy understand pluggable transports like "websocket|obfs2", that is, the combination of websocket and obfs2.

The good thing about our plan for #7167 is that it requires no real modifications to little-t-tor. However, in little-t-tor we do some checks on the transport names (in torrc, etc.) and ensure that they are C identifiers -- but "websocket|obfs2" is not a C identifier.
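The relaxation could be as simple as validating each '|'-separated component separately (a Python sketch of the logic only; the real check lives in little-t-tor's C code):

```python
import re

C_IDENT = re.compile(r"[A-Za-z_][A-Za-z0-9_]*$")

def transport_name_ok(name):
    # Relaxed rule (sketch): every '|'-separated component must itself
    # be a C identifier, so plain "obfs2" still passes and
    # "websocket|obfs2" is now accepted too.
    parts = name.split("|")
    return bool(parts) and all(C_IDENT.match(p) for p in parts)
```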
We should relax those checks so that they don't choke when we give them "websocket|obfs2".

https://gitlab.torproject.org/legacy/trac/-/issues/9635
Tor clients warn when they use the wrong ntor onion key
Reported by: bastik
Updated: 2020-06-13T14:47:48Z

I got these warnings (exactly once, for the first time I'm aware of) on my 0.2.4.16-rc bridge on Windows.
Strangely I found no tickets for any of these failures.
Aug 31 09:37:10.793 [Warning] onion_skin_client_handshake failed.
Aug 31 09:37:10.793 [Warning] circuit_finish_handshake failed.
Aug 31 09:37:10.794 [Warning] connection_edge_process_relay_cell (at origin) failed.
I'm unsure what _client means. Did my client (yes, I use my bridge's client functions) fail, or did the client of a connecting bridge user fail?
Client functionality is not affected.

Milestone: Tor 0.2.6.x-final

https://gitlab.torproject.org/legacy/trac/-/issues/9709
We accept way more tap cells than we process
Reported by: Roger Dingledine
Updated: 2020-06-13T14:31:49Z

Our fix in #7291 was meant to have us turn away onionskins that we're unlikely to get to. But in practice our #9658 patch shows that we're accepting way more than we process.
Linus briefly did a test where he cherry-picked the #9658 patch onto 0.2.4.16-rc and it was still only handling about 25% of incoming requests. His cursory analysis was that he was dropping them with the
```
log_info(LD_CIRC,
"Circuit create request is too old; canceling due to overload.");
```
line.
Should we be refusing these earlier, so clients can know to go elsewhere?
One possible culprit is that the main Tor thread is too busy to hand out cpuworker events on time.

Milestone: Tor 0.2.6.x-final