Trac issues
https://gitlab.torproject.org/legacy/trac/-/issues
Updated: 2022-03-22T13:25:53Z

Issue #18517: meek is broken in Tor Browser 6.0a3
https://gitlab.torproject.org/legacy/trac/-/issues/18517
Updated: 2022-03-22T13:25:53Z
Author: Georg Koppen

meek does not work any longer in Tor Browser 6.0a3. It seems this is caused by an underlying bug in tor. After some amount of testing and bisecting, commit 23b088907fd23da417f5caf2b7b5f664f317ef4a is the first that introduces the new behavior. Trying to start meek with it results in
```
Mar 10 13:50:53.000 [notice] Ignoring directory request, since no bridge nodes are available yet.
Mar 10 13:50:54.000 [notice] Delaying directory fetches: No running bridges
```
and nothing thereafter: the startup is stalled.
Milestone: Tor: 0.2.8.x-final
Assignee: teor

Issue #2: Key rotation crashes server (sample)
https://gitlab.torproject.org/legacy/trac/-/issues/2
Updated: 2022-03-22T13:21:19Z
Author: weasel (Peter Palfrader)

[Moved from bugzilla]
Description:
Opened: 2003-05-29 08:26
(This is a sample bug so I can get used to using the tracker, and see whether
bugs get assigned to me properly.)
When key rotation happens (at midnight GMT), the server goes into an infinite
loop, exhausts its fds, then dies.
------- Additional Comments From Nick Mathewson 2003-05-29 08:29 -------
The bug seemed to be that we'd reschedule the next key generation before the
current one was complete. Obviously, before the keygen is done, the server will
think that a new keygen needs to happen immediately. (ick!) I think I squashed
this, but it's hard to be sure. I'll know at midnight GMT tomorrow.
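The race described above (the next key generation gets scheduled while the current one is still running, so the server keeps thinking a keygen is due) can be sketched abstractly. This is a toy illustration of the fix, not the mixminion code:

```python
import time

class KeyRotator:
    """Toy scheduler: don't let a re-entrant check start another keygen,
    and only compute the next rotation time after the current one finishes."""
    def __init__(self, interval):
        self.interval = interval
        self.next_keygen = time.time()  # due immediately on first check
        self.in_progress = False

    def maybe_rotate(self, now):
        if self.in_progress or now < self.next_keygen:
            return False
        self.in_progress = True
        # ... generate key material here ...
        # The buggy version rescheduled before this point was reached, so
        # a concurrent check still saw key generation as overdue.
        self.in_progress = False
        self.next_keygen = now + self.interval  # only after completion
        return True

r = KeyRotator(interval=86400)
assert r.maybe_rotate(now=r.next_keygen) is True
assert r.maybe_rotate(now=r.next_keygen - 1) is False  # not due again yet
```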
------- Additional Comments From Nick Mathewson 2003-05-30 21:09 -------
None of my servers died at midnight; I think we're ok.
[Automatically added by flyspray2trac: Operating System: Linux]
Assignee: Nick Mathewson

Issue #247: Tor doesn't seem to work with Network configuration
https://gitlab.torproject.org/legacy/trac/-/issues/247
Updated: 2022-03-22T13:21:19Z
Author: Trac

Running OS X 10.3.9 with Tor installed; Network settings pointing to Privoxy on 127.0.0.1, port 8118 for HTTP,
HTTPS, and Gopher. Using Safari 1.3, Camino 1.0b1 and Firefox 1.5 with SwitchProxy and the browser's Connection
settings pointing to Privoxy as well.
Connecting to ipid.shat.net/ using Safari or Camino shows an IP address in a different part of the country or
world, as expected. Using Firefox 1.5 with SwitchProxy turned off (Network settings for OS X are still enabled), I get an
IP address for my ISP, even though they're not running any Tor servers. If I use Network Utility to do a Whois lookup,
it also says I'm coming from my ISP; same if I go to www.dnsstuff.com (Java and JavaScript are off). Turning on SwitchProxy
(even while Network settings are enabled) and trying again gets me an IP address in a different part of the country or world,
as it should be.
This makes no sense, and it doesn't inspire confidence, since I get different results depending on which browser I'm using.
I followed the directions for installation and setup, so I can only presume it's a bug of some sort, perhaps with OS X, or Tor.
[Automatically added by flyspray2trac: Operating System: OSX 10.4 Tiger]
**Trac**:
**Username**: dedwards

Issue #24378: Prune the list of supported consensus methods
https://gitlab.torproject.org/legacy/trac/-/issues/24378
Updated: 2022-03-22T13:18:53Z
Author: teor

We currently have 13 supported consensus methods.
In 0.3.3, it is likely that prop282 will add 1 more, and prop283 will add 2 more.
Maybe we should prune this list eventually: it would let us simplify our code, make votes smaller and less expensive to calculate, and reduce authority RAM requirements (due to fewer microdescs).
It has almost no impact on consensus size.
Here's how we could work out what to prune:
By mandatory feature:
We are currently locked into using consensus method 16 or later in the public network, because 0.2.9 and later require ntor keys, and 0.2.9 clients use microdescriptors by default.
We may add more mandatory features in 0.3.3 and 0.3.4.
By supported tor version:
On May 1, 2018, we will stop supporting 0.2.5, and only support 0.2.9 and later. This means that all supported non-alpha versions will support consensus methods 25 and later. (Or, if we count 0.2.9 alpha versions, it's 22 and later.)
Milestone: Tor: 0.3.4.x-final
Assignee: Nick Mathewson

Issue #25573: Track half-closed stream IDs
https://gitlab.torproject.org/legacy/trac/-/issues/25573
Updated: 2022-03-22T13:18:09Z
Author: Mike Perry

In order to eliminate a side channel attack described in https://petsymposium.org/2018/files/papers/issue2/popets-2018-0011.pdf ("DropMark" attack) we need a way to determine if a stream id is invalid.
Many clients (particularly Firefox) will hang up on streams that still have data in flight. In this case, Tor clients send RELAY_COMMAND_END when they are done with a stream, and immediately remove that stream ID from their valid stream mapping. The remaining application data continues to arrive, but is silently dropped by the Tor client. The result is that this ignored stream data currently can't be distinguished from injected dummy traffic with completely random stream IDs, and this fact can be used to mount side channel attacks.
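One way to close that gap, sketched here as a rough illustration rather than the tor implementation: keep the IDs of locally-ended streams in a per-circuit "half-closed" set, so that late data on those IDs can be told apart from cells carrying a stream ID that was never allocated.

```python
class CircuitStreams:
    """Toy model: distinguish expected late data on half-closed streams
    from cells whose stream ID was never allocated on this circuit."""
    def __init__(self):
        self.open = set()
        self.half_closed = set()

    def open_stream(self, sid):
        self.open.add(sid)

    def send_end(self, sid):
        # We sent RELAY_COMMAND_END, but data may still be in flight.
        self.open.discard(sid)
        self.half_closed.add(sid)

    def on_data(self, sid):
        if sid in self.open:
            return "deliver"
        if sid in self.half_closed:
            return "drop-silently"   # expected leftover traffic
        return "protocol-violation"  # possible injected side-channel cell

c = CircuitStreams()
c.open_stream(42)
c.send_end(42)
assert c.on_data(42) == "drop-silently"
assert c.on_data(99) == "protocol-violation"
```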
A similar situation exists for spurious RELAY_ENDs.Tor: 0.3.4.x-finalhttps://gitlab.torproject.org/legacy/trac/-/issues/15516Consider rate-limiting INTRODUCE2 cells when under load2022-03-22T13:12:44ZJohn BrooksConsider rate-limiting INTRODUCE2 cells when under loadIn #15463, we're seeing an effective denial of service against a HS with a flood of introductions. The service falls apart trying to build rendezvous circuits, resulting in 100% CPU usage, many failed circuits, and impact on the guard.
...In #15463, we're seeing an effective denial of service against a HS with a flood of introductions. The service falls apart trying to build rendezvous circuits, resulting in 100% CPU usage, many failed circuits, and impact on the guard.
We should consider dropping INTRODUCE2 cells when the HS is under too much load to build rendezvous circuits successfully. It's much better if the HS response in this situation is predictable, instead of hammering at the guard until something falls down.
One option is to add a HSMaxConnectionRate(?) option defining the number of INTRODUCE2 cells we would accept per 10(?) minutes, maybe with some bursting behavior. It's unclear what a useful default value would be.
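A token-bucket version of that idea might look like the following sketch. The option name and the numbers are the ticket's hypotheticals, not an implemented torrc option:

```python
class Intro2Limiter:
    """Accept up to `rate` INTRODUCE2 cells per `interval` seconds,
    with a burst allowance; drop the rest."""
    def __init__(self, rate, interval, burst):
        self.refill = rate / interval    # tokens per second
        self.capacity = burst
        self.tokens = float(burst)
        self.last = 0.0

    def allow(self, now):
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # drop the INTRODUCE2 cell

lim = Intro2Limiter(rate=300, interval=600, burst=5)  # 300 per 10 minutes
assert [lim.allow(0.0) for _ in range(6)] == [True] * 5 + [False]
assert lim.allow(2.0) is True  # 2s * 0.5 tokens/s refills one token
```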
We could try to use a heuristic based on when rend circuits start failing, but it's not obvious to me how that would work.
Milestone: Tor: unspecified
Assignee: David Goulet (dgoulet@torproject.org)

Issue #29607: 2019 Q1: Denial of service on v2 and v3 onion service
https://gitlab.torproject.org/legacy/trac/-/issues/29607
Updated: 2022-03-22T13:12:44Z
Author: Trac

Dear tor team,
We have set up a discussion board on the Tor network.
Someone is exploiting our servers, taking them down every time, and the forums then respond with "Server not found".
We are pretty sure this problem is on the side of the Tor Browser; is there anything we could do to sort this?
Many thanks for taking the time to read this.
**Trac**:
**Username**: pidgin
Milestone: Tor: 0.4.3.x-final

Issue #6: Confused server clocks can screw up timing
https://gitlab.torproject.org/legacy/trac/-/issues/6
Updated: 2022-03-22T13:04:03Z
Author: weasel (Peter Palfrader)

[Moved from bugzilla]
Reporter: nickm@alum.mit.edu (Nick Mathewson)
Description:
Opened: 2003-08-29 20:44
Some users have reported that the mixminion server has a nasty failure mode when
a server's clock moves backwards by a large interval. When the server asks
"when did we last (do something)", the answer "tomorrow" can cause crashes or
weird behavior.
I'm deferring this for a while, because (a) I want to get 0.0.5 put to bed, and
(b) the workaround is trivial: keep your clock set right.
[Automatically added by flyspray2trac: Operating System: All]
Assignee: Nick Mathewson

Issue #2511: Tor will use an unconfigured bridge if it was a configured bridge last time you ran Tor
https://gitlab.torproject.org/legacy/trac/-/issues/2511
Updated: 2022-03-22T13:03:41Z
Author: Roger Dingledine

If you configure your Tor client with
```
usebridges 1
bridge 128.31.0.34:9009
```
and you run it and it works, then Tor will end up writing two things to disk: 1) a @purpose bridge descriptor for 128.31.0.34 in your cached-descriptors file:
```
@downloaded-at 2011-02-08 07:54:52
@source "128.31.0.34"
@purpose bridge
router bridge 128.31.0.34 9009 0 0
...
```
and 2) an entry guard stanza in your state file:
```
EntryGuard bridge 4C17FB532E20B2A8AC199441ECD2B0177B39E4B1
EntryGuardAddedBy 4C17FB532E20B2A8AC199441ECD2B0177B39E4B1 0.2.3.0-alpha-dev 2011-02-01 18:43:23
```
Then if you kill your Tor and run it with
```
usebridges 1
bridge 150.150.150.150:9009
```
it will successfully bootstrap -- using the bridge that worked before but isn't your requested bridge.
Milestone: Tor: 0.2.2.x-final

Issue #1090: Warning about using an excluded node for exit
https://gitlab.torproject.org/legacy/trac/-/issues/1090
Updated: 2022-03-22T13:03:41Z
Author: Sebastian Hahn

Quite a few people have reported warnings in their logs when using
Exclude*Nodes in their torrc. We should track down why this happens,
and fix it.
I opened the bug so we can keep track of ideas/problem reports. A
typical log line would be
```
[Warning] Requested exit node '..' is in ExcludeNodes or ExcludeExitNodes.. Using anyway.
```
[Automatically added by flyspray2trac: Operating System: All]
Milestone: Tor: 0.2.2.x-final
Assignee: Nick Mathewson

Issue #7028: Implement Adaptive Padding or some variant and measure overhead vs accuracy
https://gitlab.torproject.org/legacy/trac/-/issues/7028
Updated: 2022-03-22T13:02:42Z
Author: Mike Perry

As a defense against Website Traffic Fingerprinting, we should implement a tunable cover traffic defense that we could set from the consensus with a value dependent upon available Guard bandwidth relative to Exit capacity.
My favorite from the research literature is http://freehaven.net/anonbib/cache/ShWa-Timing06.pdf, because it appears to be tunable in this fashion.
The "BUFLO" variant proposed by this paper is better specified, but it's not clear it actually performs better for a given overhead quantity: http://www.cs.sunysb.edu/~xcai/fp.pdf
This is likely a research task. People who attempt it should also read http://www.raid-symposium.org/raid99/PAPERS/Axelsson.pdf (Slides: http://www.cse.psu.edu/~tjaeger/cse543-f06/presents/Kiran_baserate.pdf).
Milestone: Tor: 0.4.0.x-final

Issue #7757: Maybe revisit node flag weight calculations
https://gitlab.torproject.org/legacy/trac/-/issues/7757
Updated: 2022-03-22T13:02:42Z
Author: Mike Perry

Directory guards (#6526) are going to shift some weight off of middle nodes and on to guard nodes. This potentially changes the weights we should give to guard nodes for the middle hop (see dir-spec.txt Section 3.5.3: https://gitweb.torproject.org/torspec.git/blob/HEAD:/dir-spec.txt#l1858).
However, the bandwidth authorities have consistently measured middle nodes as too slow, and Guard nodes as too fast, relative to the rest of the network on average. Exits come out just about even.
If I had to guess, most likely middle nodes are bogged down because nothing in the flag weight calculations takes into account the load from either dirport usage above, or hidden service usage. Directory usage is possible to estimate, but hidden service traffic sometimes involves Exits (from cannibalized circs), sometimes doesn't (from directly built internal circs), and it's nearly impossible to estimate how much of the network traffic it occupies...
So perhaps #6526 will magically correct this imbalance by shifting directory traffic from middle nodes to Guards. Or, perhaps it will be too much. We should keep an eye on the output of https://gitweb.torproject.org/torflow.git/blob/HEAD:/NetworkScanners/statsplitter.py either way as the directory guards code is deployed.
Milestone: Tor: unspecified

Issue #8240: Raise our guard rotation period, if appropriate
https://gitlab.torproject.org/legacy/trac/-/issues/8240
Updated: 2022-03-22T13:02:42Z
Author: Roger Dingledine

Tariq's COGS paper from WPES 2012 shows that a significant component of guard churn is due to voluntary rotation, rather than actual network changes:
http://freehaven.net/anonbib/#wpes12-cogs
In short, if the target client makes sensitive connections continuously every day for months, and you (the attacker) run some fast guards, the odds get pretty good that you'll become the client's guard at some point and get to do a correlation attack.
We could argue that the "continuously every day for months" assumption is unrealistic, so in practice we don't know how bad this issue really is. But for hidden services, it could well be a realistic assumption.
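To make "the odds get pretty good" concrete, here is a back-of-the-envelope sketch. The 5% guard-capacity fraction and the 45-day rotation period are made-up illustration values, not measurements from the ticket:

```python
def p_compromised(f, days, period):
    """Chance a continuously-active client picks at least one
    attacker-controlled guard across its voluntary rotations, assuming
    the attacker holds a fraction f of guard selection weight."""
    rotations = days // period
    return 1 - (1 - f) ** rotations

# With 5% of guard capacity and a 45-day rotation, six months of daily
# use already gives the attacker several independent chances:
p_six_months = p_compromised(0.05, days=180, period=45)
assert 0.18 < p_six_months < 0.19  # 1 - 0.95**4
```

Raising the rotation period shrinks `rotations`, which is exactly the mitigation discussed below, at the cost of the load-balancing problems it raises.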
There are going to be (at least) two problems with raising the guard rotation period. The first is that we unbalance the network further wrt old guards vs new guards, and I'm not sure by how much, so I'm not sure how much our bwauth measurers will have to compensate. The second (related) problem is that we'll expand the period during which new guards don't get as much load as they will eventually get. This issue already results in confused relay operators trying to shed their Guard flag so they can resume having load.
In sum, if we raise the rotation period enough that it really results in load changes, then we could have unexpected side effects like having the bwauths raise the weights of new (and thus totally unloaded) guards to huge numbers, thus ensuring that anybody who rotates a guard will basically for sure get one of these new ones.
The real plan here needs a proposal, and should be for 0.2.5 or later. I wonder if we can raise it 'some but not too much' in the 0.2.4 timeframe though?
Milestone: Tor: 0.3.1.x-final

Issue #5752: Isolate browser streams by url bar domain rather than by time interval
https://gitlab.torproject.org/legacy/trac/-/issues/5752
Updated: 2022-03-22T13:00:32Z
Author: Roger Dingledine

I'm creating this parent project ticket for all the components of Mike's "use the prop171 support in Tor to stop putting unrelated streams onto the same circuit" plan.

Issue #5968: Improve onion key and TLS management
https://gitlab.torproject.org/legacy/trac/-/issues/5968
Updated: 2022-03-22T12:59:29Z
Author: Mike Perry

As a best practice behavior, a relay should check that the onion key it tried to publish is actually the one it sees in the consensus in which it appears.
The onion key should also be what authenticates the TLS key (rather than the identity key, as it is now).
This would prevent some utility vectors of identity key theft, where a non-targeted upstream MITM attempts to use a relay's identity to impersonate it in order to execute a tagging attack (#5456).
Milestone: Tor: unspecified

Issue #11454: If two auth certs are both old but were generated nearby in time, we keep both
https://gitlab.torproject.org/legacy/trac/-/issues/11454
Updated: 2022-03-22T12:56:36Z
Author: Roger Dingledine

In trusted_dirs_remove_old_certs() we check if
```
(cert_published + OLD_CERT_LIFETIME < newest_published)) {
```
when deciding whether to discard an old cert from our cache.
We don't check it at all with respect to current time.
So if an authority generates a signing key in January, and then generates ten more signing keys within a week, and now it's April, we'll still keep all of them until they expire or until a new signing key shows up that's more than 7 days newer than them.
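The January scenario can be reproduced with a toy version of that rule (the timestamps are made up for illustration; this is not the tor code):

```python
OLD_CERT_LIFETIME = 7 * 24 * 3600  # seven days, matching the check above

def prune(certs, now):
    """Mimic the rule: drop a cert only if it was published more than
    OLD_CERT_LIFETIME before the newest cert we hold. Note that `now`
    never enters the decision."""
    newest = max(certs)
    return [c for c in certs if c + OLD_CERT_LIFETIME >= newest]

# Several signing keys generated within a week of each other in January...
january = [1_000_000 + d * 86_400 for d in range(7)]
# ...are all still kept in April, because current time is never consulted.
april = 1_000_000 + 90 * 86_400
assert prune(january, now=april) == january
```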
This cannot be the right logic.
Milestone: Tor: 0.2.6.x-final

Issue #11457: Making a signing cert in the future will make everybody discard your real signing cert and then want it again
https://gitlab.torproject.org/legacy/trac/-/issues/11457
Updated: 2022-03-22T12:56:36Z
Author: Roger Dingledine

Then throw out your shiny new one, and go back to the one you had been using. Other Tors (dir auths, relays, clients) will say "oh hey, a signature from a cert I don't recognize, let me fetch that". So far so good.
Then 60 seconds later they'll discard this cert, because they know a newer one. Oops.
But this is where it gets good. Your authority discards this older cert too. So do other authorities. And relays.
And then everybody wants a copy and nobody has one, so every 60 seconds everybody asks the next layer up in the dir hierarchy. Everybody's logs are filled with
```
Apr 09 03:44:55.000 [warn] Received http status code 404 ("Not found") from server '127.0.0.1:3002' while fetching "/tor/keys/fp-sk/AD23D263206B997C73AF9B488322E91766748C2C-4335577168B0C0C22AC4A1A0707DD72F41CC8DA6".
```
each minute.
Milestone: Tor: 0.2.6.x-final

Issue #11469: Exit not using one hop circuit to Directory Server
https://gitlab.torproject.org/legacy/trac/-/issues/11469
Updated: 2022-03-22T12:56:22Z
Author: Trac

I've set up a lab to learn about Tor. All nodes are running within Xen 6.2 on FreeBSD 10, running Tor version 2.4.19.
All clients can build circuits and functionality looks as expected. However, while entry and relay nodes use the encrypted, one-hop circuit to communicate with the Directory Server, the exit node does not. The exit node communicates directly with the dir port on the directory server (http). I'm using tcpdump -nvvv -A on the specific interfaces to see the traffic.
All nodes in the lab are essentially clones. The torrc file is changed on each node to reflect client, entry, relay, and exit roles. The only difference between the nodes that use the one-hop circuit and the one that doesn't is the "accept" policy on the exit node. I don't see how that relates, but when I remove the "accept" policy and add a policy to "reject *:*", the one-hop circuit is then used. I've gone over this quite a bit. It may be a bug.
**Trac**:
**Username**: bburley
Milestone: Tor: 0.2.5.x-final
Assignee: Nick Mathewson

Issue #14429: Automated rounding of content window dimensions
https://gitlab.torproject.org/legacy/trac/-/issues/14429
Updated: 2022-03-17T21:14:08Z
Author: Arthur Edelstein

I've written a small patch for torbutton that forces the content ("gBrowser") to have dimensions that are a multiple of 200x200. In other words, window.innerWidth and window.innerHeight, and similar calls, always return a rounded number.
This should at least provide some protection to users who resize or maximize their Tor Browser window with JS activated.
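The rounding itself is simple. A sketch of the idea follows; the 200x200 quantum is from the ticket, but the function (including the never-shrink-below-one-quantum floor) is an illustrative assumption, not the torbutton patch:

```python
def round_dimension(pixels, quantum=200):
    """Largest multiple of `quantum` that fits in the available space,
    but never less than one quantum."""
    return max(quantum, (pixels // quantum) * quantum)

# A maximized 1366x768 window would report a 1200x600 content area,
# putting the user in the same bucket as many other screen sizes.
assert round_dimension(1366) == 1200
assert round_dimension(768) == 600
assert round_dimension(150) == 200  # floor at one quantum
```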
I haven't dealt with the zooming issue here, but that would be an interesting next step.
Assignee: Arthur Edelstein

Issue #7164: microdesc.c:378: Bug: microdesc_free() called, but md was still referenced 1 node(s); held_by_nodes == 1
https://gitlab.torproject.org/legacy/trac/-/issues/7164
Updated: 2022-03-17T20:05:40Z
Author: Trac

Oct 20 21:14:18.594 [Warning] microdesc_free(): Bug: microdesc_free() called, but md was still referenced 1 node(s); held_by_nodes == 1
**Trac**:
**Username**: jaj123
Milestone: Tor: unspecified