# Tor issues
https://gitlab.torproject.org/tpo/core/tor/-/issues

## Crash on Windows after DNSPort request
Issue: https://gitlab.torproject.org/tpo/core/tor/-/issues/40678 · Reported by cypherpunks · Updated 2024-01-08

Tor 0.4.7.7, on Windows, crashes after receiving the following request on DNSPort.
`000201000001000000000000076578616d706c6503636f6d0000010001`
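For reference, the hex blob above decodes as an ordinary recursive A/IN query for example.com. A minimal stdlib sketch (not part of the report) that unpacks it:

```python
# Illustrative decode of the DNSPort request from the report, stdlib only.
import struct

raw = bytes.fromhex(
    "000201000001000000000000076578616d706c6503636f6d0000010001"
)

# 12-byte DNS header: ID, flags, QDCOUNT, ANCOUNT, NSCOUNT, ARCOUNT
txid, flags, qd, an, ns, ar = struct.unpack(">6H", raw[:12])

# Single question section: length-prefixed labels, then QTYPE/QCLASS.
labels, pos = [], 12
while raw[pos] != 0:
    length = raw[pos]
    labels.append(raw[pos + 1 : pos + 1 + length].decode("ascii"))
    pos += 1 + length
qtype, qclass = struct.unpack(">2H", raw[pos + 1 : pos + 5])

# txid=2, flags=0x0100 (RD), one question: example.com, type A, class IN
print(txid, hex(flags), ".".join(labels), qtype, qclass)
```

Nothing in the query itself looks malformed, which suggests the crash is in tor's handling of it rather than in parsing an obviously invalid packet.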
Full pcap:
<pre>
1MOyoQIABAAAAAAAAAAAAAAABAAAAAAAIq6cYnvfCQAiAAAAIgAAAAIAAABFAAAe
hk0AAIARAAB/AAABfwAAAfLIADYACg68AB0irpxiFeAJAD0AAAA9AAAAAgAAAEUA
ADmGTgAAgBEAAH8AAAF/AAAB8sgANgAlPzMAAgEAAAEAAAAAAAAHZXhhbXBsZQNj
b20AAAEAAQ==
</pre>

Milestone: Tor: 0.4.8.x-freeze · Assignee: Alexander Færøy (ahf@torproject.org)

## New conflux links every 30 sec when unused connections
Issue: https://gitlab.torproject.org/tpo/core/tor/-/issues/40893 · Reported by cypherpunks · Updated 2024-01-25

After about 30(?) minutes without use, when tor drops the last connection, it creates 6 new conflux-linked connections that drop after 30 seconds. It then replaces them with a new set of 6 conflux-linked connections for another 30 seconds, and continues this loop forever until normal tor use is resumed.
latest commit tested: cec6f9919d3128646d85c75d08338bea4b31bffa
Linux 6.4
This behavior has existed for at least a couple of months, since before Tor Browser adopted the 0.4.8 series.

Milestone: Tor: 0.4.8.x-post-stable · Assignee: Mike Perry

## Memory consumption of Tor client is becoming difficult under iOS 50 MB RAM limitation, esp. during startup and with cached info
Issue: https://gitlab.torproject.org/tpo/core/tor/-/issues/40832 · Reported by tla · Updated 2023-10-09

### Summary
Evidence is gathering that Tor, under iOS's Network Extension memory limitation, has trouble building circuits.
See https://github.com/guardianproject/orbot-apple/issues/71#issuecomment-1666818716 and comments before.
I'm playing around with `MaxMemInQueues`, but as soon as I raise it even a little from the 5 MB we currently use, Jetsam starts to kill the Network Extension during startup circuit building. (At least in my Austrian environment.)
Starting with cached information seems to become more and more difficult, too. I already had to put a "Clear Cache" button on the main screen, because people started to complain so much.
I now witness this myself, more and more often.
Do you see any possibility to reduce non-file-backed memory consumption during the startup phase?
If this continues to worsen, it will render Orbot iOS unusable.
I'm not sure if this is happening due to changes in the network or due to changes in the client code.
If this doesn't get better I will need to consider downgrading Tor to older versions again.
Especially since I just released Onion Browser version 3, which now relies completely on Orbot iOS and which is currently making a lot of users unhappy.
There are certain loopholes in the memory accounting in iOS. In this older article I wrote, there's some background on the Jetsam memory accounting:
https://benjaminerhart.com/2018/03/state-of-the-onion-ios/
Especially helpful might be these:
- Use file-backed memory / operate on/stream files directly instead of loading everything and juggling that data around a lot. (Maybe we can do that in callbacks we can implement with Objective-C to make use of `NSCache`/`NSPurgeableData` objects.)
- Provide a method which makes Tor give up unused memory. We could call that from a hook iOS provides.
### Steps to reproduce:
1. Install Orbot iOS
2. Start Orbot
3. Watch the Tor log using the top-left button.
4. If under censored environment, Tor might not be able to build usable circuits at all, because `MaxMemInQueues` is set to 5 MByte.
5. Override and increase `MaxMemInQueues` in advanced settings.
6. Witness Network Extension crash during start. (App will show stopped status after a while.)
7. In unconstrained environments, complete a full start. Confirm by browsing to check.torproject.org.
8. Stop Orbot.
9. Restart Orbot.
10. Restart will fail most of the time until "Clear Cache" is pressed.
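The override in step 5 corresponds to a torrc line like the following. The 10 MB value is illustrative only; per the report, raising it even slightly above 5 MB risks a Jetsam kill:

```
# Illustrative torrc fragment: raise the queued-cell memory cap from the
# 5 MB Orbot iOS currently sets. Too high a value gets the Network
# Extension killed by Jetsam; too low and circuit building fails.
MaxMemInQueues 10 MB
```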
### What is the current bug behavior?
### What is the expected behavior?
- Tor limits memory usage during startup phase, so we can increase `MaxMemInQueues`, so constrained environments work, too.
- Tor limits memory usage during loading of cached information, so it doesn't reach the 50 MB memory limit.
### Environment
| Component | Version |
|:-------- | --------:|
| tor | 0.4.7.13 |
| libevent | 2.1.12 |
| OpenSSL | 1.1.1u |
| liblzma | 5.4.3 |
iOS 16.6, using [Tor.framework](https://github.com/iCepa/Tor.framework/)
### Relevant logs and/or screenshots
https://github.com/guardianproject/orbot-apple/issues/71#issuecomment-1657738391
https://github.com/guardianproject/orbot-apple/issues/71#issuecomment-1666818716
### Possible fixes

Milestone: Tor: 0.4.9.x-freeze · Assignee: Alexander Færøy (ahf@torproject.org)

## prop340: Implement packed and fragmented cells
Issue: https://gitlab.torproject.org/tpo/core/tor/-/issues/40791 · Reported by David Goulet · Updated 2023-09-19

This ticket is about implementing proposal 340 on the relay side of C-tor. Note that we plan to implement this support only on the client side in arti.

Milestone: Tor: 0.4.9.x-freeze · Assignee: David Goulet

## Bridges should report implementation versions of their pluggable transports
Issue: https://gitlab.torproject.org/tpo/core/tor/-/issues/11101 · Reported by Roger Dingledine · Updated 2024-03-05

Our bridges now run a variety of pluggable transports. What if there's a bug in, say, the ScrambleSuit implementation (as it appears there is)? If we fix the bug, how do BridgeDB or Tor clients know whether the ScrambleSuit bridge they just learned about is one of the new (updated) ones or one of the old (buggy) ones?
One option would be for Tor to include a version for each supported PT in its bridge (or extrainfo) descriptor, so that if we decide we don't want to use certain versions in certain situations, we can avoid them.
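As a hypothetical sketch of what that could look like: an extrainfo `transport` line extended with a version argument, and a parser for it. The `version=` key is invented here for illustration; current `transport` lines carry no such field.

```python
# Hypothetical: a "transport" extrainfo line extended with a version
# argument. The version= key is invented for illustration only; it is
# not part of the descriptor format today.

def parse_transport_line(line: str) -> dict:
    parts = line.split()
    assert parts[0] == "transport" and len(parts) >= 3
    info = {"name": parts[1], "addrport": parts[2], "version": None}
    # Optional key=value arguments after the address:port.
    for arg in parts[3:]:
        if arg.startswith("version="):
            info["version"] = arg.split("=", 1)[1]
    return info

info = parse_transport_line(
    "transport scramblesuit 192.0.2.1:443 version=2023.05"
)
assert info["name"] == "scramblesuit"
assert info["version"] == "2023.05"
```

Because the arguments are optional, old bridges that omit `version=` would still parse, which keeps such a change backward compatible.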
Are there better options than this one?

Milestone: Tor: 0.4.9.x-freeze · Assignee: David Goulet

## Non-fatal assertion now >= leg->link_sent_usec failed
Issue: https://gitlab.torproject.org/tpo/core/tor/-/issues/40830 · Reported by Vort · Updated 2023-11-06

### Summary
Sometimes I see lines like these in the log:
```
Aug 03 03:40:28.000 [warn] tor_bug_occurred_: Bug: conflux_pool.c:828: record_rtt_client: Non-fatal assertion now >= leg->link_sent_usec failed. (on Tor 0.4.8.2-alpha 328f976245134501)
Aug 03 03:40:28.000 [warn] Bug: Tor 0.4.8.2-alpha (git-328f976245134501): Non-fatal assertion now >= leg->link_sent_usec failed in record_rtt_client at conflux_pool.c:828. (Stack trace not available) (on Tor 0.4.8.2-alpha 328f976245134501)
```
### Steps to reproduce:
It looks like it appears randomly.
However, I have ntpd installed; maybe it interferes with Tor somehow.
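The ntpd guess is plausible: a minimal illustration (not C-tor code) of how a wall clock stepped backwards between sending a conflux LINKED cell and receiving its ack can make the computed RTT negative, which is exactly the condition `now >= leg->link_sent_usec` guards against:

```python
# Illustrative sketch (not C-tor code): an RTT computed from a wall clock
# goes negative if the clock is stepped backwards (e.g. by ntpd) between
# send and ack. A monotonic clock cannot move backwards.

def rtt_usec(now_usec: int, link_sent_usec: int) -> int:
    """Analogue of the assertion's precondition: the RTT is only
    meaningful when now >= link_sent_usec."""
    assert now_usec >= link_sent_usec, "non-fatal assertion analogue"
    return now_usec - link_sent_usec

# Normal case: time moved forward 500 us between send and ack.
assert rtt_usec(1_000_500, 1_000_000) == 500

# Clock stepped back 2 ms in between -> would trip the assertion.
sent = 1_000_000
now = sent - 2_000
clock_jumped_backwards = False
try:
    rtt_usec(now, sent)
except AssertionError:
    clock_jumped_backwards = True
assert clock_jumped_backwards
```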
### What is the current bug behavior?
The warning shown above appears in the log.
### What is the expected behavior?
No error.
### Environment
_- Which version of Tor are you using?_
Tor 0.4.8.2-alpha (git-328f976245134501) running on Windows 7 with Libevent 2.1.12-stable, OpenSSL 3.0.8, Zlib 1.2.13, Liblzma 5.4.1, Libzstd 1.5.4 and Unknown N/A as libc.
_- Which operating system are you using?_
Windows 7 SP1 x64
_- Which installation method did you use?_
I built Tor from sources.
### Relevant logs and/or screenshots
I have seen this problem two more times:
```
Jul 24 03:15:14.000 [warn] tor_bug_occurred_: Bug: conflux_pool.c:828: record_rtt_client: Non-fatal assertion now >= leg->link_sent_usec failed. (on Tor 0.4.8.2-alpha 328f976245134501)
Jul 24 03:15:14.000 [warn] Bug: Tor 0.4.8.2-alpha (git-328f976245134501): Non-fatal assertion now >= leg->link_sent_usec failed in record_rtt_client at conflux_pool.c:828. (Stack trace not available) (on Tor 0.4.8.2-alpha 328f976245134501)
...
Jul 24 18:16:48.000 [warn] tor_bug_occurred_: Bug: conflux_pool.c:828: record_rtt_client: Non-fatal assertion now >= leg->link_sent_usec failed. (on Tor 0.4.8.2-alpha 328f976245134501)
Jul 24 18:16:48.000 [warn] Bug: Tor 0.4.8.2-alpha (git-328f976245134501): Non-fatal assertion now >= leg->link_sent_usec failed in record_rtt_client at conflux_pool.c:828. (Stack trace not available) (on Tor 0.4.8.2-alpha 328f976245134501)
```
### Possible fixes

## Let bridge users choose to only reach their first working bridge
Issue: https://gitlab.torproject.org/tpo/core/tor/-/issues/40578 · Reported by Roger Dingledine · Updated 2024-02-28

We have some users in Russia who collect dozens or hundreds of obfs4 bridges, and when they start their Tor, it bursts out dozens or hundreds of connections at once to try to reach every single bridge and see which ones are working. That is loud, wasteful, and maybe even dangerous.
In Snowflake (https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/28651) we are heading toward a world where Tor Browser users have k Snowflake bridge lines, one per destination bridge, in order to scale up and improve resiliency. But the Snowflake people worry that making more than one-ish Snowflake connection will be wasteful (since each connection involves a domain front, a STUN connection, a WebRTC handshake, etc.) and will also stand out on the network. So they are considering having Tor Browser choose just one Snowflake line at random for each user, which helps with the scaling but discards all the resiliency features we would be so close to getting.
I think the answer in both these cases is that we want an option in Tor that makes you only try to fetch bridge descriptors from the bridges you actually hope to use.
I expect the main two parts of this change will be:
* When considering launching a bridge descriptor fetch, decide if you would call this bridge one of your primary guards if it worked, and if not, don't fetch.
* As soon as any bridge fails, immediately go through and see if you need to launch any new descriptor fetches (because otherwise you could end up in a situation where your existing bridges failed and you aren't trying any new ones yet).
(I do think we want to retain the existing "try them all" behavior as an option too (maybe even the default? that's a decision we should make): first for the people who use bridges for connectivity, because it gives you the best connectivity, and second because we use the "try them all" functionality in e.g. bridgestrap.)

Assignee: Roger Dingledine