Tor issues
https://gitlab.torproject.org/tpo/core/tor/-/issues

Issue #40224: Find a working alternative to using MaxMind's GeoLite2 databases
https://gitlab.torproject.org/tpo/core/tor/-/issues/40224
Updated 2024-01-25. Reported by Karsten Loesing.

MaxMind has recently changed access and use of their GeoLite2 databases: https://blog.maxmind.com/2019/12/18/significant-changes-to-accessing-and-using-geolite2-databases/
This affects Onionoo and tor. I started a [thread on tor-dev@](https://lists.torproject.org/pipermail/tor-dev/2020-January/014117.html) about this topic last week with some more details.
Let's use this ticket to brainstorm and discuss working alternatives to the way we used their databases in the past.

Assignee: Nick Mathewson.

Issue #5166: 198.18.0.0/15 is reserved and in use by home routers
https://gitlab.torproject.org/tpo/core/tor/-/issues/5166
Updated 2023-12-12. Reported by Robert Ransom.

A user showed up in #tor to ask for help with Tor determining its address incorrectly. Tor published 198.18.0.2 as his/her/its relay's IP address, even though that was not his public IP address.
According to whois, [IETF RFC 2544](http://www.rfc-editor.org/rfc/rfc2544.txt) reserved 198.18.0.0/15:
```
NetRange: 198.18.0.0 - 198.19.255.255
CIDR: 198.18.0.0/15
OriginAS:
NetName: SPECIAL-IPV4-BENCHMARK-TESTING-IANA-RESERVED
NetHandle: NET-198-18-0-0-1
Parent: NET-198-0-0-0-0
NetType: IANA Special Use
Comment: This block has been allocated for use in
Comment: benchmark tests of network interconnect
Comment: devices. This range was assigned to
Comment: minimize the chance of conflict in case a
Comment: testing device were to be accidentally
Comment: connected to part of the Internet.
Comment: Packets with source addresses from
Comment: this range are not meant to be forwarded
Comment: across the Internet.
Comment: This assignment was made by the IETF in
Comment: RFC 2544, which can be found at:
Comment: http://www.rfc-editor.org/rfc/rfc2544.txt
```
Tor should recognize addresses in that netblock as internal.
This is a potential security issue for users who run exit nodes behind screwy home routers which use that netblock for their private addresses.

Milestone: Tor: unspecified.

Issue #40871: Tor incorrectly stores stats on incoming PT connections
https://gitlab.torproject.org/tpo/core/tor/-/issues/40871
Updated 2023-12-10. Reported by Alexander Færøy (ahf@torproject.org).

@trinity-1686a and @dcf discussed this issue on tor-dev@ in https://lists.torproject.org/pipermail/tor-dev/2023-October/014858.html
It seems like we have a bug after we updated our connection tracking code to track incoming connections earlier. We don't handle the transport name parameter in our eager call to `geoip_note_client_seen()`.
@trinity-1686a may potentially have a patch for this. I think it would be good if we could get some testing on this before we merge it.
Would you be up for running your Tor instance with a patch that potentially fixes this issue, @dcf?

Milestone: Tor: 0.4.8.x-post-stable. Assignee: trinity-1686a.

Issue #32032: Assertion mp->conf_state == PT_PROTO_COMPLETED failed in managed_proxy_stdout_callback
https://gitlab.torproject.org/tpo/core/tor/-/issues/32032
Updated 2023-12-02. Reported by David Fifield (dcf@torproject.org).

I'm using commit 0d82a8be77ae8d7fb06c8702bfbf1ebbaf370c94.
Create a fake server transport plugin called "testpt.sh" that reports nothing but an SMETHOD-ERROR. `chmod +x` it.
```
#!/bin/sh
echo "VERSION 1"
echo "SMETHOD-ERROR testpt failing ABCD"
```
Create a configuration file called "torrc.testpt".
```
PublishServerDescriptor 0
AssumeReachable 1
SOCKSPort 0
ORPort auto
ServerTransportPlugin testpt exec ./testpt.sh
Bridge testpt 127.0.0.1:9999
```
Run `tor -f torrc.testpt` and observe the following assertion failure:
```
Oct 10 15:56:22.000 [notice] Starting with guard context "default"
Oct 10 15:56:22.000 [warn] Server managed proxy encountered a method error. (testpt failing ABCD)
Oct 10 15:56:22.000 [warn] Managed proxy at './testpt.sh' failed the configuration protocol and will be destroyed.
Oct 10 15:56:22.000 [err] tor_assertion_failed_(): Bug: src/feature/client/transports.c:1836: managed_proxy_stdout_callback: Assertion mp->conf_state == PT_PROTO_COMPLETED failed; aborting. (on Tor 0.4.2.2-alpha-dev 0d82a8be77ae8d7f)
Oct 10 15:56:22.000 [err] Bug: Tor 0.4.2.2-alpha-dev (git-0d82a8be77ae8d7f): Assertion mp->conf_state == PT_PROTO_COMPLETED failed in managed_proxy_stdout_callback at src/feature/client/transports.c:1836: . Stack trace: (on Tor 0.4.2.2-alpha-dev 0d82a8be77ae8d7f)
Oct 10 15:56:22.000 [err] Bug: ./src/app/tor(log_backtrace_impl+0x56) [0x563a200ffaa6] (on Tor 0.4.2.2-alpha-dev 0d82a8be77ae8d7f)
Oct 10 15:56:22.000 [err] Bug: ./src/app/tor(tor_assertion_failed_+0x147) [0x563a200fab27] (on Tor 0.4.2.2-alpha-dev 0d82a8be77ae8d7f)
Oct 10 15:56:22.000 [err] Bug: ./src/app/tor(+0xd7994) [0x563a1ffbf994] (on Tor 0.4.2.2-alpha-dev 0d82a8be77ae8d7f)
Oct 10 15:56:22.000 [err] Bug: ./src/app/tor(+0x1e6883) [0x563a200ce883] (on Tor 0.4.2.2-alpha-dev 0d82a8be77ae8d7f)
Oct 10 15:56:22.000 [err] Bug: /usr/lib/x86_64-linux-gnu/libevent-2.1.so.6(+0x229ba) [0x7f036a38e9ba] (on Tor 0.4.2.2-alpha-dev 0d82a8be77ae8d7f)
Oct 10 15:56:22.000 [err] Bug: /usr/lib/x86_64-linux-gnu/libevent-2.1.so.6(event_base_loop+0x5a7) [0x7f036a38f537] (on Tor 0.4.2.2-alpha-dev 0d82a8be77ae8d7f)
Oct 10 15:56:22.000 [err] Bug: ./src/app/tor(do_main_loop+0xdb) [0x563a1ff5c15b] (on Tor 0.4.2.2-alpha-dev 0d82a8be77ae8d7f)
Oct 10 15:56:22.000 [err] Bug: ./src/app/tor(tor_run_main+0x1105) [0x563a1ff49b15] (on Tor 0.4.2.2-alpha-dev 0d82a8be77ae8d7f)
Oct 10 15:56:22.000 [err] Bug: ./src/app/tor(tor_main+0x3a) [0x563a1ff470ca] (on Tor 0.4.2.2-alpha-dev 0d82a8be77ae8d7f)
Oct 10 15:56:22.000 [err] Bug: ./src/app/tor(main+0x19) [0x563a1ff46c89] (on Tor 0.4.2.2-alpha-dev 0d82a8be77ae8d7f)
Oct 10 15:56:22.000 [err] Bug: /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xeb) [0x7f0369dbb09b] (on Tor 0.4.2.2-alpha-dev 0d82a8be77ae8d7f)
Oct 10 15:56:22.000 [err] Bug: ./src/app/tor(_start+0x2a) [0x563a1ff46cda] (on Tor 0.4.2.2-alpha-dev 0d82a8be77ae8d7f)
Aborted
```
The same crash happens if you omit `VERSION 1` from testpt.sh.
```
#!/bin/sh
echo "SMETHOD-ERROR testpt failing ABCD"
```
I encountered this in practice with meek-server when it couldn't open its log file. It reports the failure to open a log file as an SMETHOD-ERROR, because in older versions of tor that was the only way to cause an error message to appear in the tor log file. In my case, the failure looked like this:
```
Oct 10 21:30:26 tor2 Tor-meek[2223]: Server managed proxy encountered a method error. (meek error opening log file: open /var/log/meek-server-meek.log: read-only file system)
Oct 10 21:30:26 tor2 Tor-meek[2223]: Managed proxy at '/usr/local/bin/meek-server' failed the configuration protocol and will be destroyed.
Oct 10 21:30:26 tor2 Tor-meek[2223]: tor_assertion_failed_(): Bug: ../src/feature/client/transports.c:1836: managed_proxy_stdout_callback: Assertion mp->conf_state == PT_PROTO_COMPLETED failed; aborting. (on Tor 0.4.1.6 )
Oct 10 21:30:26 tor2 Tor-meek[2223]: Bug: Assertion mp->conf_state == PT_PROTO_COMPLETED failed in managed_proxy_stdout_callback at ../src/feature/client/transports.c:1836: . Stack trace: (on Tor 0.4.1.6 )
...
```

Milestone: Tor: 0.4.2.x-final. Assignee: Alexander Færøy (ahf@torproject.org).

Issue #31091: Bug stack trace when pluggable transport cannot bind to port
https://gitlab.torproject.org/tpo/core/tor/-/issues/31091
Updated 2023-12-02. Reported by s7r.

I have just set up some new obfs4 fast bridges running the latest obfs4proxy from yawning/master (locally compiled on my server) and Tor 0.4.2.0-alpha-dev, and encountered a bug stack trace when the pluggable transport tried listening on a port < 1024. This is well known, which is why the wiki page recommends and documents using setcap to grant the PT executable permission to bind to lower ports. But shouldn't tor just warn and exit, instead of offering all this:
```
Jul 06 05:51:18.000 [warn] Server managed proxy encountered a method error. (obfs4 listen tcp <ipv4>:<port>: bind: permission denied)
Jul 06 05:51:18.000 [warn] Managed proxy at '/usr/local/bin/obfs4proxy' failed the configuration protocol and will be destroyed.
Jul 06 05:51:18.000 [err] tor_assertion_failed_(): Bug: ../src/feature/client/transports.c:1836: managed_proxy_stdout_callback: Assertion mp->conf_state == PT_PROTO_COMPLETED failed; aborting. (on Tor 0.4.2.0-alpha-dev )
Jul 06 05:51:18.000 [err] Bug: Assertion mp->conf_state == PT_PROTO_COMPLETED failed in managed_proxy_stdout_callback at ../src/feature/client/transports.c:1836: . Stack trace: (on Tor 0.4.2.0-alpha-dev )
Jul 06 05:51:18.000 [err] Bug: /usr/bin/tor(log_backtrace_impl+0x47) [0x559a576f5d87] (on Tor 0.4.2.0-alpha-dev )
Jul 06 05:51:18.000 [err] Bug: /usr/bin/tor(tor_assertion_failed_+0x147) [0x559a576f0ed7] (on Tor 0.4.2.0-alpha-dev )
Jul 06 05:51:18.000 [err] Bug: /usr/bin/tor(+0xd4a89) [0x559a575b8a89] (on Tor 0.4.2.0-alpha-dev )
Jul 06 05:51:18.000 [err] Bug: /usr/bin/tor(+0x1e4e4b) [0x559a576c8e4b] (on Tor 0.4.2.0-alpha-dev )
Jul 06 05:51:18.000 [err] Bug: /usr/lib/x86_64-linux-gnu/libevent-2.0.so.5(event_base_loop+0x6a0) [0x7f6f343265a0] (on Tor 0.4.2.0-alpha-dev )
Jul 06 05:51:18.000 [err] Bug: /usr/bin/tor(do_main_loop+0x105) [0x559a57555ed5] (on Tor 0.4.2.0-alpha-dev )
Jul 06 05:51:18.000 [err] Bug: /usr/bin/tor(tor_run_main+0x1245) [0x559a57543905] (on Tor 0.4.2.0-alpha-dev )
Jul 06 05:51:18.000 [err] Bug: /usr/bin/tor(tor_main+0x3a) [0x559a57540cfa] (on Tor 0.4.2.0-alpha-dev )
Jul 06 05:51:18.000 [err] Bug: /usr/bin/tor(main+0x19) [0x559a57540879] (on Tor 0.4.2.0-alpha-dev )
Jul 06 05:51:18.000 [err] Bug: /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf1) [0x7f6f32b7b2e1] (on Tor 0.4.2.0-alpha-dev )
Jul 06 05:51:18.000 [err] Bug: /usr/bin/tor(_start+0x2a) [0x559a575408ca] (on Tor 0.4.2.0-alpha-dev )
```
What if it just stopped here:
```
Jul 06 05:51:18.000 [warn] Server managed proxy encountered a method error. (obfs4 listen tcp <ip>:<port>: bind: permission denied)
Jul 06 05:51:18.000 [warn] Managed proxy at '/usr/local/bin/obfs4proxy' failed the configuration protocol and will be destroyed.
```
Or maybe even exit entirely so the operator can know something is really wrong.
Also, I don't know if this is related or not. I will run more tests to confirm or rule this out, but under Debian stable the bridges that are not run with setcap don't get a reasonable measured speed. Those with setcap that listen on lower ports have a bandwidth of 2.5 - 3.5 MiB/s, while those without setcap have 12 KiB/s - 70 KiB/s, and they are all on the same infrastructure / hardware resources / internet speed. I will do some more digging to confirm this; right now 2 out of 2 obfs4 bridges that were configured without setcap did not get a good bandwidth measurement even after 15 days of continuous uptime.

Milestone: Tor: 0.4.0.x-final. Assignee: Alexander Færøy (ahf@torproject.org).

Issue #40799: segfault when combining lttng tracing and sandbox
https://gitlab.torproject.org/tpo/core/tor/-/issues/40799
Updated 2023-11-02. Reported by Micah Elizabeth Scott.

### Summary
I noticed this while trying to upgrade our CI from the end-of-life debian buster to the current stable debian bullseye. I haven't checked which external change specifically introduced the incompatibility between tracing and sandbox. I can reproduce this both in CI (with the updated distro) and on debian unstable in my development environment.
### Steps to reproduce:
1. Install dependencies (`apt install liblttng-ust-dev`)
2. Configure tor with lttng (--enable-tracing-instrumentation-lttng)
3. Run unit tests, either with a full `make test` or more to the point: `make -j8 src/test/test && src/test/test sandbox/is_active`
### What is the current bug behavior?
Segfault in unit tests. Looking closer, it's an abort() with maybe some secondary failures due to the signal handler changes being blocked. The main problem is that the `urcu_bp` library treats membarrier() failures as totally fatal whenever it determined at init time that membarrier should work.
This strace (below) shows the problem pretty succinctly, but going through the library source verifies this explanation.
### What is the expected behavior?
I would appreciate guidance on this, but my read of the situation is that it would be appropriate to just allow membarrier inside the sandbox. If we don't want to do that, we need some way to fail better: either prevent lttng from being initialized until we're inside the sandbox, or block membarrier() early on.
### Environment
- Verified on the 0.4.7 maintenance branch and on main, and on both debian unstable and bullseye. Does not occur on debian buster, which is why we don't see this in CI yet.
### Relevant logs and/or screenshots
```
beth@potato ~/git/tor (git)-[ci-deb-bullseye-047-mr] % strace -e membarrier -f src/test/test sandbox/is_active
membarrier(MEMBARRIER_CMD_QUERY, 0) = 0x1ff (MEMBARRIER_CMD_GLOBAL|MEMBARRIER_CMD_GLOBAL_EXPEDITED|MEMBARRIER_CMD_REGISTER_GLOBAL_EXPEDITED|MEMBARRIER_CMD_PRIVATE_EXPEDITED|MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED|MEMBARRIER_CMD_PRIVATE_EXPEDITED_SYNC_CORE|MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED_SYNC_CORE|MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ|MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED_RSEQ)
membarrier(MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED, 0) = 0
strace: Process 3591653 attached
strace: Process 3591654 attached
sandbox/is_active: [forking] strace: Process 3591710 attached
[pid 3591710] membarrier(MEMBARRIER_CMD_PRIVATE_EXPEDITED, 0) = -1 EPERM (Operation not permitted)
[pid 3591710] --- SIGSEGV {si_signo=SIGSEGV, si_code=SI_KERNEL, si_addr=NULL} ---
[pid 3591710] +++ killed by SIGSEGV (core dumped) +++
[pid 3591652] --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_DUMPED, si_pid=3591710, si_uid=1000, si_status=SIGSEGV, si_utime=55 /* 0.55 s */, si_stime=6 /* 0.06 s */} ---
[did not exit cleanly.]
[is_active FAILED]
1/1 TESTS FAILED. (0 skipped)
[pid 3591652] membarrier(MEMBARRIER_CMD_PRIVATE_EXPEDITED, 0) = 0
[pid 3591652] membarrier(MEMBARRIER_CMD_PRIVATE_EXPEDITED, 0) = 0
[pid 3591652] membarrier(MEMBARRIER_CMD_PRIVATE_EXPEDITED, 0) = 0
[pid 3591652] membarrier(MEMBARRIER_CMD_PRIVATE_EXPEDITED, 0) = 0
==3591652==LeakSanitizer has encountered a fatal error.
==3591652==HINT: For debugging, try setting environment variable LSAN_OPTIONS=verbosity=1:log_threads=1
==3591652==HINT: LeakSanitizer does not work under ptrace (strace, gdb, etc)
[pid 3591654] +++ exited with 1 +++
[pid 3591653] +++ exited with 1 +++
+++ exited with 1 +++
```
### Possible fixes
1. Allow membarrier in sandbox
2. Disallow tracing + sandbox?
3. Block membarrier early
4. Init tracing later

Assignee: Micah Elizabeth Scott.

Issue #8235: pathbias_count_build_success()
https://gitlab.torproject.org/tpo/core/tor/-/issues/8235
Updated 2023-10-12. Reported by cypherpunks.

00:02:09 [NOTICE] pathbias_count_build_success(): Bug: Unexpectedly high successes counts (203.784282/203.421875) for guard ...
00:01:52 [NOTICE] pathbias_count_build_success(): Bug: Unexpectedly high successes counts (202.784282/202.421875) for guard ...
00:01:46 [NOTICE] pathbias_count_build_success(): Bug: Unexpectedly high successes counts (201.784282/201.421875) for guard ...
00:01:01 [NOTICE] pathbias_count_build_success(): Bug: Unexpectedly high successes counts (200.784282/200.421875) for guard ...
00:00:58 [NOTICE] pathbias_count_build_success(): Bug: Unexpectedly high successes counts (199.784282/199.421875) for guard ...
00:00:41 [NOTICE] pathbias_count_build_success(): Bug: Unexpectedly high successes counts (198.784282/198.421875) for guard ...
14:54:21 [NOTICE] pathbias_count_build_success(): Bug: Unexpectedly high successes counts (179.568564/158.843750) for guard ...
14:54:15 [NOTICE] pathbias_count_build_success(): Bug: Unexpectedly high successes counts (178.568564/158.843750) for guard ...
14:54:03 [NOTICE] pathbias_count_build_success(): Bug: Unexpectedly high successes counts (177.568564/156.843750) for guard ...
14:54:02 [NOTICE] pathbias_count_build_success(): Bug: Unexpectedly high successes counts (176.568564/155.843750) for guard ...
14:54:02 [NOTICE] pathbias_count_build_success(): Bug: Unexpectedly high successes counts (175.568564/155.843750) for guard ...
14:53:53 [NOTICE] pathbias_count_build_success(): Bug: Unexpectedly high successes counts (174.568564/154.843750) for guard ...
Tor Version: Tor-0.2.4.10-alpha.
This trend repeats with the bias increasing on both sides.
The guard did not change. I have removed it for security reasons. I'll provide it if you need it, however.

Milestone: Tor: 0.2.4.x-final. Assignee: Mike Perry.

Issue #8872: Bug: Unexpectedly high successes counts (174.000000/173.000000)
https://gitlab.torproject.org/tpo/core/tor/-/issues/8872
Updated 2023-10-12. Imported from Trac.

[Mon May 13 07:55:33 2013] Tor Software Error - The Tor software encountered an internal bug. Please report the following error message to the Tor developers at bugs.torproject.org: "pathbias_count_build_success(): Bug: Unexpectedly high successes counts (174.000000/173.000000) for guard gurgle ($948CDA1CE63D2165567B81706CD8C0E9F8934A47)
"
**Trac**:
**Username**: LoneRanger1012

Milestone: Tor: 0.2.4.x-final.

Issue #30085: Bug: Unexpectedly high use successes counts (101.500000/100.000000) for guard
https://gitlab.torproject.org/tpo/core/tor/-/issues/30085
Updated 2023-10-12. Reported by cypherpunks.

I repeatedly see counters above current. Log:
Bug: Unexpectedly high use successes counts (101.500000/100.000000) for guard $F6740DEABFD5F62612FA025A5079EA72846B1F67 ($F6740DEABFD5F62612FA025A5079EA72846B1F67) (on Tor 0.3.4.9 4ac3ccf2863b86e7)

Issue #15938: Keep a separate onion-service cache for each isolation context
https://gitlab.torproject.org/tpo/core/tor/-/issues/15938
Updated 2023-10-10. Reported by teor.

Anyone who can connect to a tor client can discover which HSs have been accessed recently, by running a timing attack against the HS cache. Cached descriptors return much faster than uncached descriptors.
This may be possible through browser JavaScript attempting HS connections and timing the responses.
An observer on the network or in control of an HSDir could potentially enhance this timing attack with network request correlation.
Yawning suggests a cache for each stream-isolation context, to avoid this issue.
Each stream-isolation cache would most likely have 0 or 1 HS descriptors in it - 0 if the URL is not a HS, and 1 if it is.

Issue #24454: sandbox failure on arm64
https://gitlab.torproject.org/tpo/core/tor/-/issues/24454
Updated 2023-09-18. Reported by weasel (Peter Palfrader).

With legacy/trac#24424 fixed, Tor builds but it still does not run:
```
$ ./src/or/tor Sandbox 1
Nov 28 08:35:29.521 [notice] Tor 0.3.2.5-alpha (git-d499a5a708f7298b) running on Linux with Libevent 2.1.8-stable, OpenSSL 1.1.0g, Zlib 1.2.8, Liblzma 5.2.2, and Libzstd 1.3.2.
Nov 28 08:35:29.521 [notice] Tor can't help you if you use it wrong! Learn how to be safe at https://www.torproject.org/download/download#warning
Nov 28 08:35:29.521 [notice] This version is not a stable Tor release. Expect more bugs than usual.
Nov 28 08:35:29.521 [notice] Configuration file "/usr/local/etc/tor/torrc" not present, using reasonable defaults.
Nov 28 08:35:29.525 [notice] Scheduler type KIST has been enabled.
Nov 28 08:35:29.525 [notice] Opening Socks listener on 127.0.0.1:9050
Nov 28 08:35:30.000 [notice] Bootstrapped 0%: Starting
============================================================ T= 1511858130
(Sandbox) Caught a bad syscall attempt (syscall unlinkat)
./src/or/tor(+0x1aa4ac)[0xaaaaccfd54ac]
linux-vdso.so.1(__kernel_rt_sigreturn+0x0)[0xffff8806e6c0]
/lib/aarch64-linux-gnu/libc.so.6(unlink+0x14)[0xffff87a4953c]
```
under strace:
```
$ strace -f ./src/or/tor DisableDebuggerAttachment 0 Sandbox 1
[...]
getpid() = 25468
getpid() = 25468
write(1, "Nov 28 08:36:05.000 [notice] Boo"..., 55Nov 28 08:36:05.000 [notice] Bootstrapped 0%: Starting
) = 55
unlinkat(AT_FDCWD, "/home/weasel/.tor/key-pinning-entries", 0) = -1 ENETDOWN (Network is down)
--- SIGSYS {si_signo=SIGSYS, si_code=SYS_SECCOMP, si_call_addr=0xffffa45d253c, si_syscall=__NR_unlinkat, si_arch=AUDIT_ARCH_AARCH64} ---
write(1, "\n==============================="..., 64
============================================================ T=) = 64
write(1, " 1511858165", 11 1511858165) = 11
write(1, "\n", 1
) = 1
write(1, "(Sandbox) Caught a bad syscall a"..., 48(Sandbox) Caught a bad syscall attempt (syscall ) = 48
[..]
```

Milestone: Tor: unspecified.

Issue #26768: Support onionbalance in HSv3
https://gitlab.torproject.org/tpo/core/tor/-/issues/26768
Updated 2023-09-12. Reported by George Kadianakis.

We are implementing onionbalance in v3! This is the master ticket.
[Description changed to not confuse people with the old design.]

Milestone: Tor: unspecified. Assignee: George Kadianakis.

Issue #40833: Segfault and buffer overrun with aarch64 hashx compiler
https://gitlab.torproject.org/tpo/core/tor/-/issues/40833
Updated 2023-09-05. Reported by Micah Elizabeth Scott.

### Summary
I just found this overrun while fuzzing the hashx program generator. I'll explain below, but while it's worrying I don't think it's exploitable for arbitrary code execution and even a denial-of-service level of exploitation would require a huge amount of luck.
I think we can safely mark this as non-confidential, but let's think about the impact first.
This is applicable to tor daemons (either client or service) running on aarch64 which have the proof-of-work module available (configured with `--enable-gpl` and not `--disable-module-pow`), and which use the compiled hashx implementation (`CompiledProofOfWorkHash` torrc option not set to 0).
The impact is different for clients and for services. Clients choose their own nonce value just before generating a program, so even if a method were discovered to trigger this bug by choosing a particular HashX seed, this would be negated by the client's choice of a random 16-byte nonce to include in that seed.
Services could be exploited if a method were discovered to trigger this bug by choosing a particular 16-byte nonce value, given the server's existing seed, during the window when that seed is still valid. I think this implies finding a cryptographic vulnerability in blake2b.
The actual issue occurs due to the hashx compiler being cavalier with buffer limits while assembling code. It relies on a constant buffer size without checking at runtime, and that buffer was sized for x86_64 without regard for the larger code size hashx generates on aarch64. The inline immediate encoding on aarch64 is quite large, so programs with a lot of constant operands can overflow what's effectively a hardcoded 1-page buffer. This will almost immediately cause a segfault when we execute code that's larger than one page from a buffer that's only had its first page made executable.
### Steps to reproduce:
1. Current repro case is on aarch64 linux, using the Arti fuzzers
2. `$ base64 -d | zcat > 6c7b0e5b1fb8965d11227270e892f3d8cd56514d`
```
H4sIAAAAAAAAA4uLA4OwuJmMcQwQNgODAZg+ZXfK9hQGQFH7HwI44k5hUfry1KlYCOscVGccmGZi
gAI5dIUQ+xetXBjD+It0xyDUYig15QbaoYnsGCIA2DhboHFxmHJA43zQjRsyDkV2ErF2IMA0bgYG
VBGwo2FRjcencG0nYE5DSR9YATw8D0IFGMFJh0kdn6eoZjsMRAPxKyLCOetUBJJD4DKkWYQaoRqQ
7HKKiGhWBtqPxRoiI5qBwRRiPG4HIyweEu5EzV+DsdiRt3vgg+k/6uRmBlh8DMYEP5iTzamhkLxH
iyF0dw4256GVPpB6i0FgEJU+2P1HtbbEaDgNyjYXCmfw1AgoFg21smcQuxPEAACQRCtf5Q0AAA==
```
3. `~/src/arti/crates/hashx/fuzz$ cargo +nightly fuzz run rng ./6c7b0e5b1fb8965d11227270e892f3d8cd56514d`
4. Observe a segfault at page boundary,
```
==6024==ERROR: AddressSanitizer: SEGV on unknown address 0xffff94326000 (pc 0xffff94326000 bp 0xffffeaac9910 sp 0xffffeaac9910 T0)
==6024==The signal is caused by a READ memory access.
==6024==Hint: PC is at a non-executable region. Maybe a wild jump?
#0 0xffff94326000 (<unknown module>)
#1 0xaaaae1538f84 in tor_c_equix::HashX::exec::h7063e9b25db75774 /home/mobian/src/tor/src/ext/equix/src/lib.rs:89:22
#2 0xaaaae1538f84 in rng::test_instance_c::_$u7b$$u7b$closure$u7d$$u7d$::h196a66e93420385a /home/mobian/src/arti/crates/hashx/fuzz/fuzz_targets/rng.rs:174:30
```
5. Retry in gdb for more information, observe that the crash is near the end of the hash function (which is only slightly larger than one page)
```
Thread 1 "rng" received signal SIGSEGV, Segmentation fault.
0x0000fffff7d56000 in ?? ()
...
(gdb) x/30i $pc
=> 0xfffff7d56000: str x6, [x8, #48]
0xfffff7d56004: str x7, [x8, #56]
0xfffff7d56008: ret
0xfffff7d5600c: udf #65535
...
```
### What is the current bug behavior?
Brief overwrite of some memory after the HashX program buffer, almost immediately followed by a segfault.
### What is the expected behavior?
Clearly we should be using a larger buffer, but we also shouldn't let hashx be quite so cavalier with the buffer write operations. Even if we aren't checking bounds for every single-byte write, I think we should define an upper limit on bytes-per-instruction and then use that value both to size the buffer and to check bounds just before each instruction is emitted.
### Environment
Running tor from `main`, on mainline aarch64 linux.

Milestone: Tor: 0.4.8.x-stable. Assignee: Micah Elizabeth Scott.

Issue #40523: tor_bug_occurred_(): Bug: ../src/feature/relay/relay_find_addr.c:225: relay_addr_learn_from_dirauth: Non-fatal assertion !(!ei) failed. (on Tor 0.4.6.8 )
https://gitlab.torproject.org/tpo/core/tor/-/issues/40523
Updated 2023-08-29. Reported by computer_freak.

An obfs4 bridge on a `Raspberry Pi 3 Model B Rev 1.2` with `Raspberry Pi OS 64-bit beta` on `Debian 11`.
I have only `notice` logs:
```
Nov 25 02:58:44.000 [notice] Tor 0.4.6.8 opening log file.
Nov 25 02:58:44.941 [notice] We compiled with OpenSSL 101010bf: OpenSSL 1.1.1k 25 Mar 2021 and we are running with OpenSSL 101010bf: 1.1.1k. These two versions should be binary compatible.
Nov 25 02:58:44.948 [notice] Tor 0.4.6.8 running on Linux with Libevent 2.1.12-stable, OpenSSL 1.1.1k, Zlib 1.2.11, Liblzma 5.2.5, Libzstd 1.4.8 and Glibc 2.31 as libc.
Nov 25 02:58:44.948 [notice] Tor can't help you if you use it wrong! Learn how to be safe at https://www.torproject.org/download/download#warning
Nov 25 02:58:44.949 [notice] Read configuration file "/usr/share/tor/tor-service-defaults-torrc".
Nov 25 02:58:44.949 [notice] Read configuration file "/etc/tor/torrc".
Nov 25 02:58:44.953 [notice] Based on detected system memory, MaxMemInQueues is set to 682 MB. You can override this by setting MaxMemInQueues by hand.
Nov 25 02:58:44.958 [notice] Opening Metrics listener on 127.0.0.1:x
Nov 25 02:58:44.959 [notice] Opened Metrics listener connection (ready) on 127.0.0.1:x
Nov 25 02:58:44.959 [notice] Opening OR listener on 0.0.0.0:x
Nov 25 02:58:44.959 [notice] Opened OR listener connection (ready) on 0.0.0.0:x
Nov 25 02:58:44.959 [notice] Opening OR listener on [::]:x
Nov 25 02:58:44.959 [notice] Opened OR listener connection (ready) on [::]:x
Nov 25 02:58:44.959 [notice] Opening Extended OR listener on 127.0.0.1:x
Nov 25 02:58:44.959 [notice] Extended OR listener listening on port x.
Nov 25 02:58:44.959 [notice] Opened Extended OR listener connection (ready) on 127.0.0.1:x
Nov 25 02:58:46.000 [notice] Parsing GEOIP IPv4 file /usr/share/tor/geoip.
Nov 25 02:58:47.000 [notice] Parsing GEOIP IPv6 file /usr/share/tor/geoip6.
Nov 25 02:58:47.000 [notice] Configured to measure statistics. Look for the *-stats files that will first be written to the data directory in 24 hours from now.
Nov 25 02:58:48.000 [notice] Your Tor server's identity key fingerprint is 'x'
Nov 25 02:58:48.000 [notice] Your Tor bridge's hashed identity key fingerprint is 'x'
Nov 25 02:58:48.000 [notice] Your Tor server's identity key ed25519 fingerprint is 'x'
Nov 25 02:58:48.000 [notice] You can check the status of your bridge relay at https://bridges.torproject.org/status?id=x
Nov 25 02:58:48.000 [notice] Bootstrapped 0% (starting): Starting
Nov 25 02:59:40.000 [notice] Starting with guard context "default"
Nov 25 02:59:40.000 [notice] Signaled readiness to systemd
Nov 25 02:59:40.000 [notice] Bootstrapped 5% (conn): Connecting to a relay
Nov 25 02:59:40.000 [notice] Registered server transport 'obfs4' at '[::]:x'
Nov 25 02:59:40.000 [notice] Bootstrapped 10% (conn_done): Connected to a relay
Nov 25 02:59:40.000 [notice] Bootstrapped 14% (handshake): Handshaking with a relay
Nov 25 02:59:40.000 [notice] Bootstrapped 15% (handshake_done): Handshake with a relay done
Nov 25 02:59:40.000 [notice] Bootstrapped 45% (requesting_descriptors): Asking for relay descriptors
Nov 25 02:59:41.000 [notice] Bootstrapped 50% (loading_descriptors): Loading relay descriptors
Nov 25 02:59:41.000 [notice] Opening Control listener on /run/tor/control
Nov 25 02:59:47.000 [notice] Opened Control listener connection (ready) on /run/tor/control
Nov 25 02:59:48.000 [notice] Unable to find IPv4 address for ORPort x. You might want to specify IPv6Only to it or set an explicit address or set Address.
Nov 25 02:59:48.000 [warn] tor_bug_occurred_(): Bug: ../src/feature/relay/relay_find_addr.c:225: relay_addr_learn_from_dirauth: Non-fatal assertion !(!ei) failed. (on Tor 0.4.6.8 )
Nov 25 02:59:48.000 [warn] Bug: Tor 0.4.6.8: Non-fatal assertion !(!ei) failed in relay_addr_learn_from_dirauth at ../src/feature/relay/relay_find_addr.c:225. Stack trace: (on Tor 0.4.6.8 )
Nov 25 02:59:48.000 [warn] Bug: /usr/bin/tor(log_backtrace_impl+0x6c) [0x55870d3f40] (on Tor 0.4.6.8 )
Nov 25 02:59:48.000 [warn] Bug: /usr/bin/tor(tor_bug_occurred_+0x15c) [0x55870dfb9c] (on Tor 0.4.6.8 )
Nov 25 02:59:48.000 [warn] Bug: /usr/bin/tor(relay_addr_learn_from_dirauth+0x1c8) [0x55871fb268] (on Tor 0.4.6.8 )
Nov 25 02:59:48.000 [warn] Bug: /usr/bin/tor(+0x9e420) [0x558708e420] (on Tor 0.4.6.8 )
Nov 25 02:59:48.000 [warn] Bug: /usr/bin/tor(router_build_fresh_descriptor+0x40) [0x558708e6f0] (on Tor 0.4.6.8 )
Nov 25 02:59:48.000 [warn] Bug: /usr/bin/tor(router_rebuild_descriptor+0x8c) [0x558708eb3c] (on Tor 0.4.6.8 )
Nov 25 02:59:48.000 [warn] Bug: /usr/bin/tor(consider_publishable_server+0x60) [0x558708efb0] (on Tor 0.4.6.8 )
Nov 25 02:59:48.000 [warn] Bug: /usr/bin/tor(+0x20c748) [0x55871fc748] (on Tor 0.4.6.8 )
Nov 25 02:59:48.000 [warn] Bug: /usr/bin/tor(+0x80a1c) [0x5587070a1c] (on Tor 0.4.6.8 )
Nov 25 02:59:48.000 [warn] Bug: /usr/lib/aarch64-linux-gnu/libevent-2.1.so.7(+0x23600) [0x7fbba7b600] (on Tor 0.4.6.8 )
Nov 25 02:59:48.000 [warn] Bug: /usr/lib/aarch64-linux-gnu/libevent-2.1.so.7(event_base_loop+0x50c) [0x7fbba7bf84] (on Tor 0.4.6.8 )
Nov 25 02:59:48.000 [warn] Bug: /usr/bin/tor(do_main_loop+0xec) [0x55870584c0] (on Tor 0.4.6.8 )
Nov 25 02:59:48.000 [warn] Bug: /usr/bin/tor(tor_run_main+0x1c0) [0x5587053b64] (on Tor 0.4.6.8 )
Nov 25 02:59:48.000 [warn] Bug: /usr/bin/tor(tor_main+0x54) [0x5587050044] (on Tor 0.4.6.8 )
Nov 25 02:59:48.000 [warn] Bug: /usr/bin/tor(main+0x20) [0x558704fb30] (on Tor 0.4.6.8 )
Nov 25 02:59:48.000 [warn] Bug: /lib/aarch64-linux-gnu/libc.so.6(__libc_start_main+0xe8) [0x7fbb3a9218] (on Tor 0.4.6.8 )
Nov 25 02:59:48.000 [warn] Bug: /usr/bin/tor(+0x5fbb8) [0x558704fbb8] (on Tor 0.4.6.8 )
Nov 25 02:59:48.000 [notice] Bootstrapped 55% (loading_descriptors): Loading relay descriptors
Nov 25 02:59:48.000 [notice] Bootstrapped 62% (loading_descriptors): Loading relay descriptors
Nov 25 02:59:48.000 [notice] Bootstrapped 67% (loading_descriptors): Loading relay descriptors
Nov 25 02:59:49.000 [notice] Bootstrapped 75% (enough_dirinfo): Loaded enough directory info to build circuits
Nov 25 02:59:49.000 [notice] Bootstrapped 80% (ap_conn): Connecting to a relay to build circuits
Nov 25 02:59:49.000 [notice] Bootstrapped 85% (ap_conn_done): Connected to a relay to build circuits
Nov 25 02:59:50.000 [notice] Bootstrapped 89% (ap_handshake): Finishing handshake with a relay to build circuits
Nov 25 02:59:50.000 [notice] Bootstrapped 90% (ap_handshake_done): Handshake finished with a relay to build circuits
Nov 25 02:59:50.000 [notice] Bootstrapped 95% (circuit_create): Establishing a Tor circuit
Nov 25 02:59:50.000 [notice] Bootstrapped 100% (done): Done
```
I manually restarted Tor and it did not happen again.

(Milestone: Tor: 0.4.7.x-post-stable; Assignee: David Goulet <dgoulet@torproject.org>)

https://gitlab.torproject.org/tpo/core/tor/-/issues/29245
Tor 0.4 eventually hits "Delaying directory fetches: No running bridges" after some period of inactivity with bridges (2023-08-01T23:52:42Z, Trac)

```
Tor NOTICE: Delaying directory fetches: No running bridges
Tor NOTICE: Application request when we haven't received a consensus with exits. Optimistically trying known bridges again.
Tor NOTICE: Delaying directory fetches: No running bridges
Tor NOTICE: Application request when we haven't received a consensus with exits. Optimistically trying known bridges again.
Tor NOTICE: Delaying directory fetches: No running bridges
Tor NOTICE: Application request when we haven't received a consensus with exits. Optimistically trying known bridges again.
```
Tested on the latest Tor Browser alpha with a snowflake bridge.
**Trac**:
**Username**: ArmalsLoveArmalsLife

(Milestone: Tor: unspecified)

https://gitlab.torproject.org/tpo/core/tor/-/issues/11301
Tor does not reconnect after network loss with guards used as bridges (2023-08-01T19:36:47Z, Mike Perry)

Yawning and I have both noticed that tor can become unresponsive if either normal tor bridges or PT bridges are configured, and the client suffers a network connectivity loss. After sustained network connectivity loss, all of the orconns end up closed, and Tor will not try to reconnect to its bridges, even when new stream attempts arrive.
It is possible that Tor is simply marking all of its bridges as down in this case and, thinking they are still down, not trying to reconnect to them when network connectivity returns.
The only way to solve this issue is either to send "SIGNAL HUP" to the control port, or to run kill -HUP `pidof tor`. After receiving the HUP signal, tor immediately launches new orconns and circuits for its bridges, and attaches the currently pending streams to these new circuits.
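For reference, the two workarounds can be issued like this (a sketch only; it assumes a ControlPort at 127.0.0.1:9051 with no authentication configured — adjust the AUTHENTICATE line if you use cookie or password authentication):

```
# Via the control port:
printf 'AUTHENTICATE ""\r\nSIGNAL HUP\r\nQUIT\r\n' | nc 127.0.0.1 9051

# Or directly via a process signal:
kill -HUP "$(pidof tor)"
```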
Sometimes, after this problem has happened once, tor will cease building circuits even if the network remains available.
This is extremely bad for usability, because TBB becomes completely unusable in this case, and the only thing a normal user can do is exit the whole browser and re-launch it.
This may also indicate a deeper bug with how Tor handles the liveness/'down' status of normal Guard nodes, and may cause Tor to rotate Guards more frequently than necessary.

(Milestone: Tor: unspecified)

https://gitlab.torproject.org/tpo/core/tor/-/issues/2511
Tor will use an unconfigured bridge if it was a configured bridge last time you ran Tor (2023-07-16T20:35:19Z, Roger Dingledine)

If you configure your Tor client with
```
usebridges 1
bridge 128.31.0.34:9009
```
and you run it and it works, then Tor will end up writing two things to disk: 1) a @purpose bridge descriptor for 128.31.0.34 in your cached-descriptors file:
```
@downloaded-at 2011-02-08 07:54:52
@source "128.31.0.34"
@purpose bridge
router bridge 128.31.0.34 9009 0 0
...
```
and 2) an entry guard stanza in your state file:
```
EntryGuard bridge 4C17FB532E20B2A8AC199441ECD2B0177B39E4B1
EntryGuardAddedBy 4C17FB532E20B2A8AC199441ECD2B0177B39E4B1 0.2.3.0-alpha-dev 2011-02-01 18:43:23
```
Then if you kill your Tor and run it with
```
usebridges 1
bridge 150.150.150.150:9009
```
it will successfully bootstrap -- using the bridge that worked before but isn't your requested bridge.

(Milestone: Tor: 0.2.2.x-final)

https://gitlab.torproject.org/tpo/core/tor/-/issues/18356
obfs4proxy cannot bind to <1024 port with systemd hardened service unit (2023-07-12T04:56:49Z, Trac)

Hi,
I was running an obfs4proxy bridge on Debian Wheezy with the following configuration in torrc:
```
ServerTransportListenAddr obfs4 0.0.0.0:443
```
When I dist-upgraded to Debian Jessie, obfs4proxy could no longer bind to :443, and the tor logs contained messages like:
```
Feb 21 22:51:09.000 [warn] Server managed proxy encountered a method error. (obfs4 listen tcp 0.0.0.0:443: bind: permission denied)
Feb 21 22:51:09.000 [warn] Managed proxy at '/usr/bin/obfs4proxy' failed the configuration protocol and will be destroyed.
```
Note that I have already set the appropriate capability on the obfs4proxy binary:
```
getcap /usr/bin/obfs4proxy
/usr/bin/obfs4proxy = cap_net_bind_service+ep
```
After some experimenting, I think the problem resides in the systemd service unit (https://gitweb.torproject.org/tor.git/tree/contrib/dist/tor.service.in), specifically the option introduced in b4170421cc58d8c57254f4224ba259e817f48869:
```
NoNewPrivileges=yes
```
I assume so because setting 'NoNewPrivileges=no' allows obfs4proxy to bind to 443. The 'PR_SET_NO_NEW_PRIVS' section in 'man 2 prctl' also suggests this:
```
PR_SET_NO_NEW_PRIVS (since Linux 3.5)
Set the calling process's no_new_privs bit to the value in arg2. With
no_new_privs set to 1, execve(2) promises not to grant privileges to do
anything that could not have been done without the execve(2) call (for
example, rendering the set-user-ID and set-group-ID permission bits, and
file capabilities non-functional). Once set, this bit cannot be unset.
The setting of this bit is inherited by children created by fork(2) and
clone(2), and preserved across execve(2).
For more information, see the kernel source file Documentation/prctl/no_new_privs.txt.
```
I understand that 'NoNewPrivileges=no' weakens system security, but I also consider it a regression that obfs4proxy can no longer bind to ports <1024. Could we find a middle ground?
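One possible middle ground — sketched here as an assumption, not something tested in this thread — is to keep NoNewPrivileges=yes but grant the service the single capability it needs via systemd's ambient-capability support (systemd v229 or later), for example in a drop-in:

```
# Hypothetical drop-in, e.g. /etc/systemd/system/tor.service.d/caps.conf
[Service]
AmbientCapabilities=CAP_NET_BIND_SERVICE
```

Whether obfs4proxy, as a child process spawned by tor, actually retains the capability depends on how tor launches it, so this would need testing.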
If that helps, I'm running:
```
tor: 0.2.7.6-1~d80.jessie+1
deb.torproject.org/torproject.org/ jessie/main amd64 Packages
obfs4proxy: 0.0.4-1~tpo1
deb.torproject.org/torproject.org/ obfs4proxy/main amd64 Packages
```
Also:
```
cat /etc/debian_version
8.3
cat /proc/version
Linux version 3.16.0-4-amd64 (debian-kernel@lists.debian.org) (gcc version 4.8.4 (Debian 4.8.4-1) ) #1 SMP Debian 3.16.7-ckt20-1+deb8u3 (2016-01-17)
```
Thanks for your work.
**Trac**:
**Username**: irregulator

(Milestone: Tor: unspecified)

https://gitlab.torproject.org/tpo/core/tor/-/issues/29427
kist: Poor performance with a small amount of sockets (2023-06-20T17:38:17Z, David Goulet <dgoulet@torproject.org>)

We just recently found that KIST performs very poorly if tor has a very small number of sockets.
#### How KIST operates
KIST is scheduled when cells are put on a circuit queue. A scheduler run might not handle all cells, because it depends on the available space in the TCP buffer for each socket. When that happens, KIST currently reschedules itself in 10ms (a static value).
The problem is that if there are very few sockets (as with most tor clients), KIST can handle one socket very quickly, say in 1ms, and then sleeps for another 9ms until it is rescheduled.
That 9ms of waiting means tor is not pushing bytes onto the wire even though it could. The attached graph made by pastly shows how badly KIST underperforms with the current 10ms interval.
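To make the cap concrete, here is some illustrative arithmetic (the bytes-per-run figure is an assumption for illustration, not a measured Tor value): with a fixed rescheduling interval, a single socket's throughput is bounded by roughly the bytes flushed per run divided by the interval.

```python
# Illustrative arithmetic only: the per-run figure below is an assumed value,
# not something measured from Tor.

def throughput_cap(bytes_per_run: float, interval_s: float) -> float:
    """Upper bound on one socket's bytes/second under a fixed interval."""
    return bytes_per_run / interval_s

BYTES_PER_RUN = 32 * 1024  # hypothetical bytes KIST flushes per scheduler run

print(throughput_cap(BYTES_PER_RUN, 0.010))  # 10ms interval: 3276800.0 B/s (~3.3 MB/s)
print(throughput_cap(BYTES_PER_RUN, 0.002))  # 2ms interval: 16384000.0 B/s (~16.4 MB/s)
```

Shrinking the interval from 10ms to 2ms raises the single-socket cap fivefold, which is consistent with the direction of the improvement seen in the graph.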
#### Consequences
(Might be more, don't treat this as an exhaustive list)
1. Clients are effectively capped in bandwidth because they generally talk to their Guard over a single socket.
2. A new relay joining the network won't have many connections, so when the authorities or our bandwidth scanners measure it, they will only see a capped value compared to what the relay could actually do (if higher). The measurement will recover after a while, once the relay starts seeing traffic and its number of sockets ramps up.
#### Solution
As the attached graph shows, bringing the scheduling interval down to 2ms gives us better performance than the Vanilla scheduler. That could be a short-term solution.
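A dynamic alternative to a fixed interval can be sketched as follows (hypothetical code, not Tor's scheduler: the names, the EWMA smoothing, and the 2ms/10ms clamp bounds are all assumptions): estimate each connection's throughput, then reschedule when the socket's buffered bytes should have drained.

```python
# Hypothetical sketch, not Tor's scheduler code: class and method names, the
# EWMA smoothing, and the 2ms/10ms clamp bounds are all assumptions.

class ConnSchedule:
    """Per-connection estimate of when the outbound TCP buffer will drain."""

    def __init__(self, alpha: float = 0.3) -> None:
        self.alpha = alpha          # EWMA smoothing factor
        self.throughput = 0.0       # estimated bytes/second

    def observe(self, bytes_sent: int, elapsed_s: float) -> None:
        """Fold one (bytes sent, elapsed time) measurement into the estimate."""
        sample = bytes_sent / elapsed_s
        if self.throughput == 0.0:
            self.throughput = sample          # first sample: adopt directly
        else:
            self.throughput = (self.alpha * sample
                               + (1 - self.alpha) * self.throughput)

    def next_interval(self, buffered_bytes: int,
                      lo: float = 0.002, hi: float = 0.010) -> float:
        """Seconds until the buffered bytes should have drained, clamped."""
        if self.throughput <= 0.0:
            return hi                         # no estimate yet: static fallback
        return min(hi, max(lo, buffered_bytes / self.throughput))
```

For example, a connection observed pushing 100 kB in 10ms (10 MB/s) with a 20 kB backlog would be rescheduled in about 2ms instead of waiting the full static 10ms.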
A better, somewhat more medium-term solution would be to make the scheduling interval dynamic, based on how quickly tor thinks the socket's TCP buffer will empty. That basically depends on the connection's throughput. For example, a 100mbit NIC towards a Guard might only push through 10mbit, so we would need a way for tor to learn that per connection, which would allow KIST to estimate when it needs to be rescheduled for that connection.

(Label: Sponsor 61 - Making the Tor network faster & more reliable for users in Internet-repressive places)

https://gitlab.torproject.org/tpo/core/tor/-/issues/40741
liblzma enum values added in 5.3.1 and 5.3.2 cause compile warnings (2023-05-31T21:53:36Z, Micah Elizabeth Scott)

### Summary
This is just a small janitorial task I'd like to fix in order to get clean builds (with --enable-fatal-warnings) completing on Debian unstable, which includes liblzma 5.4.1. Both of the alpha releases 5.3.1 and 5.3.2 added new enum values.
### Steps to reproduce:
1. ./configure --enable-fatal-warnings
2. make
### What is the current bug behavior?
On Debian unstable, with --enable-fatal-warnings, the build fails. (logs below)
### What is the expected behavior?
Clean build.
### Environment
Git main branch, Debian unstable.
### Relevant logs and/or screenshots
```
src/lib/compress/compress_lzma.c: In function ‘lzma_error_str’:
src/lib/compress/compress_lzma.c:51:3: error: enumeration value ‘LZMA_SEEK_NEEDED’ not handled in switch [-Werror=switch-enum]
51 | switch (error) {
| ^~~~~~
src/lib/compress/compress_lzma.c:51:3: error: enumeration value ‘LZMA_RET_INTERNAL1’ not handled in switch [-Werror=switch-enum]
src/lib/compress/compress_lzma.c:51:3: error: enumeration value ‘LZMA_RET_INTERNAL2’ not handled in switch [-Werror=switch-enum]
src/lib/compress/compress_lzma.c:51:3: error: enumeration value ‘LZMA_RET_INTERNAL3’ not handled in switch [-Werror=switch-enum]
src/lib/compress/compress_lzma.c:51:3: error: enumeration value ‘LZMA_RET_INTERNAL4’ not handled in switch [-Werror=switch-enum]
src/lib/compress/compress_lzma.c:51:3: error: enumeration value ‘LZMA_RET_INTERNAL5’ not handled in switch [-Werror=switch-enum]
src/lib/compress/compress_lzma.c:51:3: error: enumeration value ‘LZMA_RET_INTERNAL6’ not handled in switch [-Werror=switch-enum]
src/lib/compress/compress_lzma.c:51:3: error: enumeration value ‘LZMA_RET_INTERNAL7’ not handled in switch [-Werror=switch-enum]
src/lib/compress/compress_lzma.c:51:3: error: enumeration value ‘LZMA_RET_INTERNAL8’ not handled in switch [-Werror=switch-enum]
src/lib/compress/compress_lzma.c: In function ‘tor_lzma_compress_process’:
src/lib/compress/compress_lzma.c:282:3: error: enumeration value ‘LZMA_SEEK_NEEDED’ not handled in switch [-Werror=switch-enum]
282 | switch (retval) {
| ^~~~~~
src/lib/compress/compress_lzma.c:282:3: error: enumeration value ‘LZMA_RET_INTERNAL1’ not handled in switch [-Werror=switch-enum]
src/lib/compress/compress_lzma.c:282:3: error: enumeration value ‘LZMA_RET_INTERNAL2’ not handled in switch [-Werror=switch-enum]
src/lib/compress/compress_lzma.c:282:3: error: enumeration value ‘LZMA_RET_INTERNAL3’ not handled in switch [-Werror=switch-enum]
src/lib/compress/compress_lzma.c:282:3: error: enumeration value ‘LZMA_RET_INTERNAL4’ not handled in switch [-Werror=switch-enum]
src/lib/compress/compress_lzma.c:282:3: error: enumeration value ‘LZMA_RET_INTERNAL5’ not handled in switch [-Werror=switch-enum]
src/lib/compress/compress_lzma.c:282:3: error: enumeration value ‘LZMA_RET_INTERNAL6’ not handled in switch [-Werror=switch-enum]
src/lib/compress/compress_lzma.c:282:3: error: enumeration value ‘LZMA_RET_INTERNAL7’ not handled in switch [-Werror=switch-enum]
src/lib/compress/compress_lzma.c:282:3: error: enumeration value ‘LZMA_RET_INTERNAL8’ not handled in switch [-Werror=switch-enum]
```
### Possible fixes
The new enums seem quite unlikely to matter for Tor, but the existing comments (compress_lzma.c:298) do mention specifically that we are trying to keep the switch-enum warning in place so that we have an exhaustive list of the return codes that were defined at compile-time.
It may be worth reconsidering this approach and going with `-Wno-error=switch-enum` here, but we can safely add the new enums under LZMA_VERSION guards at the cost of just a little more verbosity in this code. That's the approach I would implement unless others have objections to the clutter.

(Assignee: Micah Elizabeth Scott)
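The LZMA_VERSION-guarded approach could look roughly like this (a sketch only: the numeric thresholds, which release introduced which value, and the message strings are all assumptions to be verified against lzma.h):

```c
#include <lzma.h>

/* Sketch of version-guarded cases.  LZMA_VERSION encodes
 * major * 10000000 + minor * 10000 + patch * 10 + stability, so 5.3.1alpha
 * would be 50030010 -- verify these thresholds against lzma.h before use. */
static const char *
lzma_error_str_sketch(lzma_ret error)
{
  switch (error) {
    case LZMA_OK:
      return "Operation completed successfully";
    /* ... existing cases elided ... */
#if LZMA_VERSION >= 50030010 /* assumed: LZMA_SEEK_NEEDED added in 5.3.1 */
    case LZMA_SEEK_NEEDED:
      return "Input seek needed";
#endif
#if LZMA_VERSION >= 50030020 /* assumed: reserved values added in 5.3.2 */
    case LZMA_RET_INTERNAL1: case LZMA_RET_INTERNAL2:
    case LZMA_RET_INTERNAL3: case LZMA_RET_INTERNAL4:
    case LZMA_RET_INTERNAL5: case LZMA_RET_INTERNAL6:
    case LZMA_RET_INTERNAL7: case LZMA_RET_INTERNAL8:
      return "Reserved liblzma return value";
#endif
    default:
      return "Unknown liblzma error";
  }
}
```

Keeping the explicit cases (rather than relying on `default:`) is what preserves the exhaustiveness check that `-Wswitch-enum` provides.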