Tor issues: https://gitlab.torproject.org/tpo/core/tor/-/issues

https://gitlab.torproject.org/tpo/core/tor/-/issues/40924
tor_bug_reached counter does not increase as expected (2024-03-26, applied_privacy)

### Summary
When we see this in the log file we would assume the tor_bug_reached metric is incremented, but it is not:
```
conflux_validate_legs(): Bug: Number of legs is above maximum of 2 allowed: 3#012 (on Tor 0.4.9.0-alpha-dev )
```
### What is the current bug behavior?
The tor_bug_reached counter does not increase in this example.
### What is the expected behavior?
The tor_bug_reached counter should increase, as implemented in #40839.
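For illustration, the intended relationship between `Bug:` log lines and the metric can be sketched like this (plain Python with hypothetical names, not C-tor's metrics code): every log line carrying the `Bug:` marker should correspond to one increment of `tor_bug_reached`.

```python
import re

# Hypothetical sketch, not C-tor code: every "Bug: " log event should bump
# the tor_bug_reached counter, so the metric and the log stay in sync.
BUG_MARKER = re.compile(r"\bBug: ")

def expected_tor_bug_reached(log_lines):
    """Number of times the tor_bug_reached counter should have increased."""
    return sum(1 for line in log_lines if BUG_MARKER.search(line))

log = [
    "conflux_validate_legs(): Bug: Number of legs is above maximum of 2 allowed: 3",
    "Heartbeat: Tor's uptime is 41 days 8:00 hours",
]
print(expected_tor_bug_reached(log))  # prints 1
```

The bug report above is exactly a case where the log shows one `Bug:` line but the counter stays at 0.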
### Environment
```
tor --version
Tor version 0.4.9.0-alpha-dev.
This build of Tor is covered by the GNU General Public License (https://www.gnu.org/licenses/gpl-3.0.en.html)
Tor is running on Linux with Libevent 2.1.12-stable, OpenSSL 3.0.11, Zlib 1.2.13, Liblzma 5.4.1, Libzstd 1.5.4 and Glibc 2.36 as libc.
Tor compiled with GCC version 12.2.0
```
OS: Debian 12
installation method: deb.torproject.org
package version: 0.4.9.0-alpha-dev-20240325T020413Z-1~d12.bookworm+1

https://gitlab.torproject.org/tpo/core/tor/-/issues/40921
tor crash: conflux_note_cell_sent (2024-03-07, applied_privacy)

### Summary
tor exit relay crashes
### What is the current bug behavior?
tor crashes
### What is the expected behavior?
should not crash
### Environment
- version: 0.4.8.10-1~d12.bookworm+1
- Debian 12
- deb.torproject.org
### Relevant logs and/or screenshots
```
Bug: Tor 0.4.8.10: Assertion leg failed in conflux_note_cell_sent at ../src/core/or/conflux.c:534: . Stack trace: (on Tor 0.4.8.10 )
tor_assertion_failed_(): Bug: ../src/core/or/conflux.c:534: conflux_note_cell_sent: Assertion leg failed; aborting. (on Tor 0.4.8.10 )
Bug: /usr/bin/tor(log_backtrace_impl+0x57) [0x555806726ea7] (on Tor 0.4.8.10 )
Bug: /usr/bin/tor(tor_assertion_failed_+0x147) [0x555806731c57] (on Tor 0.4.8.10 )
tor[2919]: ============================================================ T= 1709661994
tor[2919]: Tor 0.4.8.10 died: Caught signal 11
tor[2919]: /usr/bin/tor(+0xead6a)[0x555806726d6a]
Bug: /usr/bin/tor(conflux_note_cell_sent+0xf4) [0x5558067c3374] (on Tor 0.4.8.10 )
tor[2919]: /lib/x86_64-linux-gnu/libc.so.6(abort+0x170)[0x7fa77056950f]
tor[2919]: /lib/x86_64-linux-gnu/libc.so.6(abort+0x170)[0x7fa77056950f]
tor[2919]: /usr/bin/tor(+0xeb5de)[0x5558067275de]
tor[2919]: /usr/bin/tor(+0xf5eee)[0x555806731eee]
Bug: /usr/bin/tor(relay_send_command_from_edge_+0x1ad) [0x5558066d358d] (on Tor 0.4.8.10 )
tor[2919]: /usr/bin/tor(conflux_note_cell_sent+0xf9)[0x5558067c3379]
tor[2919]: /usr/bin/tor(relay_send_command_from_edge_+0x1ad)[0x5558066d358d]
tor[2919]: /usr/bin/tor(connection_edge_send_command+0x73)[0x5558066d3a63]
tor[2919]: /usr/bin/tor(connection_edge_finished_connecting+0xa8)[0x5558067e1328]
Bug: /usr/bin/tor(connection_edge_send_command+0x73) [0x5558066d3a63] (on Tor 0.4.8.10 )
tor[2919]: /usr/bin/tor(+0x19dff8)[0x5558067d9ff8]
tor[2919]: /usr/bin/tor(connection_handle_write+0x38)[0x5558067da458]
tor[2919]: /usr/bin/tor(+0x70123)[0x5558066ac123]
tor[2919]: /lib/x86_64-linux-gnu/libevent-2.1.so.7(+0x21585)[0x7fa770e5c585]
tor[2919]: /lib/x86_64-linux-gnu/libevent-2.1.so.7(event_base_loop+0x49f)[0x7fa770e5cc1f]
tor[2919]: /usr/bin/tor(do_main_loop+0xf1)[0x5558066ad671]
tor[2919]: /usr/bin/tor(tor_run_main+0x1e5)[0x5558066a8fa5]
Bug: /usr/bin/tor(connection_edge_finished_connecting+0xa8) [0x5558067e1328] (on Tor 0.4.8.10 )
tor[2919]: /usr/bin/tor(tor_main+0x59)[0x5558066a5329]
tor[2919]: /usr/bin/tor(main+0x19)[0x5558066a4ee9]
tor[2919]: /lib/x86_64-linux-gnu/libc.so.6(+0x2724a)[0x7fa77056a24a]
tor[2919]: /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x85)[0x7fa77056a305]
tor[2919]: /usr/bin/tor(_start+0x21)[0x5558066a4f31]
Bug: /usr/bin/tor(+0x19dff8) [0x5558067d9ff8] (on Tor 0.4.8.10 )
Bug: /usr/bin/tor(connection_handle_write+0x38) [0x5558067da458] (on Tor 0.4.8.10 )
Bug: /usr/bin/tor(+0x70123) [0x5558066ac123] (on Tor 0.4.8.10 )
Bug: /lib/x86_64-linux-gnu/libevent-2.1.so.7(+0x21585) [0x7fa770e5c585] (on Tor 0.4.8.10 )
Bug: /lib/x86_64-linux-gnu/libevent-2.1.so.7(event_base_loop+0x49f) [0x7fa770e5cc1f] (on Tor 0.4.8.10 )
Bug: /usr/bin/tor(do_main_loop+0xf1) [0x5558066ad671] (on Tor 0.4.8.10 )
Bug: /usr/bin/tor(tor_run_main+0x1e5) [0x5558066a8fa5] (on Tor 0.4.8.10 )
Bug: /usr/bin/tor(tor_main+0x59) [0x5558066a5329] (on Tor 0.4.8.10 )
Bug: /usr/bin/tor(main+0x19) [0x5558066a4ee9] (on Tor 0.4.8.10 )
Bug: /lib/x86_64-linux-gnu/libc.so.6(+0x2724a) [0x7fa77056a24a] (on Tor 0.4.8.10 )
Bug: /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x85) [0x7fa77056a305] (on Tor 0.4.8.10 )
Bug: /usr/bin/tor(_start+0x21) [0x5558066a4f31] (on Tor 0.4.8.10 )
systemd[1]: tor: Main process exited, code=exited, status=127/n/a
```

Label: Tor: 0.4.8.x-post-stable; assignee: Mike Perry

https://gitlab.torproject.org/tpo/core/tor/-/issues/40920
Unprepared for proof-of-work extensibility (2024-03-05, Micah Elizabeth Scott)

At the time the C-tor implementation of onion service proof of work came about, we didn't have a clear specification for how multiple potential PoW schemes would interact.
We're starting to specify that behavior more carefully, and it's now clear that the C-tor implementation will be insufficient no matter what.
C-tor currently requires that "pow-params" appear at most once, always with exactly 4 arguments and no binary object.
We do expect that multiple pow-params lines will be necessary for servers that support multiple algorithms simultaneously. Those new schemes will likely need a different number of parameters, and we may want to allow binary objects if those parameter sets need to be too large for one line.
If we plan to deploy new PoW schemes during the remaining lifetime of C-tor, it needs to be updated to successfully ignore pow-params lines with schemes it doesn't recognize.
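The required change can be sketched as follows (a Python illustration assuming `v1` is the only currently recognized scheme; this is not C-tor's actual descriptor parser):

```python
# Illustrative sketch, not C-tor's parser: keep pow-params lines whose
# scheme we recognize and silently skip the rest, so descriptors that
# advertise future schemes still parse.
KNOWN_POW_SCHEMES = {"v1"}

def usable_pow_params(descriptor_lines):
    kept = []
    for line in descriptor_lines:
        if not line.startswith("pow-params "):
            continue
        scheme = line.split()[1]
        if scheme in KNOWN_POW_SCHEMES:
            kept.append(line)
        # Unknown scheme, extra arguments, or an attached binary object:
        # ignore this line instead of rejecting the whole descriptor.
    return kept

desc = [
    "pow-params v1 SEED-B64 EFFORT EXPIRATION",
    "pow-params v2-future A B C D E F",  # must not break parsing
]
print(usable_pow_params(desc))  # keeps only the v1 line
```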
Related:
- https://spec.torproject.org/rend-spec/hsdesc-encrypt.html?highlight=pow-params#second-layer-plaintext
- https://gitlab.torproject.org/tpo/core/torspec/-/merge_requests/254
- https://gitlab.torproject.org/tpo/core/torspec/-/issues/256

Assignee: Micah Elizabeth Scott

https://gitlab.torproject.org/tpo/core/tor/-/issues/40919
consider not counting hs directory bw as directory bytes written (2024-03-05, trinity-1686a)

bbb chat log:
```
arma: the dirbyte asymmetry can happen with many fresh tor clients bootstrapping over and over
trinity-1686a: fyi, i'm trying to add metrics to know if the asymmetry is many bootstrap, or fetching hs descriptor, or a bit of both
arma: trinity: i wonder if relays should include onion service descriptor writes in bwhist_note_dir_bytes_written(). that seems like maybe it should be private.
arma: (it looks from first glance like yes-they-do-include-it currently)
trinity-1686a: arma: it would be kinda annoying to ignore hs for the read count (from relay pov) because we don't know yet what it is when we do that part of the accounting
arma: but, for a relay that isn't a guard, it will serve very little *other* dir info besides hsdir info? maybe? that seems scary
```
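One way to read arma's suggestion (purely illustrative Python with hypothetical names; `bwhist_note_dir_bytes_written()` is the real C-tor function under discussion, but this sketch is not its implementation):

```python
# Illustrative sketch only: split directory-byte accounting so that
# hidden-service descriptor bytes can be kept out of the publicly
# reported dir-write history, as suggested in the chat log above.
class DirByteAccounting:
    def __init__(self):
        self.dir_written = 0    # would feed the public bandwidth history
        self.hsdir_written = 0  # kept private

    def note_dir_bytes_written(self, nbytes, is_hsdesc=False):
        if is_hsdesc:
            self.hsdir_written += nbytes  # don't leak via dir-write stats
        else:
            self.dir_written += nbytes

acct = DirByteAccounting()
acct.note_dir_bytes_written(512)                    # e.g. consensus serve
acct.note_dir_bytes_written(20000, is_hsdesc=True)  # onion-service descriptor
print(acct.dir_written, acct.hsdir_written)  # prints: 512 20000
```

The catch trinity points out remains: on the read side, the relay doesn't yet know a request is for an hs descriptor at the point where the bytes are counted.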
cc @arma

https://gitlab.torproject.org/tpo/core/tor/-/issues/40916
Tor socks and requests to IPs "Your application (using socks4 to port X) is giving Tor only an IP address." (2024-02-24, agowa338)

Hi,
~~was there a recent change that tor now drops connections to IPs instead of just logging a warning and passing them?~~ (Edit: No, I just added `SafeSocks 1` and forgot to restart tor for a while, mostly docs issue now. See below)
In my logs I see countless lines like this:
> [warn] Your application (using socks4 to port X) is giving Tor only an IP address. Applications that do DNS resolves themselves may leak information. Consider using Socks4A (e.g. via privoxy or socat) instead. For more information, please see https://2019.www.torproject.org/docs/faq.html.en#WarningsAboutSOCKSandDNSInformationLeaks. **Rejecting**. [11 similar message(s) suppressed in last 60 seconds]
When I look at the documentation at the provided link, it doesn't say anything about "Rejecting" like the log message above does. Instead, that page **only** talks about this being a warning.
> Tor ships with a program called tor-resolve that can use the Tor network to look up hostnames remotely; if you resolve hostnames to IPs with tor-resolve, then pass the IPs to your applications, you'll be fine. (Tor will still give the warning, but now you know what it means.)
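For context, the reason SOCKS4 "gives Tor only an IP address" is visible in the wire format: a SOCKS4 CONNECT carries a raw 4-byte IPv4 address, so the application must have resolved the hostname locally, while SOCKS4a sends 0.0.0.x plus the hostname so Tor can resolve it remotely. A sketch of both request packets (per the SOCKS4/4a protocol, independent of Tor):

```python
import socket
import struct

# SOCKS4 request: VN=4, CD=1 (CONNECT), DSTPORT, DSTIP, USERID, NUL.
# The client had to resolve the name itself, which is the DNS leak the
# warning is about.
def socks4_request(ip, port, userid=b""):
    return struct.pack(">BBH", 4, 1, port) + socket.inet_aton(ip) + userid + b"\x00"

# SOCKS4a: DSTIP 0.0.0.1 signals "hostname follows after the userid field",
# so the proxy (Tor) does the resolution remotely.
def socks4a_request(hostname, port, userid=b""):
    return (struct.pack(">BBH", 4, 1, port) + socket.inet_aton("0.0.0.1")
            + userid + b"\x00" + hostname.encode() + b"\x00")

leaky = socks4_request("93.184.216.34", 443)  # app resolved the name itself
safe = socks4a_request("example.com", 443)    # Tor resolves remotely
print(leaky.hex())
print(safe.hex())
```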
config:
```
AvoidDiskWrites 1
FetchHidServDescriptors 1
FetchServerDescriptors 1
FetchUselessDescriptors 1
HardwareAccel 1
SafeLogging 1
Sandbox 1
SafeSocks 1
DormantCanceledByStartup 1
DormantClientTimeout 30 days
SOCKSPort 9050 IPv6Traffic ExtendedErrors
TransPort 8080 IPv6Traffic
Log notice file /var/log/tor/notices.log
DataDirectory /var/lib/tor
ORPort 9001
DirPort 9030
ExitRelay 0
IPv6Exit 0
ReducedExitPolicy 0
ExitPolicy reject *:* # no exits allowed
BridgeRelay 1
```
~~Is there a new flag to add to that SOCKSPort line to have tor just accept such connections anyway?~~ My issue is that "the application" tries to reach an explicit IP (i.e. it doesn't have a DNS name to begin with).
EDIT: I found my issue. It was `SafeSocks 1` 🤦
But I'm leaving this open as a request to add this to the referenced faq page as well.
EDIT2: Could we either move SafeSocks into the flags of SOCKSPort or add an override via a flag? E.g. to be able to have one port with and one without `SafeSocks` (same for `WarnUnsafeSocks` and `TestSocks`)?

https://gitlab.torproject.org/tpo/core/tor/-/issues/40915
Documentation, improve directory authority server section in man pages (2024-02-23, agowa338)

### Summary
The current documentation for the directory authority server options is a bit confusing and doesn't say whether users can "help the public tor network" by running their own directory server. Other sources (e.g. search results, including mailing list and blog posts) are not really helpful either: there have been changes since they were written, and it is unclear whether their information is still correct. Some sources even claim the feature isn't used at all in newer tor versions, which is probably oversimplified, a misunderstanding, or just wrong. The current wording of the config option explanations is also open to interpretation at some important points and needs a minor improvement.
Especially to newer users this situation can be somewhat confusing.
Initially I wanted to directly make a MR with an improved version like this:
<blockquote><pre>
DIRECTORY AUTHORITY SERVER OPTIONS
The following options enable operation as a directory authority, and
control how Tor behaves as a directory authority. You <strong><del>should not need</del> <ins>only need</ins></strong>
to adjust any of them if <strong><del>you’re running a</del> <ins>you want to have a separate independent<br/>private "fork" of the Tor network (e.g. for a local lab environment) or<br/>if you're one of the trusted operators of one of the hard coded directory authority<br/>servers. These options are NOT needed for running a</ins></strong> regular relay or exit server
on the public Tor network.
AuthoritativeDirectory 0|1
When this option is set to 1, Tor operates as an authoritative
directory server. Instead of caching the directory, it generates
its own list of good servers, signs it, and sends that to the
clients. <strong><del>Unless the clients already have you listed as a trusted
directory, you probably do not want to set this option.</del> <ins>See DirAuthority
for information about how to configure a client to use and trust
this directory server.</ins></strong>
BridgeAuthoritativeDir 0|1
When this option is set in addition to AuthoritativeDirectory, Tor
accepts and serves server descriptors, but it caches and serves the
main networkstatus documents rather than generating its own.
(Default: 0)
</pre></blockquote>
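For what it's worth, the "See DirAuthority" pointer in the proposed wording could be backed by a concrete example along these lines (all addresses and fingerprints are placeholders; check the DirAuthority entry in the man page for the exact flag syntax before relying on this):

```
# On the private authority itself:
AuthoritativeDirectory 1
V3AuthoritativeDirectory 1

# On clients that should use and trust that authority
# (placeholder address/fingerprints; flag syntax per the man page):
DirAuthority mydirauth orport=9001 v3ident=<v3 identity fingerprint> 203.0.113.5:9030 <relay fingerprint>
```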
But then I got stuck on the ambiguity of the use and purpose of "BridgeAuthoritativeDir" myself.
Does this mean:
a) when `AuthoritativeDirectory` and `BridgeAuthoritativeDir` are both enabled, a node would indeed "help the public Tor network" by being part of a kind of "decentralized CDN" serving a copy of the data from the hard-coded directory authority servers? If yes, how does the node discovery work, i.e. how do clients get to know about these nodes? And is this something like a Kademlia DHT, peer-list exchange as with torrents, or a form of rsync/replication? (A possible use case would be reducing the load on the authoritative directory servers and improving response time by having a big distributed swarm of nodes to choose from.)
b) It provides a private/internal authoritative directory, but instead of starting with an empty directory it starts with a clone of the public one. It won't be known to anything on the public network and needs to be explicitly configured via DirAuthority in the client-side configuration. (A possible use case could be some service-provider-style hidden service deployment.)
c) Something else entirely?
A would make sense, but having it disabled by default and not mentioned at all in the relay operators guide kinda speaks against it. It could be B, but that would be an odd design choice compared to A.
But what still doesn't add up: some sources say that all directory authority servers are selected and governed by a distributed, decentralized, consensus-based system, while other sources simultaneously say that the authoritative directory servers for the public Tor network are hard coded, so...
Also the man page for the `FallbackDir` option implies that Tor has some kind of discovery algorithm for such cache nodes. But then why wouldn't it be mentioned in the relay operators guide at all?
Tl;Dr: I still don't quite understand this config item. Could someone please help to reword it to something less confusing?

https://gitlab.torproject.org/tpo/core/tor/-/issues/40913
default clients, including tor browser, fail to bootstrap on an ipv6-only network (2024-02-19, Roger Dingledine)

If you run your Tor Browser on a network that only has ipv6, it will never bootstrap.
There are ipv6 addresses in the fallback-dirs list, but it seems that Tor never tries them.
Bug reported at CCC by an ISP operator from Ukraine, and confirmed today by @trinity-1686a.
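A minimal sketch of the suspected failure mode (illustrative Python logic, not C-tor's actual fallback-dir selection code): if candidate selection only ever tries a fallback's IPv4 address unless prefer-IPv6 is set, an IPv6-only network never yields a usable fallback.

```python
# Illustrative only, not C-tor's selection logic: default clients try the
# IPv4 address of each fallback; the IPv6 address is only tried when
# prefer_ipv6 is set (cf. the ClientPreferIPv6ORPort workaround).
fallbacks = [
    {"ipv4": "192.0.2.10:443", "ipv6": "[2001:db8::10]:443"},
    {"ipv4": "192.0.2.11:443", "ipv6": None},
]

def usable_address(fb, have_ipv4, have_ipv6, prefer_ipv6=False):
    if prefer_ipv6 and have_ipv6 and fb["ipv6"]:
        return fb["ipv6"]
    if have_ipv4:
        return fb["ipv4"]
    return None  # IPv6-only network with default settings: nothing to try

# IPv6-only network, default settings: bootstrap can never start.
print([usable_address(fb, have_ipv4=False, have_ipv6=True) for fb in fallbacks])
# With prefer_ipv6 set, the IPv6 fallback gets tried.
print([usable_address(fb, have_ipv4=False, have_ipv6=True, prefer_ipv6=True)
       for fb in fallbacks])
```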
trinity says that if you set ClientPreferIPv6ORPort 1, then things do work. But I have vague memories of us setting prefer-ipv6 by default in Tor Browser and regretting it and turning it off again -- perhaps because it never tries ipv4 in some circumstances that happen in practice.

https://gitlab.torproject.org/tpo/core/tor/-/issues/40912
some tests fail on linux/aarch32 (2024-02-29, dasj19)

Hello, I tried building on arm 32-bit and I can successfully build tor on NixOS when skipping the tests.
However with the tests enabled the build fails.
This is the output from build logs:
```
# sandbox/is_active: [forking] Feb 06 17:43:14.224 [err] install_syscall_filter(): Bug: (Sandbox) failed to load: -125 (Operation canceled)! Are you sure that your kernel has seccomp2 support? The sandbox won't work without it. (on Tor 0.4.8.10 )
# FAIL src/test/test_sandbox.c:146: assert(sandbox_is_active())
# [is_active FAILED]
# sandbox/open_filename: [forking] Feb 06 17:43:14.279 [err] install_syscall_filter(): Bug: (Sandbox) failed to load: -125 (Operation canceled)! Are you sure that your kernel has seccomp2 support? The sandbox won't work without it. (on Tor 0.4.8.10 )
# FAIL src/test/test_sandbox.c:170: assert(fd OP_EQ -1): 9 vs -1
# [open_filename FAILED]
# sandbox/opendir_dirname: [forking] Feb 06 17:43:14.343 [err] install_syscall_filter(): Bug: (Sandbox) failed to load: -125 (Operation canceled)! Are you sure that your kernel has seccomp2 support? The sandbox won't work without it. (on Tor 0.4.8.10 )
# FAIL src/test/test_sandbox.c:271: assert(dir OP_EQ NULL): 0xdf8300 vs (nil)
# [opendir_dirname FAILED]
# sandbox/openat_filename: [forking] Feb 06 17:43:14.400 [err] install_syscall_filter(): Bug: (Sandbox) failed to load: -125 (Operation canceled)! Are you sure that your kernel has seccomp2 support? The sandbox won't work without it. (on Tor 0.4.8.10 )
# FAIL src/test/test_sandbox.c:249: assert(fd OP_EQ -1): 9 vs -1
# [openat_filename FAILED]
# sandbox/chmod_filename: [forking] Feb 06 17:43:14.493 [err] install_syscall_filter(): Bug: (Sandbox) failed to load: -125 (Operation canceled)! Are you sure that your kernel has seccomp2 support? The sandbox won't work without it. (on Tor 0.4.8.10 )
# FAIL src/test/test_sandbox.c:190: assert(rc OP_EQ -1): 0 vs -1
# [chmod_filename FAILED]
# sandbox/chown_filename: [forking] Feb 06 17:43:14.561 [err] install_syscall_filter(): Bug: (Sandbox) failed to load: -125 (Operation canceled)! Are you sure that your kernel has seccomp2 support? The sandbox won't work without it. (on Tor 0.4.8.10 )
# FAIL src/test/test_sandbox.c:208: assert(rc OP_EQ -1): 0 vs -1
# [chown_filename FAILED]
# sandbox/rename_filename: [forking] Feb 06 17:43:14.629 [err] install_syscall_filter(): Bug: (Sandbox) failed to load: -125 (Operation canceled)! Are you sure that your kernel has seccomp2 support? The sandbox won't work without it. (on Tor 0.4.8.10 )
# FAIL src/test/test_sandbox.c:228: assert(rc OP_EQ -1): 0 vs -1
# [rename_filename FAILED]
```
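As a small decoding aid for the failures above: the `-125 (Operation canceled)` returned by `install_syscall_filter()` is Linux errno 125, i.e. `ECANCELED`, which matches the "Operation canceled" text:

```python
import errno

# The "-125 (Operation canceled)" in the sandbox failures corresponds to
# Linux errno 125 (ECANCELED).
print(errno.errorcode[errno.ECANCELED])  # on Linux: 'ECANCELED'
```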
The NixOS build script can be found here: https://github.com/NixOS/nixpkgs/blob/master/pkgs/tools/security/tor/default.nix
I already submitted a PR which disables the tests for linux/aarch32 (https://github.com/NixOS/nixpkgs/pull/286792/files), but ideally the fix would land upstream.

https://gitlab.torproject.org/tpo/core/tor/-/issues/40909
Relay asymmetry between tx and rx in presence of high amount of circuits (2024-01-22, Felix)

Hi
I observed an asymmetry between tx and rx on `Doedelkiste08 E388F7BD196F5195AEF114552585152EA6942329`.
- normal is 20-30 GB/hour in each direction but this time it is 101 GB/hour on tx and 13 GB/hour rx.
- the whole thing was framed by some 100k open circuits.
- one bug was captured.
- during that effect the memory went from normal 3.3 GB to 10 GB until restart of the Tor daemon
```plaintext
Tor 0.4.8.9 running on FreeBSD with Libevent 2.1.12-stable, OpenSSL LibreSSL 3.7.3, Zlib 1.2.13, Liblzma 5.4.1, Libzstd 1.5.5 and BSD 1302001 as libc.
```
https://doedelkiste.de/1068942dede50577fb80c36fcd7240146042e6b7f4d41860d9a08efaf7a0ae68/2024-01-22_metrics_Doedelkiste08.jpg
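The repeated "We're low on memory ... Killing circuits" notices in the log below come from Tor's MaxMemInQueues out-of-memory handler. Roughly (an illustrative Python sketch, not C-tor's actual code, which ranks victims by the age of their queued cells): kill the worst-queued circuits until enough bytes are reclaimed.

```python
# Illustrative sketch of the MaxMemInQueues OOM behavior seen in the log:
# reclaim memory by killing badly-queued circuits until the target is met.
# Not C-tor's real algorithm or data model.
def oom_kill_circuits(circuits, bytes_to_free):
    """circuits: iterable of (circ_id, queued_bytes); returns (killed_ids, freed)."""
    killed, freed = [], 0
    for circ_id, queued in sorted(circuits, key=lambda c: c[1], reverse=True):
        if freed >= bytes_to_free:
            break
        killed.append(circ_id)
        freed += queued
    return killed, freed

circs = [(1, 50_000), (2, 500), (3, 120_000), (4, 9_000)]
print(oom_kill_circuits(circs, 150_000))  # prints ([3, 1], 170000)
```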
```plaintext
Special columns
G GB Bytestx / hour
H GB Bytesrx / hour
I ConnIp4rx / hour
J ConnIp6rx / hour
K ConnIp4tx / hour
L ConnIp6tx / hour
Day 2024-01-10
YYYY MM DD hh mm circs G H I J K L
----------------------------------------------
2024 01 10 00 55 19396 21 21 5523 311 1265 340
2024 01 10 01 55 52167 24 22 11822 494 1874 417
2024 01 10 02 55 86601 22 22 9141 667 2965 640
2024 01 10 03 55 87786 21 20 9013 621 3194 745
2024 01 10 04 55 475880 24 23 14957 2151 3463 789
2024 01 10 05 55 602250 63 23 17184 2961 3082 675
2024 01 10 06 55 683559 101 13 4192 372 3327 554 <--- 101 GBtx 13 GBrx
2024 01 10 07 55 584892 101 13 8604 435 3004 573 <--- 101 GBtx 13 GBrx
2024 01 10 08 55 512506 101 12 10954 716 3532 584 <--- 101 GBtx 12 GBrx
2024 01 10 09 55 515670 101 13 11107 703 2831 423 <--- 101 GBtx 13 GBrx
2024 01 10 10 55 509373 101 12 12487 693 2580 456 <--- 101 GBtx 12 GBrx
2024 01 10 11 55 433452 101 12 11618 778 2614 509 <--- 101 GBtx 12 GBrx
2024 01 10 12 55 459210 101 12 11142 690 2683 468 <--- 101 GBtx 12 GBrx
2024 01 10 13 55 445861 101 13 12718 1123 2196 341 <--- 101 GBtx 13 GBrx
2024 01 10 14 55 129114 64 23 15763 2258 2370 414
2024 01 10 15 55 107969 28 26 9466 971 2480 460
2024 01 10 16 55 117238 23 22 10619 903 3253 580
2024 01 10 17 55 119758 26 25 11640 955 3726 673
2024 01 10 18 55 112535 27 26 10459 864 3241 649
2024 01 10 19 55 111949 24 23 11207 943 3601 671
2024 01 10 20 55 118081 25 24 10740 816 3422 636
2024 01 10 21 55 43387 25 23 12064 638 2587 513
2024 01 10 22 55 45630 26 26 8244 468 1634 304
2024 01 10 23 55 79891 33 31 14421 711 2386 382
```
```plaintext
Jan 10 05:29:27.000 [notice] We're low on memory (cell queues total alloc: 1854500736 buffer total alloc: 145281024, tor compress total alloc: 0 (zlib: 0, zstd: 0, lzma: 0), rendezvous cache total alloc: 147652723). Killing circuits withover-long queues. (This behavior is controlled by MaxMemInQueues.)
Jan 10 05:29:27.000 [notice] Removed 216594576 bytes by killing 16262 circuits; 719620 circuits remain alive. Also killed 0 non-linked directory connections. Killed 0 edge connections
Jan 10 05:29:37.000 [notice] We're low on memory (cell queues total alloc: 1817612016 buffer total alloc: 188672000, tor compress total alloc: 0 (zlib: 0, zstd: 0, lzma: 0), rendezvous cache total alloc: 147706369). Killing circuits withover-long queues. (This behavior is controlled by MaxMemInQueues.)
Jan 10 05:29:37.000 [notice] Removed 223153920 bytes by killing 12759 circuits; 707641 circuits remain alive. Also killed 0 non-linked directory connections. Killed 0 edge connections
Jan 10 05:29:50.000 [notice] We're low on memory (cell queues total alloc: 1843384752 buffer total alloc: 155586560, tor compress total alloc: 0 (zlib: 0, zstd: 0, lzma: 0), rendezvous cache total alloc: 147736526). Killing circuits withover-long queues. (This behavior is controlled by MaxMemInQueues.)
Jan 10 05:29:50.000 [notice] Removed 215864352 bytes by killing 13793 circuits; 693209 circuits remain alive. Also killed 0 non-linked directory connections. Killed 0 edge connections
...
Jan 10 07:01:43.000 [notice] We're low on memory (cell queues total alloc: 1848156288 buffer total alloc: 138866688, tor compress total alloc: 9630543 (zlib: 43264, zstd: 9587151, lzma: 0), rendezvous cache total alloc: 151793907). Killing circuits withover-long queues. (This behavior is controlled by MaxMemInQueues.)
Jan 10 07:01:43.000 [notice] Removed 217691232 bytes by killing 17176 circuits; 671565 circuits remain alive. Also killed 0 non-linked directory connections. Killed 0 edge connections
Jan 10 07:02:00.000 [notice] We're low on memory (cell queues total alloc: 1877210544 buffer total alloc: 107673600, tor compress total alloc: 9760383 (zlib: 173056, zstd: 9587151, lzma: 0), rendezvous cache total alloc: 151770912). Killing circuits withover-long queues. (This behavior is controlled by MaxMemInQueues.)
Jan 10 07:02:00.000 [notice] Removed 215662800 bytes by killing 17873 circuits; 664505 circuits remain alive. Also killed 0 non-linked directory connections. Killed 1 edge connections
Jan 10 07:02:00.000 [warn] connection_edge_about_to_close: Bug: (Harmless.) Edge connection (marked at src/core/or/circuitlist.c:2747) hasn't sent end yet? (on Tor 0.4.8.9 )
Jan 10 07:02:00.000 [warn] tor_bug_occurred_: Bug: src/core/or/connection_edge.c:1086: connection_edge_about_to_close: This line should not have been reached. (Future instances of this warning will be silenced.) (on Tor 0.4.8.9 )
Jan 10 07:02:00.000 [warn] Bug: Tor 0.4.8.9: Line unexpectedly reached at connection_edge_about_to_close at src/core/or/connection_edge.c:1086. Stack trace: (on Tor 0.4.8.9 )
Jan 10 07:02:00.000 [warn] Bug: 0x281cee7eebfc <log_backtrace_impl+0x5c> at /usr/local/bin/tor (on Tor 0.4.8.9 )
Jan 10 07:02:00.000 [warn] Bug: 0x281cee7fd024 <tor_bug_occurred_+0x1c4> at /usr/local/bin/tor (on Tor 0.4.8.9 )
Jan 10 07:02:00.000 [warn] Bug: 0x281cee64d022 <connection_exit_about_to_close+0x82> at /usr/local/bin/tor (on Tor 0.4.8.9 )
Jan 10 07:02:00.000 [warn] Bug: 0x281cee649651 <tor_mainloop_free_all+0x691> at /usr/local/bin/tor (on Tor 0.4.8.9 )
Jan 10 07:02:00.000 [warn] Bug: 0x281cee649218 <tor_mainloop_free_all+0x258> at /usr/local/bin/tor (on Tor 0.4.8.9 )
Jan 10 07:02:00.000 [warn] Bug: 0x2825113c733d <event_base_assert_ok_nolock_+0xbdd> at /usr/local/lib/libevent-2.1.so.7 (on Tor 0.4.8.9 )
Jan 10 07:02:00.000 [warn] Bug: 0x2825113c31ac <event_base_loop+0x54c> at /usr/local/lib/libevent-2.1.so.7 (on Tor 0.4.8.9 )
Jan 10 07:02:00.000 [warn] Bug: 0x281cee648d6a <do_main_loop+0x10a> at /usr/local/bin/tor (on Tor 0.4.8.9 )
Jan 10 07:02:00.000 [warn] Bug: 0x281cee635ae8 <tor_run_main+0x128> at /usr/local/bin/tor (on Tor 0.4.8.9 )
Jan 10 07:02:00.000 [warn] Bug: 0x281cee634464 <tor_main+0x54> at /usr/local/bin/tor (on Tor 0.4.8.9 )
Jan 10 07:03:21.000 [notice] We're low on memory (cell queues total alloc: 1878572784 buffer total alloc: 115148800, tor compress total alloc: 259680 (zlib: 259584, zstd: 0, lzma: 0), rendezvous cache total alloc: 151802305). Killing circuits withover-long queues. (This behavior is controlled by MaxMemInQueues.)
Jan 10 07:03:22.000 [notice] Removed 215030640 bytes by killing 19693 circuits; 688440 circuits remain alive. Also killed 0 non-linked directory connections. Killed 0 edge connections
Jan 10 07:03:36.000 [notice] We're low on memory (cell queues total alloc: 1882048608 buffer total alloc: 111587328, tor compress total alloc: 259680 (zlib: 259584, zstd: 0, lzma: 0), rendezvous cache total alloc: 151820909). Killing circuits withover-long queues. (This behavior is controlled by MaxMemInQueues.)
Jan 10 07:03:36.000 [notice] Removed 214973616 bytes by killing 14621 circuits; 684012 circuits remain alive. Also killed 0 non-linked directory connections. Killed 0 edge connections
Jan 10 07:03:57.000 [notice] We're low on memory (cell queues total alloc: 1883993760 buffer total alloc: 105201664, tor compress total alloc: 4455067 (zlib: 346112, zstd: 4108779, lzma: 0), rendezvous cache total alloc: 151853380). Killing circuits withover-long queues. (This behavior is controlled by MaxMemInQueues.)
...
Jan 11 18:56:46.000 [notice] We're low on memory (cell queues total alloc: 1654239312 buffer total alloc: 426717184, tor compress total alloc: 0 (zlib: 0, zstd: 0, lzma: 0), rendezvous cache total alloc: 65574750). Killing circuits withover-long queues. (This behavior is controlled by MaxMemInQueues.)
Jan 11 18:56:46.000 [notice] Removed 214757664 bytes by killing 13902 circuits; 854250 circuits remain alive. Also killed 0 non-linked directory connections. Killed 0 edge connections
Jan 11 18:57:09.000 [notice] We're low on memory (cell queues total alloc: 1807050960 buffer total alloc: 281556992, tor compress total alloc: 0 (zlib: 0, zstd: 0, lzma: 0), rendezvous cache total alloc: 65565646). Killing circuits withover-long queues. (This behavior is controlled by MaxMemInQueues.)
Jan 11 18:57:10.000 [notice] Removed 222395232 bytes by killing 16123 circuits; 830074 circuits remain alive. Also killed 0 non-linked directory connections. Killed 0 edge connections
Jan 11 18:58:55.000 [notice] We're low on memory (cell queues total alloc: 1868207088 buffer total alloc: 297957376, tor compress total alloc: 0 (zlib: 0, zstd: 0, lzma: 0), rendezvous cache total alloc: 65590037). Killing circuits withover-long queues. (This behavior is controlled by MaxMemInQueues.)
Jan 11 18:58:55.000 [notice] Removed 299982672 bytes by killing 24949 circuits; 842791 circuits remain alive. Also killed 0 non-linked directory connections. Killed 0 edge connections
Jan 11 19:44:37.000 [notice] No circuits are opened. Relaxed timeout for circuit 809594 (a Measuring circuit timeout 3-hop circuit in state doing handshakes with channel state open) to 60000ms. However, it appears the circuit has timed out anyway. [3 similar message(s) suppressed in last 3000 seconds]
Jan 11 19:55:47.000 [notice] Heartbeat: Tor's uptime is 41 days 8:00 hours, with 752697 circuits open. I've sent 24792.07 GB and received 23055.17 GB. I've received 8142351 connections on IPv4 and 765946 on IPv6. I've made 2448992 connections with IPv4 and 412798 with IPv6.
Jan 11 19:55:47.000 [notice] While bootstrapping, fetched this many bytes: 420 (server descriptor upload); 616425 (consensus network-status fetch); 11896 (authority cert fetch); 6573653 (microdescriptor fetch)
Jan 11 19:55:47.000 [notice] While not bootstrapping, fetched this many bytes: 1173362587 (server descriptor fetch); 25800 (server descriptor upload); 57166841 (consensus network-status fetch); 1773 (authority cert fetch); 22640766 (microdescriptor fetch)
Jan 11 19:55:47.000 [notice] Circuit handshake stats since last time: 8/8 TAP, 6477916/7691976 NTor.
Jan 11 19:55:47.000 [notice] Since startup we initiated 0 and received 0 v1 connections; initiated 0 and received 0 v2 connections; initiated 0 and received 17329 v3 connections; initiated 1 and received 78 v4 connections; initiated 1090033 and received 6158419 v5 connections.
Jan 11 19:55:47.000 [notice] Heartbeat: DoS mitigation since startup: 1062 circuits killed with too many cells, 3739248 circuits rejected, 1240 marked addresses, 7 marked addresses for max queue, 633 same address concurrent connections rejected, 0 connections rejected, 17 single hop clients refused, 20365797 INTRODUCE2 rejected.
Jan 11 20:55:47.000 [notice] Heartbeat: Tor's uptime is 41 days 9:00 hours, with 65342 circuits open. I've sent 24839.98 GB and received 23063.91 GB. I've received 8159587 connections on IPv4 and 768038 on IPv6. I've made 2451655 connections with IPv4 and 413461 with IPv6.
...
Jan 11 20:55:47.000 [notice] Heartbeat: DoS mitigation since startup: 1062 circuits killed with too many cells, 3744298 circuits rejected, 1245 marked addresses, 7 marked addresses for max queue, 633 same address concurrent connections rejected, 75 connections rejected, 17 single hop clients refused, 20365797 INTRODUCE2 rejected.
...
Jan 11 21:55:47.000 [notice] Heartbeat: Tor's uptime is 41 days 10:00 hours, with 42592 circuits open. I've sent 24844.89 GB and received 23067.80 GB. I've received 8167429 connections on IPv4 and 768372 on IPv6. I've made 2454157 connections with IPv4 and 414023 with IPv6.
```https://gitlab.torproject.org/tpo/core/tor/-/issues/40908circuit_get_package_window(): Bug: Conflux has no circuit to send on2024-03-28T06:38:38Zdenkena-consultingtor@denkena-consulting.comcircuit_get_package_window(): Bug: Conflux has no circuit to send on### Summary
At random intervals, the appended output is visible in the log. For the second time now, the consensus has somehow decided to take every flag from the relay.
### Steps to reproduce:
1. Server running as usual.
2. Bug happens.
### What is the current bug behavior?
The bug appears in the logs, and at random intervals all of the relay's flags get dropped.
### What is the expected behavior?
No bug occurs; the relay's flags stay without issue.
### Environment
Tor version 0.4.8.10.
This build of Tor is covered by the GNU General Public License (https://www.gnu.org/licenses/gpl-3.0.en.html)
Tor is running on Linux with Libevent 2.1.12-stable, OpenSSL 3.0.12, Zlib 1.3, Liblzma 5.4.5, Libzstd 1.5.5 and Glibc 2.38 as libc.
Tor compiled with GCC version 13.2.1
- OS: Linux 6.1.66-gentoo-x86_64 with latest toolchain
- Installation method: compiled from gentoo tree
### Relevant logs and/or screenshots
```
Jan 19 11:51:07 [Tor] tor_bug_occurred_(): Bug: src/core/or/conflux.c:567: conflux_pick_first_leg: Non-fatal assertion !(smartlist_len(cfx->legs) <= 0) failed. (on Tor 0.4.8.10 )
Jan 19 11:51:07 [Tor] Bug: Tor 0.4.8.10: Non-fatal assertion !(smartlist_len(cfx->legs) <= 0) failed in conflux_pick_first_leg at src/core/or/conflux.c:567. Stack trace: (on Tor 0.4.8.10 )
Jan 19 11:51:07 [Tor] Bug: /usr/bin/tor(log_backtrace_impl+0x5b) [0x556d9b298ecb] (on Tor 0.4.8.10 )
Jan 19 11:51:07 [Tor] Bug: /usr/bin/tor(tor_bug_occurred_+0x16d) [0x556d9b2a3d3d] (on Tor 0.4.8.10 )
Jan 19 11:51:07 [Tor] Bug: /usr/bin/tor(conflux_decide_next_circ+0x40e) [0x556d9b33191e] (on Tor 0.4.8.10 )
Jan 19 11:51:07 [Tor] Bug: /usr/bin/tor(circuit_get_package_window+0x6d) [0x556d9b3375ed] (on Tor 0.4.8.10 )
Jan 19 11:51:07 [Tor] Bug: /usr/bin/tor(+0x8ffdc) [0x556d9b249fdc] (on Tor 0.4.8.10 )
Jan 19 11:51:07 [Tor] Bug: /usr/bin/tor(connection_edge_package_raw_inbuf+0xa1) [0x556d9b24c771] (on Tor 0.4.8.10 )
Jan 19 11:51:07 [Tor] Bug: /usr/bin/tor(connection_edge_process_inbuf+0x6e) [0x556d9b34de9e] (on Tor 0.4.8.10 )
Jan 19 11:51:07 [Tor] Bug: /usr/bin/tor(connection_handle_read+0x650) [0x556d9b346e70] (on Tor 0.4.8.10 )
Jan 19 11:51:07 [Tor] Bug: /usr/bin/tor(+0x6dcce) [0x556d9b227cce] (on Tor 0.4.8.10 )
Jan 19 11:51:07 [Tor] Bug: /usr/lib64/libevent-2.1.so.7(+0x21633) [0x7f4157ecc633] (on Tor 0.4.8.10 )
Jan 19 11:51:07 [Tor] Bug: /usr/lib64/libevent-2.1.so.7(event_base_loop+0x4ff) [0x7f4157ecd18f] (on Tor 0.4.8.10 )
Jan 19 11:51:07 [Tor] Bug: /usr/bin/tor(do_main_loop+0xd1) [0x556d9b228ee1] (on Tor 0.4.8.10 )
Jan 19 11:51:07 [Tor] Bug: /usr/bin/tor(tor_run_main+0x185) [0x556d9b2249d5] (on Tor 0.4.8.10 )
Jan 19 11:51:07 [Tor] Bug: /usr/bin/tor(tor_main+0x47) [0x556d9b221137] (on Tor 0.4.8.10 )
Jan 19 11:51:07 [Tor] Bug: /usr/bin/tor(main+0x1d) [0x556d9b220cdd] (on Tor 0.4.8.10 )
Jan 19 11:51:07 [Tor] Bug: /lib64/libc.so.6(+0x23a90) [0x7f41576eca90] (on Tor 0.4.8.10 )
Jan 19 11:51:07 [Tor] Bug: /lib64/libc.so.6(__libc_start_main+0x89) [0x7f41576ecb49] (on Tor 0.4.8.10 )
Jan 19 11:51:07 [Tor] Bug: /usr/bin/tor(_start+0x25) [0x556d9b220d35] (on Tor 0.4.8.10 )
Jan 19 11:51:07 [Tor] conflux_pick_first_leg(): Bug: Matching client sets: (on Tor 0.4.8.10 )
Jan 19 11:51:07 [Tor] conflux_log_set(): Bug: Conflux 9C0E3116B155F149: 0 linked, 0 launched. Delivered: 64; teardown: 0; Current: (nil), Previous: (nil) (on Tor 0.4.8.10 )
Jan 19 11:51:07 [Tor] conflux_pick_first_leg(): Bug: Matching server sets: (on Tor 0.4.8.10 )
Jan 19 11:51:07 [Tor] conflux_log_set(): Bug: Conflux 9C0E3116B155F149: 0 linked, 0 launched. Delivered: 64; teardown: 0; Current: (nil), Previous: (nil) (on Tor 0.4.8.10 )
Jan 19 11:51:07 [Tor] conflux_pick_first_leg(): Bug: End conflux set dump (on Tor 0.4.8.10 )
Jan 19 11:51:07 [Tor] circuit_get_package_window(): Bug: Conflux has no circuit to send on. Circuit 0x556dabdf8710 idx 2651 marked at line src/core/or/command.c:663 (on Tor 0.4.8.10 )
Jan 19 11:51:07 [Tor] tor_bug_occurred_(): Bug: src/core/or/conflux.c:567: conflux_pick_first_leg: Non-fatal assertion !(smartlist_len(cfx->legs) <= 0) failed. (on Tor 0.4.8.10 )
Jan 19 11:51:07 [Tor] Bug: Tor 0.4.8.10: Non-fatal assertion !(smartlist_len(cfx->legs) <= 0) failed in conflux_pick_first_leg at src/core/or/conflux.c:567. Stack trace: (on Tor 0.4.8.10 )
Jan 19 11:51:07 [Tor] Bug: /usr/bin/tor(log_backtrace_impl+0x5b) [0x556d9b298ecb] (on Tor 0.4.8.10 )
Jan 19 11:51:07 [Tor] Bug: /usr/bin/tor(tor_bug_occurred_+0x16d) [0x556d9b2a3d3d] (on Tor 0.4.8.10 )
Jan 19 11:51:07 [Tor] Bug: /usr/bin/tor(conflux_decide_next_circ+0x40e) [0x556d9b33191e] (on Tor 0.4.8.10 )
Jan 19 11:51:07 [Tor] Bug: /usr/bin/tor(circuit_get_package_window+0x6d) [0x556d9b3375ed] (on Tor 0.4.8.10 )
Jan 19 11:51:07 [Tor] Bug: /usr/bin/tor(+0x8ffdc) [0x556d9b249fdc] (on Tor 0.4.8.10 )
Jan 19 11:51:07 [Tor] Bug: /usr/bin/tor(connection_edge_package_raw_inbuf+0xa1) [0x556d9b24c771] (on Tor 0.4.8.10 )
Jan 19 11:51:07 [Tor] Bug: /usr/bin/tor(connection_edge_process_inbuf+0x6e) [0x556d9b34de9e] (on Tor 0.4.8.10 )
Jan 19 11:51:07 [Tor] Bug: /usr/bin/tor(connection_handle_read+0xaf2) [0x556d9b347312] (on Tor 0.4.8.10 )
Jan 19 11:51:07 [Tor] Bug: /usr/bin/tor(+0x6dcce) [0x556d9b227cce] (on Tor 0.4.8.10 )
Jan 19 11:51:07 [Tor] Bug: /usr/lib64/libevent-2.1.so.7(+0x21633) [0x7f4157ecc633] (on Tor 0.4.8.10 )
Jan 19 11:51:07 [Tor] Bug: /usr/lib64/libevent-2.1.so.7(event_base_loop+0x4ff) [0x7f4157ecd18f] (on Tor 0.4.8.10 )
Jan 19 11:51:07 [Tor] Bug: /usr/bin/tor(do_main_loop+0xd1) [0x556d9b228ee1] (on Tor 0.4.8.10 )
Jan 19 11:51:07 [Tor] Bug: /usr/bin/tor(tor_run_main+0x185) [0x556d9b2249d5] (on Tor 0.4.8.10 )
Jan 19 11:51:07 [Tor] Bug: /usr/bin/tor(tor_main+0x47) [0x556d9b221137] (on Tor 0.4.8.10 )
Jan 19 11:51:07 [Tor] Bug: /usr/bin/tor(main+0x1d) [0x556d9b220cdd] (on Tor 0.4.8.10 )
Jan 19 11:51:07 [Tor] Bug: /lib64/libc.so.6(+0x23a90) [0x7f41576eca90] (on Tor 0.4.8.10 )
Jan 19 11:51:07 [Tor] Bug: /lib64/libc.so.6(__libc_start_main+0x89) [0x7f41576ecb49] (on Tor 0.4.8.10 )
Jan 19 11:51:07 [Tor] Bug: /usr/bin/tor(_start+0x25) [0x556d9b220d35] (on Tor 0.4.8.10 )
Jan 19 11:51:07 [Tor] conflux_pick_first_leg(): Bug: Matching client sets: (on Tor 0.4.8.10 )
Jan 19 11:51:07 [Tor] conflux_log_set(): Bug: Conflux 9C0E3116B155F149: 0 linked, 0 launched. Delivered: 64; teardown: 0; Current: (nil), Previous: (nil) (on Tor 0.4.8.10 )
Jan 19 11:51:07 [Tor] conflux_pick_first_leg(): Bug: Matching server sets: (on Tor 0.4.8.10 )
Jan 19 11:51:07 [Tor] conflux_log_set(): Bug: Conflux 9C0E3116B155F149: 0 linked, 0 launched. Delivered: 64; teardown: 0; Current: (nil), Previous: (nil) (on Tor 0.4.8.10 )
Jan 19 11:51:07 [Tor] conflux_pick_first_leg(): Bug: End conflux set dump (on Tor 0.4.8.10 )
Jan 19 11:51:07 [Tor] circuit_get_package_window(): Bug: Conflux has no circuit to send on. Circuit 0x556dabdf8710 idx 2651 marked at line src/core/or/command.c:663 (on Tor 0.4.8.10 )
```https://gitlab.torproject.org/tpo/core/tor/-/issues/40907Starting tor with --DormantOnFirstStartup doesn't set GETINFO dormant to 12024-01-24T19:08:12ZCrazyChaozStarting tor with --DormantOnFirstStartup doesn't set GETINFO dormant to 1### Summary
Starting tor with --DormantOnFirstStartup 1 on a new DataDirectory doesn't set GETINFO dormant to 1
### Steps to reproduce:
1. start tor with ```tor --DormantOnFirstStartup 1 --DataDirectory some-empty-dir```
2. do a ```GETINFO dormant```
### What is the current bug behavior?
GETINFO dormant returns 0, as if it weren't in dormant mode
### What is the expected behavior?
GETINFO dormant returns 1, as it is in dormant mode
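The `GETINFO dormant` check in step 2 can be scripted against the control port; the reply parsing is simple. The helper below is an illustrative sketch (the sample reply string is made up, following the control-spec reply format):

```python
# Parse a Tor control-port GETINFO reply of the form:
#   250-dormant=1\r\n250 OK\r\n
# Keys/values are collected into a dict; the final "250 OK" line ends the reply.
def parse_getinfo_reply(reply: str) -> dict:
    info = {}
    for line in reply.splitlines():
        if line.startswith("250-") and "=" in line:
            key, _, value = line[4:].partition("=")
            info[key] = value
    return info

# The bug described above: a dormant tor still answers with dormant=0.
print(parse_getinfo_reply("250-dormant=1\r\n250 OK\r\n"))  # {'dormant': '1'}
```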
### Environment
- Which version of Tor are you using? Run `tor --version` to get the version if you are unsure.
- 0.4.6.10 and 0.4.5.10
- Which operating system are you using? For example: Debian GNU/Linux 10.1, Windows 10, Ubuntu Xenial, FreeBSD 12.2, etc.
- Pop!_OS 22.04 and Ubuntu 18.04.6
- Which installation method did you use? Distribution package (apt, pkg, homebrew), from source tarball, from Git, etc.
- apt and ?Tor: 0.4.9.x-freezehttps://gitlab.torproject.org/tpo/core/tor/-/issues/40905Bug in connection_connect while entering hibernation2024-01-24T19:15:09ZevequefouBug in connection_connect while entering hibernation### Summary
Relay indicated bug in logging
### Steps to reproduce:
1. Relay configured with AccountingMax on a daily basis
### What is the current bug behavior?
Tor appears to be attempting to replace a connection that was closed due to hibernation, which fails because the relay is hibernating. I would assume this is a race condition, as it does not occur consistently.
### What is the expected behavior?
Do not attempt to replace connections closed as part of a hibernation transition.
### Environment
```
Tor version 0.4.8.10.
This build of Tor is covered by the GNU General Public License (https://www.gnu.org/licenses/gpl-3.0.en.html)
Tor is running on Linux with Libevent 2.1.12-stable, OpenSSL 3.0.2, Zlib 1.2.11, Liblzma 5.2.5, Libzstd 1.4.8 and Glibc 2.35 as libc.
Tor compiled with GCC version 11.4.0
```
Installed via `apt` with Tor repo added.
### Relevant logs and/or screenshots
```
Dec 29 08:29:58.000 [warn] tor_bug_occurred_(): Bug: ../src/core/mainloop/connection.c:2204: connection_connect_sockadd\
r: This line should not have been reached. (Future instances of this warning will be silenced.) (on Tor 0.4.8.10 )
Dec 29 08:29:58.000 [warn] Bug: Tor 0.4.8.10: Line unexpectedly reached at connection_connect_sockaddr at ../src/core/m\
ainloop/connection.c:2204. Stack trace: (on Tor 0.4.8.10 )
Dec 29 08:29:58.000 [warn] Bug: /usr/bin/tor(log_backtrace_impl+0x5b) [0x556e5a25d37b] (on Tor 0.4.8.10 )
Dec 29 08:29:58.000 [warn] Bug: /usr/bin/tor(tor_bug_occurred_+0x18a) [0x556e5a27494a] (on Tor 0.4.8.10 )
Dec 29 08:29:58.000 [warn] Bug: /usr/bin/tor(+0x1b5784) [0x556e5a321784] (on Tor 0.4.8.10 )
Dec 29 08:29:58.000 [warn] Bug: /usr/bin/tor(connection_connect+0xdc) [0x556e5a321f6c] (on Tor 0.4.8.10 )
Dec 29 08:29:58.000 [warn] Bug: /usr/bin/tor(connection_or_connect+0x1f1) [0x556e5a33a741] (on Tor 0.4.8.10 )
Dec 29 08:29:58.000 [warn] Bug: /usr/bin/tor(channel_tls_connect+0xaf) [0x556e5a2d4d3f] (on Tor 0.4.8.10 )
Dec 29 08:29:58.000 [warn] Bug: /usr/bin/tor(channel_connect_for_circuit+0x3f) [0x556e5a2de9ff] (on Tor 0.4.8.10 )
Dec 29 08:29:58.000 [warn] Bug: /usr/bin/tor(circuit_handle_first_hop+0x2d9) [0x556e5a2e11b9] (on Tor 0.4.8.10 )
Dec 29 08:29:58.000 [warn] Bug: /usr/bin/tor(circuit_establish_circuit_conflux+0xa0) [0x556e5a2e1550] (on Tor 0.4.8\
.10 )
Dec 29 08:29:58.000 [warn] Bug: /usr/bin/tor(conflux_launch_leg+0x140) [0x556e5a316780] (on Tor 0.4.8.10 )
Dec 29 08:29:58.000 [warn] Bug: /usr/bin/tor(+0x1aae57) [0x556e5a316e57] (on Tor 0.4.8.10 )
Dec 29 08:29:58.000 [warn] Bug: /usr/bin/tor(circuit_mark_for_close_+0x183) [0x556e5a2ea003] (on Tor 0.4.8.10 )
Dec 29 08:29:58.000 [warn] Bug: /usr/bin/tor(circuit_unlink_all_from_channel+0x122) [0x556e5a2ec6d2] (on Tor 0.4.8.\
10 )
Dec 29 08:29:58.000 [warn] Bug: /usr/bin/tor(channel_closed+0x44) [0x556e5a2cd234] (on Tor 0.4.8.10 )
Dec 29 08:29:58.000 [warn] Bug: /usr/bin/tor(connection_or_about_to_close+0x30) [0x556e5a3353a0] (on Tor 0.4.8.10 )
Dec 29 08:29:58.000 [warn] Bug: /usr/bin/tor(+0x707ee) [0x556e5a1dc7ee] (on Tor 0.4.8.10 )
Dec 29 08:29:58.000 [warn] Bug: /usr/bin/tor(+0x70b98) [0x556e5a1dcb98] (on Tor 0.4.8.10 )
Dec 29 08:29:58.000 [warn] Bug: /usr/bin/tor(+0x71eac) [0x556e5a1ddeac] (on Tor 0.4.8.10 )
Dec 29 08:29:58.000 [warn] Bug: /lib/x86_64-linux-gnu/libevent-2.1.so.7(+0x1fe68) [0x7f5d8dae0e68] (on Tor 0.4.8.10\
)
Dec 29 08:29:58.000 [warn] Bug: /lib/x86_64-linux-gnu/libevent-2.1.so.7(event_base_loop+0x577) [0x7f5d8dae28a7] (on\
Tor 0.4.8.10 )
Dec 29 08:29:58.000 [warn] Bug: /usr/bin/tor(do_main_loop+0x127) [0x556e5a1e07c7] (on Tor 0.4.8.10 )
Dec 29 08:29:58.000 [warn] Bug: /usr/bin/tor(tor_run_main+0x215) [0x556e5a1e4805] (on Tor 0.4.8.10 )
Dec 29 08:29:58.000 [warn] Bug: /usr/bin/tor(tor_main+0x4d) [0x556e5a1e4c6d] (on Tor 0.4.8.10 )
Dec 29 08:29:58.000 [warn] Bug: /usr/bin/tor(main+0x1d) [0x556e5a1d6dcd] (on Tor 0.4.8.10 )
Dec 29 08:29:58.000 [warn] Bug: /lib/x86_64-linux-gnu/libc.so.6(+0x29d90) [0x7f5d8d1edd90] (on Tor 0.4.8.10 )
Dec 29 08:29:58.000 [warn] Bug: /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x80) [0x7f5d8d1ede40] (on Tor 0.\
4.8.10 )
Dec 29 08:29:58.000 [warn] Bug: /usr/bin/tor(_start+0x25) [0x556e5a1d6e25] (on Tor 0.4.8.10 )
```
This error has not recurred since I first encountered it, and in the time it took to get a GitLab account the logs have cycled, so I only have the above snippet that I copied out at the time.Tor: 0.4.9.x-freezehttps://gitlab.torproject.org/tpo/core/tor/-/issues/40904--disable-module-dirauth has no effect2024-01-24T19:18:02ZJulien Voisin--disable-module-dirauth has no effectI've [been trying](https://gitlab.alpinelinux.org/alpine/aports/-/merge_requests/58264) to disable dirauth support in Alpine's tor package, but it seems that `--disable-module-dirauth` isn't working, as the size of the binary [didn't change](https://gitlab.alpinelinux.org/alpine/aports/-/merge_requests/58264#note_367134) when the flag was added.
Am I doing something wrong? Note that the tests still mention dirauth-related things, so maybe asking the build system to build and run the testsuite makes it ignore `--disable-module-dirauth`?https://gitlab.torproject.org/tpo/core/tor/-/issues/40903Build fails for `ios` & `ios-simulator` on `arm64` with Apple LLVM >> Undefin...2023-12-26T11:37:24Z05nelsonmBuild fails for `ios` & `ios-simulator` on `arm64` with Apple LLVM >> Undefined symbols `___clear_cache`Related: https://gitlab.torproject.org/tpo/core/tor/-/issues/40800
<details>
<summary>make error output</summary>
```
Undefined symbols for architecture arm64:
"___clear_cache", referenced from:
_hashx_compile_a64 in libtor-testing.a(libhashx_a-compiler_a64.o)
ld: symbol(s) not found for architecture arm64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
make[1]: *** [Makefile:11733: src/test/test-slow] Error 1
make[1]: *** Waiting for unfinished jobs....
Undefined symbols for architecture arm64:
"___clear_cache", referenced from:
_hashx_compile_a64 in libtor-testing.a(libhashx_a-compiler_a64.o)
ld: symbol(s) not found for architecture arm64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
Undefined symbols for architecture arm64:
"___clear_cache", referenced from:
make[1]: *** [Makefile:11087: src/test/fuzz/fuzz-address] Error 1
_hashx_compile_a64 in libtor-testing.a(libhashx_a-compiler_a64.o)
Undefined symbols for architecture arm64:
Undefined symbols for architecture arm64:
"___clear_cache", referenced from:
"___clear_cache", referenced from:
_hashx_compile_a64 in libtor-testing.a(libhashx_a-compiler_a64.o)
_hashx_compile_a64 in libtor-testing.a(libhashx_a-compiler_a64.o)
ld: symbol(s) not found for architecture arm64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
ld: symbol(s) not found for architecture arm64
make[1]: *** [Makefile:11107: src/test/fuzz/fuzz-consensus] Error 1
ld: symbol(s) not found for architecture arm64
clangclang: : error: error: linker command failed with exit code 1 (use -v to see invocation)linker command failed with exit code 1 (use -v to see invocation)
make[1]: *** [Makefile:11097: src/test/fuzz/fuzz-addressPTR] Error 1
make[1]: *** [Makefile:11117: src/test/fuzz/fuzz-descriptor] Error 1
make: *** [Makefile:7647: all] Error 2
```
</details>
The following patch results in a successful build for `ios` and `ios-simulator` arm64 using Apple LLVM (clang 17.0.6):
[0001-Remove-clear_cache.patch](/uploads/5bc8896136569e8a7a30ee93cfd3e423/0001-Remove-clear_cache.patch)
The patch was only meant to see whether it worked for those two targets, and it does.
If you have a Linux/macOS `x86_64` machine and `docker`, you can play with the following:
```
git clone https://github.com/05nelsonm/kmp-tor-resource.git
cd kmp-tor-resource
git checkout issue/8-ios-aarch64-test-patch
./external/task.sh build:ios-simulator:aarch64
./external/task.sh build:ios:aarch64
```
```
Using `iPhoneOS17.0.sdk` and `iPhoneSimulator17.0.sdk` (darwin `23`)https://gitlab.torproject.org/tpo/core/tor/-/issues/40901Document for the Relay Operator community how to debug relays that are slower...2023-12-19T07:53:56ZAlexander Færøyahf@torproject.orgDocument for the Relay Operator community how to debug relays that are slower than what the operator expectsThis idea originates from a conversation between @beth, @gk, and me on #tor-dev today.
We often release new features of C Tor to the relay operators that cause discussions/conversations around whether Tor has gotten faster/slower/uses (more|less) memory/crashes (more|less) often/etc. Many of these items are hard to answer with a definitive "yes, the cause of this is X", and it's very time-consuming for the Network Team to debug each item individually with the operator.
It would be very useful to have a document in place that informs relay operators about the different situations that may impact performance and shows how they can gather performance measurements to compare against, to see whether our performance has truly regressed. This can also be used to push MetricsPort to more operators.
We can expand upon the document over time as we discover new ways to do this analysis and/or from feedback from the relay operator community.
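Once MetricsPort is enabled, its output is plain Prometheus text, so a comparison baseline can be scripted. A minimal, illustrative sketch of pulling one value out of a scrape (the metric name and sample text below are made up for illustration, not taken from a real relay):

```python
# MetricsPort serves Prometheus text format: "# HELP"/"# TYPE" comments,
# then "metric_name{labels} value" or "metric_name value" lines.
def read_counter(scrape: str, name: str):
    """Return the first value for a metric name, or None if absent."""
    for line in scrape.splitlines():
        if line.startswith("#"):          # skip HELP/TYPE comment lines
            continue
        parts = line.split()
        if len(parts) == 2 and parts[0].split("{")[0] == name:
            return float(parts[1])
    return None

# Hypothetical scrape snippet, just to exercise the parser.
scrape = """# TYPE tor_relay_circuits_total gauge
tor_relay_circuits_total 65342
"""
print(read_counter(scrape, "tor_relay_circuits_total"))  # 65342.0
```

Storing such values over time gives the before/after numbers an operator can compare when a new release lands.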
This is related to:
- https://lists.torproject.org/pipermail/tor-relays/2023-December/021409.html
- https://lists.torproject.org/pipermail/tor-relays/2023-December/021407.html
This may be relevant to Arti Relay too.
CC @mikeperry for awareness.https://gitlab.torproject.org/tpo/core/tor/-/issues/40896Reject relays running Tor 0.4.7.x2023-12-10T10:39:22ZnonameformeeReject relays running Tor 0.4.7.xSimilar to previous tickets (e.g. #40480, #40559, #40664 and #40760) tor should reject EOL versions of tor at the directory authority level.
According to my understanding of https://gitlab.torproject.org/tpo/core/team/-/wikis/NetworkTeam/SupportPolicy#types-of-releases (as read 2023-12-03T18:00 UTC), the tor version `0.4.7.x` shall be considered `End of Life (EOL)` three months after the version `0.4.8.x` was released or when the Tor Browser upgraded to this new stable release.
- Tor version 0.4.8 was made stable on August 16th, 2023 according to https://gitlab.torproject.org/tpo/core/team/-/wikis/NetworkTeam/CoreTorReleases, which is now more than three months ago.
- The stable Tor Browser upgraded to use Tor version 0.4.8 back in October, according to https://blog.torproject.org/new-release-tor-browser-130/.
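The EOL window described in the two bullets above can be sanity-checked with simple date arithmetic (the "three months" window is approximated here as 92 days; the helper is just an illustration):

```python
from datetime import date, timedelta

# 0.4.8.x became stable on 2023-08-16; per the support policy quoted above,
# 0.4.7.x is EOL three months later (approximated as 92 days here).
stable_0_4_8 = date(2023, 8, 16)
eol_0_4_7 = stable_0_4_8 + timedelta(days=92)

print(eol_0_4_7)                      # 2023-11-16
print(date(2023, 12, 3) > eol_0_4_7)  # True: the ticket was filed after EOL
```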
Therefore it is now time to start work to deprecate the EOL tor version 0.4.7.x (and update the wiki page for CoreTorReleases).https://gitlab.torproject.org/tpo/core/tor/-/issues/40895proxy_prepare_for_restart: Assertion mp->conf_state == PT_PROTO_COMPLETED failed2024-02-01T16:08:54Zdzwdzproxy_prepare_for_restart: Assertion mp->conf_state == PT_PROTO_COMPLETED failedWhen trying to debug an unrelated issue, I found this failed assertion in the logs from when I ran [my webtunnel bridge](https://metrics.torproject.org/rs.html#details/C7EC873896FA23974762C32A18FB45F5520A4F70) for the first time:
```
Nov 18 00:46:19 freeman Tor[1957]: We compiled with OpenSSL 300000b0: OpenSSL 3.0.11 19 Sep 2023 and we are running with OpenSSL 300000b0: 3.0.11. These two versions should be binary compatible.
Nov 18 00:46:19 freeman Tor[1957]: Tor 0.4.8.9 running on Linux with Libevent 2.1.12-stable, OpenSSL 3.0.11, Zlib 1.2.13, Liblzma 5.4.1, Libzstd 1.5.4 and Glibc 2.36 as libc.
[...]
Nov 18 15:54:01 freeman Tor[1957]: Received reload signal (hup). Reloading config and resetting internal state.
Nov 18 15:54:01 freeman Tor[1957]: Read configuration file "/usr/share/tor/tor-service-defaults-torrc".
Nov 18 15:54:01 freeman Tor[1957]: Read configuration file "/etc/tor/torrc".
[...]
Nov 18 15:54:01 freeman Tor[1957]: You are running a new relay. Thanks for helping the Tor network! If you wish to know what will happen in the upcoming weeks regarding its usage, have a look at https://blog.torproject.org/lifecycle-of-a-new-relay
Nov 18 15:54:01 freeman Tor[1957]: It looks like I need to generate and sign a new medium-term signing key, because I don't have one. To do that, I need to load (or create) the permanent master identity key. If the master identity key was not moved or encrypted with a passphrase, this will be done automatically and no further action is required. Otherwise, provide the necessary data using 'tor --keygen' to do it manually.
Nov 18 15:54:01 freeman Tor[1957]: Your Tor server's identity key fingerprint is 'GordonFreeman -snip-'
Nov 18 15:54:01 freeman Tor[1957]: Your Tor bridge's hashed identity key fingerprint is 'GordonFreeman C7EC873896FA23974762C32A18FB45F5520A4F70'
Nov 18 15:54:01 freeman Tor[1957]: Your Tor server's identity key ed25519 fingerprint is 'GordonFreeman -snip-'
Nov 18 15:54:01 freeman Tor[1957]: You can check the status of your bridge relay at https://bridges.torproject.org/status?id=C7EC873896FA23974762C32A18FB45F5520A4F70
Nov 18 15:54:01 freeman Tor[1957]: Configured hibernation. This interval begins at 2023-11-18 00:00:00 and ends at 2023-11-19 00:00:00. We have no prior estimate for bandwidth, so we will start out awake and hibernate when we exhaust our quota.
Nov 18 15:54:01 freeman Tor[1957]: Configured to measure statistics. Look for the *-stats files that will first be written to the data directory in 24 hours from now.
Nov 18 15:54:01 freeman Tor[1957]: Not advertising Directory Service support (Reason: AccountingMax enabled)
Nov 18 15:54:01 freeman Tor[1957]: Now checking whether IPv4 ORPort -snip- is reachable... (this may take up to 20 minutes -- look for log messages indicating success)
Nov 18 15:54:01 freeman Tor[1957]: Managed proxy "/usr/local/bin/webtunnel" process terminated with status code 256
Nov 18 15:54:01 freeman Tor[1957]: tor_assertion_failed_(): Bug: ../src/feature/client/transports.c:519: proxy_prepare_for_restart: Assertion mp->conf_state == PT_PROTO_COMPLETED failed; aborting. (on Tor 0.4.8.9 )
Nov 18 15:54:01 freeman Tor[1957]: Bug: Tor 0.4.8.9: Assertion mp->conf_state == PT_PROTO_COMPLETED failed in proxy_prepare_for_restart at ../src/feature/client/transports.c:519: . Stack trace: (on Tor 0.4.8.9 )
Nov 18 15:54:01 freeman Tor[1957]: Bug: /usr/bin/tor(log_backtrace_impl+0x57) [0x555f172f9ea7] (on Tor 0.4.8.9 )
Nov 18 15:54:01 freeman Tor[1957]: Bug: /usr/bin/tor(tor_assertion_failed_+0x147) [0x555f17304c57] (on Tor 0.4.8.9 )
Nov 18 15:54:01 freeman Tor[1957]: Bug: /usr/bin/tor(+0xc7d92) [0x555f172d6d92] (on Tor 0.4.8.9 )
Nov 18 15:54:01 freeman Tor[1957]: Bug: /usr/bin/tor(+0xc9977) [0x555f172d8977] (on Tor 0.4.8.9 )
Nov 18 15:54:01 freeman Tor[1957]: Bug: /usr/bin/tor(process_notify_event_exit+0x49) [0x555f1730f319] (on Tor 0.4.8.9 )
Nov 18 15:54:01 freeman Tor[1957]: Bug: /usr/bin/tor(notify_pending_waitpid_callbacks+0xf7) [0x555f17310cb7] (on Tor 0.4.8.9 )
Nov 18 15:54:01 freeman Tor[1957]: Bug: /lib/x86_64-linux-gnu/libevent-2.1.so.7(+0x21402) [0x7f92579a6402] (on Tor 0.4.8.9 )
Nov 18 15:54:01 freeman Tor[1957]: Bug: /lib/x86_64-linux-gnu/libevent-2.1.so.7(event_base_loop+0x49f) [0x7f92579a6c1f] (on Tor 0.4.8.9 )
Nov 18 15:54:01 freeman Tor[1957]: Bug: /usr/bin/tor(do_main_loop+0xf1) [0x555f17280671] (on Tor 0.4.8.9 )
Nov 18 15:54:01 freeman Tor[1957]: Bug: /usr/bin/tor(tor_run_main+0x1e5) [0x555f1727bfa5] (on Tor 0.4.8.9 )
Nov 18 15:54:01 freeman Tor[1957]: Bug: /usr/bin/tor(tor_main+0x59) [0x555f17278329] (on Tor 0.4.8.9 )
Nov 18 15:54:01 freeman Tor[1957]: Bug: /usr/bin/tor(main+0x19) [0x555f17277ee9] (on Tor 0.4.8.9 )
Nov 18 15:54:01 freeman Tor[1957]: Bug: /lib/x86_64-linux-gnu/libc.so.6(+0x271ca) [0x7f92570ae1ca] (on Tor 0.4.8.9 )
Nov 18 15:54:01 freeman Tor[1957]: Bug: /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x85) [0x7f92570ae285] (on Tor 0.4.8.9 )
Nov 18 15:54:01 freeman Tor[1957]: Bug: /usr/bin/tor(_start+0x21) [0x555f17277f31] (on Tor 0.4.8.9 )
```
I don't quite remember what I was doing at that time. IIRC webtunnel failed to launch because I misconfigured AppArmor?
Potentially related to https://gitlab.torproject.org/tpo/core/tor/-/issues/31091 or https://gitlab.torproject.org/tpo/core/tor/-/issues/32032?Tor: 0.4.9.x-freezeAlexander Færøyahf@torproject.orgAlexander Færøyahf@torproject.orghttps://gitlab.torproject.org/tpo/core/tor/-/issues/40893New conflux links every 30 sec when unused connections2024-01-25T21:08:13ZcypherpunksNew conflux links every 30 sec when unused connectionsAbout 30(?) minutes after going unused, when tor drops the last connection, it creates 6 new Conflux_linked connections that drop after 30 seconds, then replaces them with a new set of 6 Conflux_linked connections for another 30 seconds, and continues this loop forever until normal tor use is resumed.
latest commit tested: cec6f9919d3128646d85c75d08338bea4b31bffa
linux 6.4
This behavior has existed for at least a couple of months, from before the Tor Browser adopted the 0.4.8 series.Tor: 0.4.8.x-post-stableMike PerryMike Perryhttps://gitlab.torproject.org/tpo/core/tor/-/issues/40892Tor 0.4.8.9 broken in combination with vanguards2024-01-25T17:51:24ZadrelanosTor 0.4.8.9 broken in combination with vanguards### Summary
Downloads are interrupted after a few seconds.
This bug was introduced between Tor version `0.4.7.16-1` (from the Debian `bookworm` security repository) and Tor version `0.4.8.9-1~d12.bookworm+1` (from `deb.torproject.org`); I am certain I have pinpointed it to this range.
The issue is only reproducible if `vanguards` is installed.
The older Tor version from Debian `bookworm` security repository version `0.4.7.16-1` does not have this issue.
### Steps to reproduce:
1. Use a Debian `bookworm`.
2. Enable `deb.torproject.org`
3. `sudo apt update`
4. `sudo apt install --no-install-recommends vanguards tor`
5. Edit `/etc/tor/vanguards.conf` and change `control_socket =` to `control_socket = /run/tor/control` ([related ticket](https://github.com/mikeperry-tor/vanguards/issues/47))
6. `sudo systemctl enable vanguards` ([potential Debian bug not being enabled by default](https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=948975))
7. `sudo systemctl restart tor@default`
8. `sudo systemctl restart vanguards`
9. (In App Qube)
10. `torsocks curl --fail --output /tmp/test.tar.xz https://dist.torproject.org/torbrowser/13.0.5/tor-browser-linux-x86_64-13.0.5.tar.xz`
### What is the current bug behavior?
The connection drops after the download has been running for a short while:
`torsocks curl --fail --output /tmp/test.tar.xz https://dist.torproject.org/torbrowser/13.0.5/tor-browser-linux-x86_64-13.0.5.tar.xz`
```
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
3 107M 3 3624k 0 0 24100 0 1:17:51 0:02:34 1:15:17 29815
curl: (18) transfer closed with 108874640 bytes remaining to read
zsh: exit 18 torsocks curl --fail --output /tmp/test.tar.xz
```
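As a cross-check, the numbers in the curl error above are internally consistent: the bytes received plus the bytes reported remaining add up to the advertised ~107M total.

```python
# Sanity-check of the curl(18) numbers from the transcript above:
# 3624 KiB were received before the transfer closed with 108874640 bytes
# remaining; together they should match curl's "107M" total column.
received = 3624 * 1024          # curl's "Received" column, in bytes
remaining = 108_874_640         # "transfer closed with ... bytes remaining"
total = received + remaining

print(total)                    # 112585616 bytes
print(round(total / 2**20, 1))  # 107.4 (MiB), matching curl's "107M"
```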
### What is the expected behavior?
No connection drops.
### Environment
* Qubes R4.2
* Debian based App Qube
* Tor version `0.4.8.9-1~d12.bookworm+1`
* `deb.torproject.org` `bookworm` repository
* vanguards version `0.3.1-2.3` from `packages.debian.org`
Also reproducible in:
* Debian `bookworm` in KVM
* Debian `bookworm` in a Qubes PVH VM
* Qubes-Whonix 17 (Debian `bookworm` based) PVH VM
* Non-Qubes-Whonix (Whonix for VirtualBox)
I haven't been able to reproduce this yet:
* on a real (non-Qubes) Debian `bookworm`
* Debian `bookworm` in a Qubes HVM VM
So it seems that only certain types of VMs (KVM, Qubes PVH, VirtualBox) are affected. Therefore you might conclude this issue isn't caused by any software developed by The Tor Project, and you might be right about that. However, do you have any insights into what code changes might have triggered this issue?
Update: A user in the forums reported having reproduced this on hardware (outside of any VMs) too.
Also reported against Qubes:
[Tor 0.4.8.9 broken in combination with vanguards in Qubes Debian templates](https://github.com/QubesOS/qubes-issues/issues/8726)
### Additional information
`sudo systemctl stop vanguards && sudo systemctl restart tor@default` fixes this issue. This shows that the issue only happens when Tor is combined with vanguards.Mike PerryMike Perryhttps://gitlab.torproject.org/tpo/core/tor/-/issues/40889If ClientRejectInternalAddresses 1, do not try to fulfill requests to connect...2024-01-24T19:28:31ZpseudonymisaTorIf ClientRejectInternalAddresses 1, do not try to fulfill requests to connect to an RFC6146 addressTo fix problem #40857, consider not fulfilling requests to connect to an RFC6146 address, as the exit is expected to deny it anyway by exit policy, at least as long as `ClientRejectInternalAddresses` is enabled (by default).
If a Tor client ever wants to experiment with connecting to NAT64 addresses, it can still try to do so in a `TestingTorNetwork`, which would disable this client-side blocking enforcement.
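For context, RFC 6146 (stateful NAT64) translation commonly uses the RFC 6052 well-known prefix `64:ff9b::/96`. A client-side membership test could look like the sketch below; this is an illustration, not the implementation proposed in this ticket (and it only covers the well-known prefix, not network-specific NAT64 prefixes):

```python
import ipaddress

# The well-known NAT64 translation prefix from RFC 6052, as used by RFC 6146.
NAT64_WKP = ipaddress.ip_network("64:ff9b::/96")

def is_nat64_wkp(addr: str) -> bool:
    """True if addr is an IPv6 address inside the well-known NAT64 prefix."""
    try:
        ip = ipaddress.ip_address(addr)
    except ValueError:
        return False
    return ip.version == 6 and ip in NAT64_WKP

print(is_nat64_wkp("64:ff9b::192.0.2.1"))  # True: embedded IPv4 in the WKP
print(is_nat64_wkp("2001:db8::1"))         # False: ordinary IPv6 address
```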