Tor issues (https://gitlab.torproject.org/tpo/core/tor/-/issues)

#40661: Do phase 3 of proposal 289 (Georg Koppen, updated 2024-01-30)
https://gitlab.torproject.org/tpo/core/tor/-/issues/40661

In [prop#289](https://gitlab.torproject.org/tpo/core/torspec/-/blob/main/proposals/289-authenticated-sendmes.txt) we included a deployment timeline. We are currently at Phase 2, and Phase 3 contains the following:
```
This phase will effectively exit() all tor not supporting
"FlowCtrl=1". The earliest date we can do that is when all versions
not supporting v1 are EOL.
According to our release schedule[4], this can happen when our latest
LTS (0.3.5) goes EOL that is on Feb 1st, 2022.
```
We are past Feb 1st, 2022, and 0.3.5.x has been gone for good for a while. Let's get to Phase 3 for the next major release at least, then.

Milestone: Tor: 0.4.8.x-freeze

#40895: proxy_prepare_for_restart: Assertion mp->conf_state == PT_PROTO_COMPLETED failed (dzwdz, updated 2024-02-01)
https://gitlab.torproject.org/tpo/core/tor/-/issues/40895

When trying to debug an unrelated issue, I found this failed assertion in the logs from when I ran [my webtunnel bridge](https://metrics.torproject.org/rs.html#details/C7EC873896FA23974762C32A18FB45F5520A4F70) for the first time:
```
Nov 18 00:46:19 freeman Tor[1957]: We compiled with OpenSSL 300000b0: OpenSSL 3.0.11 19 Sep 2023 and we are running with OpenSSL 300000b0: 3.0.11. These two versions should be binary compatible.
Nov 18 00:46:19 freeman Tor[1957]: Tor 0.4.8.9 running on Linux with Libevent 2.1.12-stable, OpenSSL 3.0.11, Zlib 1.2.13, Liblzma 5.4.1, Libzstd 1.5.4 and Glibc 2.36 as libc.
[...]
Nov 18 15:54:01 freeman Tor[1957]: Received reload signal (hup). Reloading config and resetting internal state.
Nov 18 15:54:01 freeman Tor[1957]: Read configuration file "/usr/share/tor/tor-service-defaults-torrc".
Nov 18 15:54:01 freeman Tor[1957]: Read configuration file "/etc/tor/torrc".
[...]
Nov 18 15:54:01 freeman Tor[1957]: You are running a new relay. Thanks for helping the Tor network! If you wish to know what will happen in the upcoming weeks regarding its usage, have a look at https://blog.torproject.org/lifecycle-of-a-new-relay
Nov 18 15:54:01 freeman Tor[1957]: It looks like I need to generate and sign a new medium-term signing key, because I don't have one. To do that, I need to load (or create) the permanent master identity key. If the master identity key was not moved or encrypted with a passphrase, this will be done automatically and no further action is required. Otherwise, provide the necessary data using 'tor --keygen' to do it manually.
Nov 18 15:54:01 freeman Tor[1957]: Your Tor server's identity key fingerprint is 'GordonFreeman -snip-'
Nov 18 15:54:01 freeman Tor[1957]: Your Tor bridge's hashed identity key fingerprint is 'GordonFreeman C7EC873896FA23974762C32A18FB45F5520A4F70'
Nov 18 15:54:01 freeman Tor[1957]: Your Tor server's identity key ed25519 fingerprint is 'GordonFreeman -snip-'
Nov 18 15:54:01 freeman Tor[1957]: You can check the status of your bridge relay at https://bridges.torproject.org/status?id=C7EC873896FA23974762C32A18FB45F5520A4F70
Nov 18 15:54:01 freeman Tor[1957]: Configured hibernation. This interval begins at 2023-11-18 00:00:00 and ends at 2023-11-19 00:00:00. We have no prior estimate for bandwidth, so we will start out awake and hibernate when we exhaust our quota.
Nov 18 15:54:01 freeman Tor[1957]: Configured to measure statistics. Look for the *-stats files that will first be written to the data directory in 24 hours from now.
Nov 18 15:54:01 freeman Tor[1957]: Not advertising Directory Service support (Reason: AccountingMax enabled)
Nov 18 15:54:01 freeman Tor[1957]: Now checking whether IPv4 ORPort -snip- is reachable... (this may take up to 20 minutes -- look for log messages indicating success)
Nov 18 15:54:01 freeman Tor[1957]: Managed proxy "/usr/local/bin/webtunnel" process terminated with status code 256
Nov 18 15:54:01 freeman Tor[1957]: tor_assertion_failed_(): Bug: ../src/feature/client/transports.c:519: proxy_prepare_for_restart: Assertion mp->conf_state == PT_PROTO_COMPLETED failed; aborting. (on Tor 0.4.8.9 )
Nov 18 15:54:01 freeman Tor[1957]: Bug: Tor 0.4.8.9: Assertion mp->conf_state == PT_PROTO_COMPLETED failed in proxy_prepare_for_restart at ../src/feature/client/transports.c:519: . Stack trace: (on Tor 0.4.8.9 )
Nov 18 15:54:01 freeman Tor[1957]: Bug: /usr/bin/tor(log_backtrace_impl+0x57) [0x555f172f9ea7] (on Tor 0.4.8.9 )
Nov 18 15:54:01 freeman Tor[1957]: Bug: /usr/bin/tor(tor_assertion_failed_+0x147) [0x555f17304c57] (on Tor 0.4.8.9 )
Nov 18 15:54:01 freeman Tor[1957]: Bug: /usr/bin/tor(+0xc7d92) [0x555f172d6d92] (on Tor 0.4.8.9 )
Nov 18 15:54:01 freeman Tor[1957]: Bug: /usr/bin/tor(+0xc9977) [0x555f172d8977] (on Tor 0.4.8.9 )
Nov 18 15:54:01 freeman Tor[1957]: Bug: /usr/bin/tor(process_notify_event_exit+0x49) [0x555f1730f319] (on Tor 0.4.8.9 )
Nov 18 15:54:01 freeman Tor[1957]: Bug: /usr/bin/tor(notify_pending_waitpid_callbacks+0xf7) [0x555f17310cb7] (on Tor 0.4.8.9 )
Nov 18 15:54:01 freeman Tor[1957]: Bug: /lib/x86_64-linux-gnu/libevent-2.1.so.7(+0x21402) [0x7f92579a6402] (on Tor 0.4.8.9 )
Nov 18 15:54:01 freeman Tor[1957]: Bug: /lib/x86_64-linux-gnu/libevent-2.1.so.7(event_base_loop+0x49f) [0x7f92579a6c1f] (on Tor 0.4.8.9 )
Nov 18 15:54:01 freeman Tor[1957]: Bug: /usr/bin/tor(do_main_loop+0xf1) [0x555f17280671] (on Tor 0.4.8.9 )
Nov 18 15:54:01 freeman Tor[1957]: Bug: /usr/bin/tor(tor_run_main+0x1e5) [0x555f1727bfa5] (on Tor 0.4.8.9 )
Nov 18 15:54:01 freeman Tor[1957]: Bug: /usr/bin/tor(tor_main+0x59) [0x555f17278329] (on Tor 0.4.8.9 )
Nov 18 15:54:01 freeman Tor[1957]: Bug: /usr/bin/tor(main+0x19) [0x555f17277ee9] (on Tor 0.4.8.9 )
Nov 18 15:54:01 freeman Tor[1957]: Bug: /lib/x86_64-linux-gnu/libc.so.6(+0x271ca) [0x7f92570ae1ca] (on Tor 0.4.8.9 )
Nov 18 15:54:01 freeman Tor[1957]: Bug: /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x85) [0x7f92570ae285] (on Tor 0.4.8.9 )
Nov 18 15:54:01 freeman Tor[1957]: Bug: /usr/bin/tor(_start+0x21) [0x555f17277f31] (on Tor 0.4.8.9 )
```
I don't quite remember what I was doing at that time. IIRC webtunnel failed to launch because I misconfigured AppArmor?
Potentially related to https://gitlab.torproject.org/tpo/core/tor/-/issues/31091 or https://gitlab.torproject.org/tpo/core/tor/-/issues/32032?

Milestone: Tor: 0.4.9.x-freeze. Assignee: Alexander Færøy <ahf@torproject.org>

#40892: Tor 0.4.8.9 broken in combination with vanguards (adrelanos, updated 2024-01-25)
https://gitlab.torproject.org/tpo/core/tor/-/issues/40892

### Summary
Downloads are interrupted after a few seconds.
This bug was introduced between Tor version `0.4.7.16-1` (from the Debian `bookworm` security repository) and Tor version `0.4.8.9-1~d12.bookworm+1` (from `deb.torproject.org`). I am certain I have pinpointed it to this version change.
The issue is only reproducible if `vanguards` is installed.
The older Tor version `0.4.7.16-1` from the Debian `bookworm` security repository does not have this issue.
### Steps to reproduce:
1. Use a Debian `bookworm`.
2. Enable `deb.torproject.org`
3. `sudo apt update`
4. `sudo apt install --no-install-recommends vanguards tor`
5. Edit `/etc/tor/vanguards.conf` and change `control_socket =` to `control_socket = /run/tor/control` ([related ticket](https://github.com/mikeperry-tor/vanguards/issues/47))
6. `sudo systemctl enable vanguards` ([potential Debian bug not being enabled by default](https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=948975))
7. `sudo systemctl restart tor@default`
8. `sudo systemctl restart vanguards`
9. In an App Qube, run:
10. `torsocks curl --fail --output /tmp/test.tar.xz https://dist.torproject.org/torbrowser/13.0.5/tor-browser-linux-x86_64-13.0.5.tar.xz`
### What is the current bug behavior?
The connection drops after the download has run for a bit:

`torsocks curl --fail --output /tmp/test.tar.xz https://dist.torproject.org/torbrowser/13.0.5/tor-browser-linux-x86_64-13.0.5.tar.xz`
```
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
3 107M 3 3624k 0 0 24100 0 1:17:51 0:02:34 1:15:17 29815
curl: (18) transfer closed with 108874640 bytes remaining to read
zsh: exit 18 torsocks curl --fail --output /tmp/test.tar.xz
```
### What is the expected behavior?
No connection drops.
### Environment
* Qubes R4.2
* Debian based App Qube
* Tor version `0.4.8.9-1~d12.bookworm+1`
* `deb.torproject.org` `bookworm` repository
* vanguards version `0.3.1-2.3` from `packages.debian.org`
Also reproducible in:
* Debian `bookworm` in KVM
* Debian `bookworm` in a Qubes PVH VM
* Qubes-Whonix 17 (Debian `bookworm` based) PVH VM
* Non-Qubes-Whonix (Whonix for VirtualBox)
I wasn't able to reproduce this yet:
* on a real (non-Qubes) Debian `bookworm`
* Debian `bookworm` in a Qubes HVM VM
So it seems that only certain types of VMs (KVM, Qubes PVH, VirtualBox) are affected. Therefore you might conclude that this issue isn't caused by any software developed by The Tor Project, and you might be right about that. However, do you have any insights into what code changes might have triggered this issue?
Update: A user in the forums reported having reproduced this on hardware (outside of any VMs) too.
Also reported against Qubes:
[Tor 0.4.8.9 broken in combination with vanguards in Qubes Debian templates](https://github.com/QubesOS/qubes-issues/issues/8726)
### Additional information
`sudo systemctl stop vanguards && sudo systemctl restart tor@default` fixes this issue. This shows that the issue only happens when Tor is combined with vanguards.

Assignee: Mike Perry

#40872: Add conflux nonce to control port circuit events (Mike Perry, updated 2023-11-09)
https://gitlab.torproject.org/tpo/core/tor/-/issues/40872

We should add the conflux nonce to control port events and maybe alter CIRC_MINOR so that the linking stages are visible.
Adding a keyword field for the nonce won't break stem; adding new stages to CIRC_MINOR might :/

Assignee: Mike Perry

#40823: Ifdef out/remove conflux validation checks (Mike Perry, updated 2023-10-19)
https://gitlab.torproject.org/tpo/core/tor/-/issues/40823

We should place the conflux leg validation checks behind either debug or fragile hardening ifdefs, when we get to 0.4.8-stable.

Milestone: Tor: 0.4.8.x-post-stable. Assignee: Mike Perry

#40788: Tor 0.4.7.13 died: Caught signal 11 (toralf, updated 2023-06-05)
https://gitlab.torproject.org/tpo/core/tor/-/issues/40788

A bug happened at a newly deployed Tor bridge (Debian 11, Hetzner VPS arm64 with obfs4):
```
May 06 17:19:53.000 [notice] Performing bandwidth self-test...done.
May 06 17:34:28.000 [notice] Received reload signal (hup). Reloading config and resetting internal state.
May 06 17:34:28.000 [notice] Read configuration file "/usr/share/tor/tor-service-defaults-torrc".
May 06 17:34:28.000 [notice] Read configuration file "/etc/tor/torrc".
May 06 17:34:28.000 [notice] Tor 0.4.7.13 opening log file.
============================================================ T= 1683394481
Tor 0.4.7.13 died: Caught signal 11
/usr/bin/tor(+0xec4e4)[0xaaaab60ac4e4]
linux-vdso.so.1(__kernel_rt_sigreturn+0x0)[0xffffa48b67d0]
/lib/aarch64-linux-gnu/libc.so.6(+0x752ec)[0xffffa40eb2ec]
/lib/aarch64-linux-gnu/libc.so.6(+0x77b60)[0xffffa40edb60]
/lib/aarch64-linux-gnu/libc.so.6(__libc_malloc+0x198)[0xffffa40eebf8]
/lib/aarch64-linux-gnu/libc.so.6(+0x6bf00)[0xffffa40e1f00]
/lib/aarch64-linux-gnu/libc.so.6(__vasprintf_chk+0x34)[0xffffa4157544]
/usr/bin/tor(tor_vasprintf+0x58)[0xaaaab60c73f8]
/usr/bin/tor(smartlist_add_asprintf+0x94)[0xaaaab60989c4]
/usr/bin/tor(entry_guards_update_state+0x10c)[0xaaaab619dc1c]
/usr/bin/tor(+0xbdd04)[0xaaaab607dd04]
/usr/bin/tor(+0x676d8)[0xaaaab60276d8]
/usr/bin/tor(+0x857cc)[0xaaaab60457cc]
/lib/aarch64-linux-gnu/libevent-2.1.so.7(+0x23600)[0xffffa476c600]
/lib/aarch64-linux-gnu/libevent-2.1.so.7(event_base_loop+0x50c)[0xffffa476cf84]
/usr/bin/tor(do_main_loop+0xec)[0xaaaab602b7b0]
/usr/bin/tor(tor_run_main+0x1c0)[0xaaaab6026d94]
/usr/bin/tor(tor_main+0x54)[0xaaaab6023114]
/usr/bin/tor(main+0x20)[0xaaaab6022c00]
/lib/aarch64-linux-gnu/libc.so.6(__libc_start_main+0xe8)[0xffffa4096e18]
/usr/bin/tor(+0x62c88)[0xaaaab6022c88]
May 06 17:34:41.000 [notice] Tor 0.4.7.13 opening log file.
```
Before this happened, Tor + obfs4 were installed, but the wrong systemd service was restarted ("tor@default.service" instead of the correct one, "tor"):
```
May 06 17:19:52.000 [warn] Server managed proxy encountered a method error. (obfs4 listen tcp 0.0.0.0:443: bind: permission denied)
May 06 17:19:52.000 [warn] Managed proxy '/usr/bin/obfs4proxy' was spawned successfully, but it didn't launch any pluggable transport listeners!
May 06 17:19:52.000 [warn] Pluggable Transport process terminated with status code 65280
```

Milestone: Tor: 0.4.8.x-freeze

#40787: Onion service Proof of Work effort limit on client side should not be hard-coded (Micah Elizabeth Scott, updated 2023-10-16)
https://gitlab.torproject.org/tpo/core/tor/-/issues/40787

### Summary
Right now there's a hardcoded definition in hs_client.c:
`#define CLIENT_MAX_POW_EFFORT 10000`
I put this in after some limited experimentation on the machines I have handy, intending it to be a temporary stand-in for something that takes a couple minutes.
### Steps to reproduce:
On an especially slow client, this hardcoded limit would mean that the hs_client could choose to spend an unacceptably long time on just the PoW solution portion of an onion service introduction.
On an especially fast client with a service under heavy attack, this hardcoded limit may be insufficient.
### What is the current bug behavior?
Effort values over 10000 are truncated to 10000 with a log notice. This value can only be changed by recompiling tor.
### What is the expected behavior?
One thought was we should have a benchmark at some point, but I'd argue there's no good time to do a benchmark. Doing it at startup is wasteful since most users won't need it, and doing it on first use will add significant latency to even a low-effort connection.
I propose a limited initial max effort (perhaps 1000 or so) along with an independent mechanism for keeping a running benchmark of the equix_solve() function. This can be used to calculate an approximate effort limit given a desired time limit.
That time limit can be more legitimately hardcoded, since it's related to other hardcoded timers and not to the client's CPU performance.
Relatedly we should have a mechanism for cancelling queued PoW solves. If we had a robust cancellation mechanism, is it possible we want to remove the client max effort entirely? This would simply trust the server's suggested effort, and if that is in fact just an arbitrary large number the worst outcome is that we waste CPU until the request is cancelled.
I think this would be ideal, avoiding any kind of preset upper limit.

#40784: Metricsport metrics for conflux (Mike Perry, updated 2023-10-19)
https://gitlab.torproject.org/tpo/core/tor/-/issues/40784

We should add metricsport metrics for conflux, to help with monitoring things like switch frequency, reason for switching (cwnd, alg, orconn block), leg failure rates, set failure rates, leg reattachment, etc.
I think this is likely to be a combination of @dgoulet and me doing this. I can do the ones for the algs, and he can do the ones for the pool. However, I am assigning it to myself for now so that I don't lose it.
We don't need it by the first alpha, but should definitely have it by the stable.

Milestone: Tor: 0.4.8.x-freeze. Assignee: Mike Perry

#40636: Reports that relays are not obeying DoSConnectionMaxConcurrentCount (Roger Dingledine, updated 2023-06-15)
https://gitlab.torproject.org/tpo/core/tor/-/issues/40636

toralf and steinex and others on IRC report that their Tor relays currently have many dozens of IP addresses with more than 50 concurrent connections open to the relay.
This is surprising, since we have set
```
DoSConnectionEnabled=1 DoSConnectionMaxConcurrentCount=50
```
in the consensus.
Maybe there is a bug?

#40622: Connection DDoS defenses never applied to DirPort so dir auths still impacted (Roger Dingledine, updated 2023-09-11)
https://gitlab.torproject.org/tpo/core/tor/-/issues/40622

Back in the connection DDoS problems a year or two back, I hacked up a defense which counted total connections from a given IP address to either my ORPort or my DirPort, and refused future connections for a while if the connection attempt count trips some token-bucket-style threshold.
That threshold has tripped a lot lately and I have been saving myself a lot of connections on moria1 (along with the accompanying bandwidth, memory bloat, upstream problems with connection tables, etc). Here is the latest stat line after 43 days:
```
Jun 06 19:29:31.165 [notice] DoS mitigation since startup: 0 circuits killed with too many cells. 0 circuits rejected, 0 marked addresses. 749704 connections closed because concurrent, 28222170 connections closed because total. 0 single hop clients refused. 0 INTRODUCE2 rejected.
```
(Yes, I have been turning away 7 to 8 connections per second, continuously, averaged over the last 43 days. Some addresses are super loud and have no notion of back-off.)
Alas, when I looked earlier, the patch that went into actual Tor only considered ORPort connections, and so it doesn't help me because a lot of my overload is on the DirPort.
Double-alas, my patch and the patch that went in are full of conflicts.
So: I have been meaning to reverse engineer the patch that went in to 0.4.6.1-alpha (#40253), and my patch (in my moria1-0460 branch), and figure out what is actually missing, and make a new patch that applies cleanly to a more recent Tor.
That step doesn't fit in my q1-q2 task set though, so I have been meaning to get to it in q3-q4.
But the network is now all using consensus method 32 (for the MiddleOnly flag), so I need to do something sooner.

Assignee: Roger Dingledine

#40558: Tor hanging early in bootstrapping, not leaving main loop (grote, updated 2024-03-05)
https://gitlab.torproject.org/tpo/core/tor/-/issues/40558

### Summary
Sometimes, Tor (on Android) hangs in bootstrapping and does not continue even if waiting for a long time. Easier to reproduce by cleaning all filesystem state. No more log output, not even `DEBUG` debug events get logged over the control connection.
### Steps to reproduce:
1. Download or compile Onionshare for Android: https://github.com/onionshare/onionshare-android
2. Clear all app data
3. Share a file
4. Hope for progress bar and logcat output to hang
5. If not, go back to 2.
### What is the current bug behavior?
Tor doesn't leave the main loop, doesn't bootstrap, and just hangs.
### What is the expected behavior?
Tor bootstraps.
### Environment
- Which version of Tor are you using? 0.4.6.8
- Which operating system are you using? Android via tor-android library from GuardianProject
- Which installation method did you use? Built into OnionShare app
### Relevant logs and/or screenshots
```
2022-02-01 17:32:41.955 D: DEBUG: conn_read_callback: socket -1 wants to read.
2022-02-01 17:32:41.955 D: DEBUG: connection_dir_process_inbuf: Got data, not eof. Leaving on inbuf.
2022-02-01 17:32:41.955 D: DEBUG: sendme_connection_edge_consider_sending: Outbuf 0, queuing stream SENDME.
2022-02-01 17:32:41.956 D: DEBUG: relay_send_command_from_edge_: delivering 5 cell forward.
2022-02-01 17:32:41.956 D: DEBUG: relay_encrypt_cell_outbound: encrypting a layer of the relay cell.
2022-02-01 17:32:41.956 D: DEBUG: conn_read_callback: socket 75 wants to read.
2022-02-01 17:32:41.956 D: DEBUG: connection_buf_read_from_socket: 75: starting, inbuf_datalen 258 (0 pending in tls object). at_most 2072.
2022-02-01 17:32:41.956 D: DEBUG: tor_tls_read: read returned r=-1, err=-2
2022-02-01 17:32:41.956 D: DEBUG: connection_buf_read_from_socket: After TLS read of 770: 3100 read, 0 written
2022-02-01 17:32:41.956 D: DEBUG: connection_or_process_cells_from_inbuf: 75: starting, inbuf_datalen 1028 (0 pending in tls object).
2022-02-01 17:32:41.956 D: DEBUG: channel_process_cell: Processing incoming cell_t 0x7101bb2658 for channel 0xb40000725ab36a00 (global ID 5)
2022-02-01 17:32:41.956 D: DEBUG: circuit_get_by_circid_channel_impl: circuit_get_by_circid_channel_impl() returning circuit 0xb40000725ab377e0 for circ_id 2917690044, channel ID 5 (0xb40000725ab36a00)
2022-02-01 17:32:41.956 D: DEBUG: relay_lookup_conn: found conn for stream 173.
2022-02-01 17:32:41.956 D: DEBUG: circuit_receive_relay_cell: Sending to origin.
2022-02-01 17:32:41.956 D: DEBUG: connection_edge_process_relay_cell: Now seen 1845 relay cells here (command 2, stream 173).
2022-02-01 17:32:41.956 D: DEBUG: sendme_circuit_data_received: Circuit deliver_window now 965.
2022-02-01 17:32:41.956 D: DEBUG: connection_or_process_cells_from_inbuf: 75: starting, inbuf_datalen 514 (0 pending in tls object).
2022-02-01 17:32:41.956 D: DEBUG: channel_process_cell: Processing incoming cell_t 0x7101bb2658 for channel 0xb40000725ab36a00 (global ID 5)
2022-02-01 17:32:41.957 D: DEBUG: circuit_get_by_circid_channel_impl: circuit_get_by_circid_channel_impl() returning circuit 0xb40000725ab377e0 for circ_id 2917690044, channel ID 5 (0xb40000725ab36a00)
2022-02-01 17:32:41.957 D: DEBUG: relay_lookup_conn: found conn for stream 173.
2022-02-01 17:32:41.957 D: DEBUG: circuit_receive_relay_cell: Sending to origin.
2022-02-01 17:32:41.957 D: DEBUG: connection_edge_process_relay_cell: Now seen 1846 relay cells here (command 2, stream 173).
2022-02-01 17:32:41.957 D: DEBUG: sendme_circuit_data_received: Circuit deliver_window now 964.
2022-02-01 17:32:41.957 D: DEBUG: connection_or_process_cells_from_inbuf: 75: starting, inbuf_datalen 0 (0 pending in tls object).
2022-02-01 17:32:41.957 D: DEBUG: conn_read_callback: socket -1 wants to read.
2022-02-01 17:32:41.957 D: DEBUG: connection_dir_client_reached_eof: Downloaded 171820 bytes on connection of purpose microdescriptor fetch; bootstrap 50%
2022-02-01 17:32:41.957 D: DEBUG: fetch_from_buf_http: headerlen 194, bodylen 171626.
2022-02-01 17:32:41.957 D: DEBUG: Received response on Directory connection (client reading) with 144.202.49.171:9001: 200 "OK" (purpose: 19, response size: 171820, compression: 4)
2022-02-01 17:32:41.957 D: INFO: make_guard_confirmed: Marking dongusvondong ($E9A90B9FF0574410695BA5F5C8354368221A7114) as a confirmed guard (index 0)
2022-02-01 17:32:41.957 D: INFO: entry_guards_update_primary: Primary entry guards have changed. New primary guard list is:
2022-02-01 17:32:41.957 D: INFO: entry_guards_update_primary: 1/3: dongusvondong ($E9A90B9FF0574410695BA5F5C8354368221A7114) (confirmed)
2022-02-01 17:32:41.957 D: INFO: entry_guards_update_primary: 2/3: digitalResistanceRu ($841DE1E065BCA8C1EBDC4A7140646F47A6257BD8)
2022-02-01 17:32:41.958 D: INFO: entry_guards_update_primary: 3/3: WonderWoman42 ($6D8DA81B9572023472C47B16E30D6A882B896B4E)
2022-02-01 17:32:41.958 D: INFO: entry_guards_note_guard_success: Recorded success for primary confirmed guard dongusvondong ($E9A90B9FF0574410695BA5F5C8354368221A7114)
2022-02-01 17:32:41.958 D: DEBUG: connection_dir_client_reached_eof: Time on received directory is within tolerance; we are -3 seconds skewed. (That's okay.)
2022-02-01 17:32:41.958 D: DEBUG: HTTP body from Directory connection (client reading) with 144.202.49.171:9001 was labeled as Zstandard compressed, and it seems to be Zstandard compressed.
2022-02-01 17:32:41.958 D: INFO: handle_response_fetch_microdesc: Received answer to microdescriptor request (status 200, body size 670229) from server 144.202.49.171:9001
2022-02-01 17:32:41.958 D: DEBUG: count_usable_descriptors: 6845 usable, 500 present (microdescs).
2022-02-01 17:32:41.958 D: DEBUG: compute_frac_paths_available: mid: 0 present, 0 usable
2022-02-01 17:32:41.958 D: DEBUG: compute_frac_paths_available: guard: 3913 possible
2022-02-01 17:32:41.958 D: DEBUG: count_usable_descriptors: 1389 usable, 96 present (microdesc exit policies and flags).
2022-02-01 17:32:41.958 D: DEBUG: compute_frac_paths_available: exits: 96 present, 1389 usable
2022-02-01 17:32:41.958 D: DEBUG: compute_weighted_bandwidths: Generated weighted bandwidths for rule weight as guard based on weights Wg=0.600700 Wm=0.600700 We=0.000000 Wd=0.000000 with total bw 33039303135.900002
2022-02-01 17:32:41.958 D: DEBUG: compute_weighted_bandwidths: Generated weighted bandwidths for rule weight as middle node based on weights Wg=0.399300 Wm=1.000000 We=0.000000 Wd=0.000000 with total bw 33038899864.099998
2022-02-01 17:32:41.959 D: DEBUG: compute_weighted_bandwidths: Generated weighted bandwidths for rule weight as exit based on weights Wg=1.000000 Wm=1.000000 We=1.000000 Wd=1.000000 with total bw 27523872000.000000
2022-02-01 17:32:41.959 D: DEBUG: compute_frac_paths_available: f_guard: 0.08, f_mid: 0.09, f_exit: 0.07
2022-02-01 17:32:41.959 D: DEBUG: update_guard_selection_choice: Staying with guard context "default" (no change)
2022-02-01 17:32:41.959 D: INFO: sampled_guards_update_from_consensus: Updating sampled guard status based on received consensus.
2022-02-01 17:32:41.959 D: DEBUG: sampled_guards_update_consensus_presence: Sampled guard digitalResistanceRu ($841DE1E065BCA8C1EBDC4A7140646F47A6257BD8) is still listed.
2022-02-01 17:32:41.959 D: DEBUG: sampled_guards_update_consensus_presence: Sampled guard WonderWoman42 ($6D8DA81B9572023472C47B16E30D6A882B896B4E) is still listed.
2022-02-01 17:32:41.959 D: DEBUG: sampled_guards_update_consensus_presence: Sampled guard dongusvondong ($E9A90B9FF0574410695BA5F5C8354368221A7114) is still listed.
2022-02-01 17:32:41.959 D: DEBUG: sampled_guards_update_consensus_presence: Sampled guard lotsaSquirrels ($CFF88D79C485653DBECCC0EA1474E975EDAB5D45) is still listed.
2022-02-01 17:32:41.959 D: DEBUG: sampled_guards_update_consensus_presence: Sampled guard Init6TorRelay ($A22B1C2EF2255987F8AB8AA0B1A8E23F5023EEB8) is still listed.
2022-02-01 17:32:41.959 D: DEBUG: sampled_guards_update_consensus_presence: Sampled guard Fission11 ($929BB84A68198CE35E2F2828812840AF5C2CBC4A) is still listed.
2022-02-01 17:32:41.959 D: DEBUG: sampled_guards_update_consensus_presence: Sampled guard reallyniceplace ($049852A10805835E9AB2C551D6C438BE6DE8B24C) is still listed.
2022-02-01 17:32:41.959 D: DEBUG: sampled_guards_update_consensus_presence: Sampled guard dr167 ($9ED8E45D2C666F019C17F23FF567E9EB1E7DADB8) is still listed.
2022-02-01 17:32:41.959 D: DEBUG: sampled_guards_update_consensus_presence: Sampled guard Unnamed ($A56A0120A21557832000D5C68FCD7DE08E82A3B0) is still listed.
2022-02-01 17:32:41.959 D: DEBUG: sampled_guards_update_consensus_presence: Sampled guard torrx ($772D509772071D3082BAAF1C781C8F917013F3CB) is still listed.
2022-02-01 17:32:41.960 D: DEBUG: sampled_guards_update_consensus_presence: Sampled guard Piratenpartei08 ($E8965A79FB2F335194141E8968755524840C44B6) is still listed.
2022-02-01 17:32:41.960 D: DEBUG: sampled_guards_update_consensus_presence: Sampled guard zwiebeltoralf ($63BF46A63F9C21FD315CD061B3EAA3EB05283A0A) is still listed.
2022-02-01 17:32:41.960 D: DEBUG: sampled_guards_update_consensus_presence: Sampled guard MIGHTYWANG ($9B2BC7EFD661072AFADC533BE8DCF1C19D8C2DCC) is still listed.
2022-02-01 17:32:41.960 D: DEBUG: sampled_guards_update_consensus_presence: Sampled guard falared ($7EB91FDB68EAD3A13808D9356DA82605773215F5) is still listed.
2022-02-01 17:32:41.960 D: DEBUG: sampled_guards_update_consensus_presence: Sampled guard Chenjesu ($5DB9AE27A44EB7B476CC04A66C67A71C97A001E6) is still listed.
2022-02-01 17:32:41.960 D: DEBUG: sampled_guards_update_consensus_presence: Sampled guard Route66 ($0CE3B73DDFF92FF6653A9C7F9003A32BB6C2364D) is still listed.
2022-02-01 17:32:41.960 D: DEBUG: sampled_guards_update_consensus_presence: Sampled guard cosimaniehaus ($24F97F98C45E4754655BE66799049763DAEE99CE) is still listed.
2022-02-01 17:32:41.960 D: DEBUG: sampled_guards_update_consensus_presence: Sampled guard inconnu3 ($3ECAF1EAA2EE2C6A3DD191C820788C93386C24F9) is still listed.
2022-02-01 17:32:41.960 D: DEBUG: sampled_guards_update_consensus_presence: Sampled guard tirz ($124948FDBA0B63F23A2FB7DFC7DF62ED499DE546) is still listed.
2022-02-01 17:32:41.960 D: DEBUG: sampled_guards_update_consensus_presence: Sampled guard bonjour1 ($D80EA21626BFAE8044E4037FE765252E157E3586) is still listed.
2022-02-01 17:32:41.960 D: DEBUG: entry_guard_set_filtered_flags: Updated sampled guard digitalResistanceRu ($841DE1E065BCA8C1EBDC4A7140646F47A6257BD8): filtered=1; reachable_filtered=1.
2022-02-01 17:32:41.961 D: DEBUG: entry_guard_set_filtered_flags: Updated sampled guard WonderWoman42 ($6D8DA81B9572023472C47B16E30D6A882B896B4E): filtered=1; reachable_filtered=1.
2022-02-01 17:32:41.961 D: DEBUG: entry_guard_set_filtered_flags: Updated sampled guard dongusvondong ($E9A90B9FF0574410695BA5F5C8354368221A7114): filtered=1; reachable_filtered=1.
2022-02-01 17:32:41.961 D: DEBUG: entry_guard_set_filtered_flags: Updated sampled guard lotsaSquirrels ($CFF88D79C485653DBECCC0EA1474E975EDAB5D45): filtered=1; reachable_filtered=1.
2022-02-01 17:32:41.961 D: DEBUG: entry_guard_set_filtered_flags: Updated sampled guard Init6TorRelay ($A22B1C2EF2255987F8AB8AA0B1A8E23F5023EEB8): filtered=1; reachable_filtered=1.
2022-02-01 17:32:41.961 D: DEBUG: entry_guard_set_filtered_flags: Updated sampled guard Fission11 ($929BB84A68198CE35E2F2828812840AF5C2CBC4A): filtered=1; reachable_filtered=1.
2022-02-01 17:32:41.961 D: DEBUG: entry_guard_set_filtered_flags: Updated sampled guard reallyniceplace ($049852A10805835E9AB2C551D6C438BE6DE8B24C): filtered=1; reachable_filtered=1.
2022-02-01 17:32:41.961 D: DEBUG: entry_guard_set_filtered_flags: Updated sampled guard dr167 ($9ED8E45D2C666F019C17F23FF567E9EB1E7DADB8): filtered=1; reachable_filtered=1.
2022-02-01 17:32:41.961 D: DEBUG: entry_guard_set_filtered_flags: Updated sampled guard Unnamed ($A56A0120A21557832000D5C68FCD7DE08E82A3B0): filtered=1; reachable_filtered=1.
2022-02-01 17:32:41.961 D: DEBUG: entry_guard_set_filtered_flags: Updated sampled guard torrx ($772D509772071D3082BAAF1C781C8F917013F3CB): filtered=1; reachable_filtered=1.
2022-02-01 17:32:41.961 D: DEBUG: entry_guard_set_filtered_flags: Updated sampled guard Piratenpartei08 ($E8965A79FB2F335194141E8968755524840C44B6): filtered=1; reachable_filtered=1.
2022-02-01 17:32:41.961 D: DEBUG: entry_guard_set_filtered_flags: Updated sampled guard zwiebeltoralf ($63BF46A63F9C21FD315CD061B3EAA3EB05283A0A): filtered=1; reachable_filtered=1.
2022-02-01 17:32:41.961 D: DEBUG: entry_guard_set_filtered_flags: Updated sampled guard MIGHTYWANG ($9B2BC7EFD661072AFADC533BE8DCF1C19D8C2DCC): filtered=1; reachable_filtered=1.
2022-02-01 17:32:41.961 D: DEBUG: entry_guard_set_filtered_flags: Updated sampled guard falared ($7EB91FDB68EAD3A13808D9356DA82605773215F5): filtered=1; reachable_filtered=1.
2022-02-01 17:32:41.961 D: DEBUG: entry_guard_set_filtered_flags: Updated sampled guard Chenjesu ($5DB9AE27A44EB7B476CC04A66C67A71C97A001E6): filtered=1; reachable_filtered=1.
2022-02-01 17:32:41.962 D: DEBUG: entry_guard_set_filtered_flags: Updated sampled guard Route66 ($0CE3B73DDFF92FF6653A9C7F9003A32BB6C2364D): filtered=1; reachable_filtered=1.
2022-02-01 17:32:41.962 D: DEBUG: entry_guard_set_filtered_flags: Updated sampled guard cosimaniehaus ($24F97F98C45E4754655BE66799049763DAEE99CE): filtered=1; reachable_filtered=1.
2022-02-01 17:32:41.962 D: DEBUG: entry_guard_set_filtered_flags: Updated sampled guard inconnu3 ($3ECAF1EAA2EE2C6A3DD191C820788C93386C24F9): filtered=1; reachable_filtered=1.
2022-02-01 17:32:41.962 D: DEBUG: entry_guard_set_filtered_flags: Updated sampled guard tirz ($124948FDBA0B63F23A2FB7DFC7DF62ED499DE546): filtered=1; reachable_filtered=1.
2022-02-01 17:32:41.962 D: DEBUG: entry_guard_set_filtered_flags: Updated sampled guard bonjour1 ($D80EA21626BFAE8044E4037FE765252E157E3586): filtered=1; reachable_filtered=1.
2022-02-01 17:32:41.962 D: INFO: entry_guards_update_primary: Primary entry guards have changed. New primary guard list is:
2022-02-01 17:32:41.962 D: INFO: entry_guards_update_primary: 1/3: dongusvondong ($E9A90B9FF0574410695BA5F5C8354368221A7114) (confirmed)
2022-02-01 17:32:41.962 D: INFO: entry_guards_update_primary: 2/3: digitalResistanceRu ($841DE1E065BCA8C1EBDC4A7140646F47A6257BD8)
2022-02-01 17:32:41.962 D: INFO: entry_guards_update_primary: 3/3: WonderWoman42 ($6D8DA81B9572023472C47B16E30D6A882B896B4E)
2022-02-01 17:32:41.962 D: DEBUG: count_usable_descriptors: 6845 usable, 500 present (microdescs).
2022-02-01 17:32:41.962 D: DEBUG: compute_frac_paths_available: mid: 0 present, 0 usable
2022-02-01 17:32:41.962 D: DEBUG: compute_frac_paths_available: guard: 3913 possible
2022-02-01 17:32:41.962 D: DEBUG: count_usable_descriptors: 1389 usable, 96 present (microdesc exit policies and flags).
2022-02-01 17:32:41.963 D: DEBUG: compute_frac_paths_available: exits: 96 present, 1389 usable
2022-02-01 17:32:41.963 D: DEBUG: compute_weighted_bandwidths: Generated weighted bandwidths for rule weight as guard based on weights Wg=0.600700 Wm=0.600700 We=0.000000 Wd=0.000000 with total bw 33039303135.900002
2022-02-01 17:32:41.963 D: DEBUG: compute_weighted_bandwidths: Generated weighted bandwidths for rule weight as middle node based on weights Wg=0.399300 Wm=1.000000 We=0.000000 Wd=0.000000 with total bw 33038899864.099998
2022-02-01 17:32:41.963 D: DEBUG: compute_weighted_bandwidths: Generated weighted bandwidths for rule weight as exit based on weights Wg=1.000000 Wm=1.000000 We=1.000000 Wd=1.000000 with total bw 27523872000.000000
2022-02-01 17:32:41.963 D: DEBUG: compute_frac_paths_available: f_guard: 0.08, f_mid: 0.09, f_exit: 0.07
2022-02-01 17:32:41.963 D: INFO: I learned some more directory information, but not enough to build a circuit: We need more microdescriptors: we have 500/6845, and can only build 0% of likely paths. (We have 7% of guards bw, 8% of midpoint bw, and 7% of exit bw = 0% of path bw.)
2022-02-01 17:32:41.963 D: DEBUG: conn_close_if_marked: Cleaning up connection (fd -1).
2022-02-01 17:32:41.963 D: DEBUG: connection_remove: removing socket -1 (type Directory), n_conns now 37
2022-02-01 17:32:41.963 D: INFO: connection_free_minimal: Freeing linked Directory connection [client finished] with 0 bytes on inbuf, 0 on outbuf.
2022-02-01 17:32:41.963 D: DEBUG: conn_read_callback: socket -1 wants to read.
2022-02-01 17:32:41.963 D: DEBUG: connection_dir_process_inbuf: Got data, not eof. Leaving on inbuf.
2022-02-01 17:32:41.963 D: DEBUG: scheduler_evt_callback: Scheduler event callback called
2022-02-01 17:32:41.963 D: DEBUG: update_socket_info: chan=5 updated socket info, limit: 34752, cwnd: 0, unacked: 0, notsent: 0, mss: 0
2022-02-01 17:32:41.963 D: DEBUG: kist_scheduler_run: Running the scheduler. 1 channels pending
2022-02-01 17:32:41.963 D: DEBUG: outbuf_table_add: scheduler init outbuf info for chan=5
2022-02-01 17:32:41.964 D: DEBUG: channel_write_packed_cell: Writing 0xb40000725ab49b40 to channel 0xb40000725ab36a00 with global ID 5
2022-02-01 17:32:41.964 D: DEBUG: update_socket_written: chan=5 wrote 543 bytes, old was 0
2022-02-01 17:32:41.964 D: DEBUG: scheduler_set_channel_state: chan 5 changed from scheduler state PENDING to PENDING
2022-02-01 17:32:41.964 D: DEBUG: channel_write_packed_cell: Writing 0xb40000725ab45390 to channel 0xb40000725ab36a00 with global ID 5
2022-02-01 17:32:41.964 D: DEBUG: update_socket_written: chan=5 wrote 543 bytes, old was 543
2022-02-01 17:32:41.964 D: DEBUG: scheduler_set_channel_state: chan 5 changed from scheduler state PENDING to PENDING
2022-02-01 17:32:41.964 D: DEBUG: channel_write_packed_cell: Writing 0xb40000725ab46d00 to channel 0xb40000725ab36a00 with global ID 5
2022-02-01 17:32:41.964 D: DEBUG: channel_flush_from_first_active_circuit: Made a circuit inactive.
2022-02-01 17:32:41.964 D: DEBUG: update_socket_written: chan=5 wrote 543 bytes, old was 1086
2022-02-01 17:32:41.964 D: DEBUG: scheduler_set_channel_state: chan 5 changed from scheduler state PENDING to WAITING_FOR_CELLS
2022-02-01 17:32:41.964 D: DEBUG: channel_write_to_kernel: Writing 1542 bytes to kernel for chan 5
2022-02-01 17:32:41.964 D: DEBUG: flush_chunk_tls: flushed 1542 bytes, 0 remain.
2022-02-01 17:32:41.964 D: DEBUG: connection_handle_write_impl: After TLS write of 1542: 0 read, 1571 written
2022-02-01 17:32:41.964 D: DEBUG: free_outbuf_info_by_ent: Freeing outbuf table entry from chan=5
2022-02-01 17:32:41.965 D: DEBUG: kist_scheduler_run: len pending=0, len to_readd=-1
2022-02-01 17:32:41.965 D: DEBUG: conn_read_callback: socket 75 wants to read.
```
```
2022-02-01 17:32:42.257 D: DEBUG: connection_or_process_cells_from_inbuf: 75: starting, inbuf_datalen 2442 (0 pending in tls object).
2022-02-01 17:32:42.257 D: DEBUG: channel_process_cell: Processing incoming cell_t 0x7101bb2658 for channel 0xb40000725ab36a00 (global ID 5)
2022-02-01 17:32:42.257 D: DEBUG: circuit_get_by_circid_channel_impl: circuit_get_by_circid_channel_impl() returning circuit 0xb40000725ab377e0 for circ_id 2917690044, channel ID 5 (0xb40000725ab36a00)
2022-02-01 17:32:42.257 D: DEBUG: relay_lookup_conn: found conn for stream 171.
2022-02-01 17:32:42.257 D: DEBUG: circuit_receive_relay_cell: Sending to origin.
2022-02-01 17:32:42.257 D: DEBUG: connection_edge_process_relay_cell: Now seen 2110 relay cells here (command 2, stream 171).
2022-02-01 17:32:42.257 D: DEBUG: sendme_circuit_data_received: Circuit deliver_window now 900.
2022-02-01 17:32:42.258 D: DEBUG: sendme_circuit_consider_sending: Queuing circuit sendme.
2022-02-01 17:32:42.258 D: DEBUG: send_circuit_level_sendme: Emitting SENDME version 1 cell.
2022-02-01 17:32:42.258 D: DEBUG: relay_send_command_from_edge_: delivering 5 cell forward.
2022-02-01 17:32:42.258 D: DEBUG: relay_encrypt_cell_outbound: encrypting a layer of the relay cell.
2022-02-01 17:32:42.258 D: DEBUG: connection_or_process_cells_from_inbuf: 75: starting, inbuf_datalen 1928 (0 pending in tls object).
2022-02-01 17:32:42.258 D: DEBUG: channel_process_cell: Processing incoming cell_t 0x7101bb2658 for channel 0xb40000725ab36a00 (global ID 5)
2022-02-01 17:32:42.258 D: DEBUG: circuit_get_by_circid_channel_impl: circuit_get_by_circid_channel_impl() returning circuit 0xb40000725ab377e0 for circ_id 2917690044, channel ID 5 (0xb40000725ab36a00)
2022-02-01 17:32:42.258 D: DEBUG: relay_lookup_conn: found conn for stream 171.
2022-02-01 17:32:42.258 D: DEBUG: circuit_receive_relay_cell: Sending to origin.
2022-02-01 17:32:42.258 D: DEBUG: connection_edge_process_relay_cell: Now seen 2111 relay cells here (command 2, stream 171).
2022-02-01 17:32:42.258 D: DEBUG: sendme_circuit_data_received: Circuit deliver_window now 999.
2022-02-01 17:32:42.258 D: DEBUG: connection_or_process_cells_from_inbuf: 75: starting, inbuf_datalen 1414 (0 pending in tls object).
2022-02-01 17:32:42.258 D: DEBUG: channel_process_cell: Processing incoming cell_t 0x7101bb2658 for channel 0xb40000725ab36a00 (global ID 5)
2022-02-01 17:32:42.259 D: DEBUG: circuit_get_by_circid_channel_impl: circuit_get_by_circid_channel_impl() returning circuit 0xb40000725ab377e0 for circ_id 2917690044, channel ID 5 (0xb40000725ab36a00)
2022-02-01 17:32:42.259 D: DEBUG: relay_lookup_conn: found conn for stream 171.
2022-02-01 17:32:42.259 D: DEBUG: circuit_receive_relay_cell: Sending to origin.
2022-02-01 17:32:42.259 D: DEBUG: connection_edge_process_relay_cell: Now seen 2112 relay cells here (command 2, stream 171).
2022-02-01 17:32:42.259 D: DEBUG: sendme_circuit_data_received: Circuit deliver_window now 998.
2022-02-01 17:32:42.259 D: DEBUG: connection_or_process_cells_from_inbuf: 75: starting, inbuf_datalen 900 (0 pending in tls object).
2022-02-01 17:32:42.259 D: DEBUG: channel_process_cell: Processing incoming cell_t 0x7101bb2658 for channel 0xb40000725ab36a00 (global ID 5)
2022-02-01 17:32:42.259 D: DEBUG: circuit_get_by_circid_channel_impl: circuit_get_by_circid_channel_impl() returning circuit 0xb40000725ab377e0 for circ_id 2917690044, channel ID 5 (0xb40000725ab36a00)
2022-02-01 17:32:42.259 D: DEBUG: relay_lookup_conn: found conn for stream 171.
2022-02-01 17:32:42.259 D: DEBUG: circuit_receive_relay_cell: Sending to origin.
2022-02-01 17:32:42.259 D: DEBUG: connection_edge_process_relay_cell: Now seen 2113 relay cells here (command 2, stream 171).
2022-02-01 17:32:42.259 D: DEBUG: sendme_circuit_data_received: Circuit deliver_window now 997.
2022-02-01 17:32:42.259 D: DEBUG: connection_or_process_cells_from_inbuf: 75: starting, inbuf_datalen 386 (0 pending in tls object).
2022-02-01 17:32:42.259 D: DEBUG: scheduler_evt_callback: Scheduler event callback called
2022-02-01 17:32:42.260 D: DEBUG: update_socket_info: chan=5 updated socket info, limit: 34752, cwnd: 0, unacked: 0, notsent: 0, mss: 0
2022-02-01 17:32:42.260 D: DEBUG: kist_scheduler_run: Running the scheduler. 1 channels pending
2022-02-01 17:32:42.260 D: DEBUG: outbuf_table_add: scheduler init outbuf info for chan=5
2022-02-01 17:32:42.260 D: DEBUG: channel_write_packed_cell: Writing 0xb40000725ab49b40 to channel 0xb40000725ab36a00 with global ID 5
2022-02-01 17:32:42.260 D: DEBUG: update_socket_written: chan=5 wrote 543 bytes, old was 0
2022-02-01 17:32:42.260 D: DEBUG: scheduler_set_channel_state: chan 5 changed from scheduler state PENDING to PENDING
2022-02-01 17:32:42.260 D: DEBUG: channel_write_packed_cell: Writing 0xb40000725ab46d00 to channel 0xb40000725ab36a00 with global ID 5
2022-02-01 17:32:42.260 D: DEBUG: update_socket_written: chan=5 wrote 543 bytes, old was 543
2022-02-01 17:32:42.260 D: DEBUG: scheduler_set_channel_state: chan 5 changed from scheduler state PENDING to PENDING
2022-02-01 17:32:42.260 D: DEBUG: channel_write_packed_cell: Writing 0xb40000725ab45390 to channel 0xb40000725ab36a00 with global ID 5
2022-02-01 17:32:42.260 D: DEBUG: update_socket_written: chan=5 wrote 543 bytes, old was 1086
2022-02-01 17:32:42.261 D: DEBUG: scheduler_set_channel_state: chan 5 changed from scheduler state PENDING to PENDING
2022-02-01 17:32:42.261 D: DEBUG: channel_write_packed_cell: Writing 0xb40000725ab2b350 to channel 0xb40000725ab36a00 with global ID 5
2022-02-01 17:32:42.261 D: DEBUG: update_socket_written: chan=5 wrote 543 bytes, old was 1629
2022-02-01 17:32:42.261 D: DEBUG: scheduler_set_channel_state: chan 5 changed from scheduler state PENDING to PENDING
2022-02-01 17:32:42.261 D: DEBUG: channel_write_packed_cell: Writing 0xb40000725ab290a0 to channel 0xb40000725ab36a00 with global ID 5
2022-02-01 17:32:42.261 D: DEBUG: channel_flush_from_first_active_circuit: Made a circuit inactive.
2022-02-01 17:32:42.261 D: DEBUG: update_socket_written: chan=5 wrote 543 bytes, old was 2172
2022-02-01 17:32:42.261 D: DEBUG: scheduler_set_channel_state: chan 5 changed from scheduler state PENDING to WAITING_FOR_CELLS
2022-02-01 17:32:42.261 D: DEBUG: channel_write_to_kernel: Writing 2570 bytes to kernel for chan 5
2022-02-01 17:32:42.261 D: DEBUG: flush_chunk_tls: flushed 2570 bytes, 0 remain.
2022-02-01 17:32:42.261 D: DEBUG: connection_handle_write_impl: After TLS write of 2570: 0 read, 2599 written
2022-02-01 17:32:42.261 D: DEBUG: free_outbuf_info_by_ent: Freeing outbuf table entry from chan=5
2022-02-01 17:32:42.261 D: DEBUG: kist_scheduler_run: len pending=0, len to_readd=-1
2022-02-01 17:32:42.261 D: DEBUG: conn_read_callback: socket -1 wants to read.
2022-02-01 17:32:42.262 D: DEBUG: connection_dir_process_inbuf: Got data, not eof. Leaving on inbuf.
2022-02-01 17:32:42.262 D: DEBUG: conn_read_callback: socket -1 wants to read.
2022-02-01 17:32:42.262 D: DEBUG: connection_dir_process_inbuf: Got data, not eof. Leaving on inbuf.
2022-02-01 17:32:42.262 D: DEBUG: conn_read_callback: socket 75 wants to read.
2022-02-01 17:32:42.262 D: DEBUG: connection_buf_read_from_socket: 75: starting, inbuf_datalen 386 (0 pending in tls object). at_most 14858.
2022-02-01 17:32:42.262 D: DEBUG: tor_tls_read: read returned r=-1, err=-2
2022-02-01 17:32:42.262 D: DEBUG: connection_buf_read_from_socket: After TLS read of 4176: 5189 read, 0 written
2022-02-01 17:32:42.262 D: DEBUG: connection_or_process_cells_from_inbuf: 75: starting, inbuf_datalen 4562 (0 pending in tls object).
2022-02-01 17:32:42.262 D: DEBUG: channel_process_cell: Processing incoming cell_t 0x7101bb2658 for channel 0xb40000725ab36a00 (global ID 5)
2022-02-01 17:32:42.262 D: DEBUG: circuit_get_by_circid_channel_impl: circuit_get_by_circid_channel_impl() returning circuit 0xb40000725ab377e0 for circ_id 2917690044, channel ID 5 (0xb40000725ab36a00)
2022-02-01 17:32:42.262 D: DEBUG: relay_lookup_conn: found conn for stream 171.
2022-02-01 17:32:42.262 D: DEBUG: circuit_receive_relay_cell: Sending to origin.
```
```
2022-02-01 17:32:42.372 D: DEBUG: channel_process_cell: Processing incoming cell_t 0x7101bb2658 for channel 0xb40000725ab36a00 (global ID 5)
2022-02-01 17:32:42.372 D: DEBUG: circuit_get_by_circid_channel_impl: circuit_get_by_circid_channel_impl() returning circuit 0xb40000725ab377e0 for circ_id 2917690044, channel ID 5 (0xb40000725ab36a00)
2022-02-01 17:32:42.372 D: DEBUG: relay_lookup_conn: found conn for stream 171.
2022-02-01 17:32:42.372 D: DEBUG: circuit_receive_relay_cell: Sending to origin.
2022-02-01 17:32:42.373 D: DEBUG: connection_edge_process_relay_cell: Now seen 2281 relay cells here (command 2, stream 171).
2022-02-01 17:32:42.373 D: DEBUG: sendme_circuit_data_received: Circuit deliver_window now 929.
2022-02-01 17:32:42.373 D: DEBUG: connection_or_process_cells_from_inbuf: 75: starting, inbuf_datalen 5140 (0 pending in tls object).
2022-02-01 17:32:42.373 D: DEBUG: channel_process_cell: Processing incoming cell_t 0x7101bb2658 for channel 0xb40000725ab36a00 (global ID 5)
2022-02-01 17:32:42.373 D: DEBUG: circuit_get_by_circid_channel_impl: circuit_get_by_circid_channel_impl() returning circuit 0xb40000725ab377e0 for circ_id 2917690044, channel ID 5 (0xb40000725ab36a00)
2022-02-01 17:32:42.373 D: DEBUG: relay_lookup_conn: found conn for stream 171.
2022-02-01 17:32:42.373 D: DEBUG: circuit_receive_relay_cell: Sending to origin.
2022-02-01 17:32:42.373 D: DEBUG: connection_edge_process_relay_cell: Now seen 2282 relay cells here (command 2, stream 171).
2022-02-01 17:32:42.373 D: DEBUG: sendme_circuit_data_received: Circuit deliver_window now 928.
2022-02-01 17:32:42.373 D: DEBUG: connection_or_process_cells_from_inbuf: 75: starting, inbuf_datalen 4626 (0 pending in tls object).
2022-02-01 17:32:42.373 D: DEBUG: channel_process_cell: Processing incoming cell_t 0x7101bb2658 for channel 0xb40000725ab36a00 (global ID 5)
2022-02-01 17:32:42.373 D: DEBUG: circuit_get_by_circid_channel_impl: circuit_get_by_circid_channel_impl() returning circuit 0xb40000725ab377e0 for circ_id 2917690044, channel ID 5 (0xb40000725ab36a00)
2022-02-01 17:32:42.374 D: DEBUG: relay_lookup_conn: found conn for stream 171.
2022-02-01 17:32:42.374 D: DEBUG: circuit_receive_relay_cell: Sending to origin.
2022-02-01 17:32:42.374 D: DEBUG: connection_edge_process_relay_cell: Now seen 2283 relay cells here (command 2, stream 171).
2022-02-01 17:32:42.374 D: DEBUG: sendme_circuit_data_received: Circuit deliver_window now 927.
2022-02-01 17:32:42.374 D: DEBUG: connection_or_process_cells_from_inbuf: 75: starting, inbuf_datalen 4112 (0 pending in tls object).
2022-02-01 17:32:42.374 D: DEBUG: channel_process_cell: Processing incoming cell_t 0x7101bb2658 for channel 0xb40000725ab36a00 (global ID 5)
2022-02-01 17:32:42.374 D: DEBUG: circuit_get_by_circid_channel_impl: circuit_get_by_circid_channel_impl() returning circuit 0xb40000725ab377e0 for circ_id 2917690044, channel ID 5 (0xb40000725ab36a00)
2022-02-01 17:32:42.374 D: DEBUG: relay_lookup_conn: found conn for stream 171.
2022-02-01 17:32:42.374 D: DEBUG: circuit_receive_relay_cell: Sending to origin.
2022-02-01 17:32:42.374 D: DEBUG: connection_edge_process_relay_cell: Now seen 2284 relay cells here (command 2, stream 171).
2022-02-01 17:32:42.374 D: DEBUG: sendme_circuit_data_received: Circuit deliver_window now 926.
2022-02-01 17:32:42.374 D: DEBUG: connection_or_process_cells_from_inbuf: 75: starting, inbuf_datalen 3598 (0 pending in tls object).
2022-02-01 17:32:42.374 D: DEBUG: channel_process_cell: Processing incoming cell_t 0x7101bb2658 for channel 0xb40000725ab36a00 (global ID 5)
2022-02-01 17:32:42.374 D: DEBUG: circuit_get_by_circid_channel_impl: circuit_get_by_circid_channel_impl() returning circuit 0xb40000725ab377e0 for circ_id 2917690044, channel ID 5 (0xb40000725ab36a00)
2022-02-01 17:32:42.374 D: DEBUG: relay_lookup_conn: found conn for stream 171.
2022-02-01 17:32:42.375 D: DEBUG: circuit_receive_relay_cell: Sending to origin.
2022-02-01 17:32:42.375 D: DEBUG: connection_edge_process_relay_cell: Now seen 2285 relay cells here (command 2, stream 171).
2022-02-01 17:32:42.375 D: DEBUG: sendme_circuit_data_received: Circuit deliver_window now 925.
2022-02-01 17:32:42.375 D: DEBUG: connection_or_process_cells_from_inbuf: 75: starting, inbuf_datalen 3084 (0 pending in tls object).
2022-02-01 17:32:42.375 D: DEBUG: channel_process_cell: Processing incoming cell_t 0x7101bb2658 for channel 0xb40000725ab36a00 (global ID 5)
2022-02-01 17:32:42.375 D: DEBUG: circuit_get_by_circid_channel_impl: circuit_get_by_circid_channel_impl() returning circuit 0xb40000725ab377e0 for circ_id 2917690044, channel ID 5 (0xb40000725ab36a00)
2022-02-01 17:32:42.375 D: DEBUG: relay_lookup_conn: found conn for stream 171.
2022-02-01 17:32:42.375 D: DEBUG: circuit_receive_relay_cell: Sending to origin.
2022-02-01 17:32:42.375 D: DEBUG: connection_edge_process_relay_cell: Now seen 2286 relay cells here (command 2, stream 171).
2022-02-01 17:32:42.375 D: DEBUG: sendme_circuit_data_received: Circuit deliver_window now 924.
2022-02-01 17:32:42.375 D: DEBUG: connection_or_process_cells_from_inbuf: 75: starting, inbuf_datalen 2570 (0 pending in tls object).
2022-02-01 17:32:42.375 D: DEBUG: channel_process_cell: Processing incoming cell_t 0x7101bb2658 for channel 0xb40000725ab36a00 (global ID 5)
2022-02-01 17:32:42.375 D: DEBUG: circuit_get_by_circid_channel_impl: circuit_get_by_circid_channel_impl() returning circuit 0xb40000725ab377e0 for circ_id 2917690044, channel ID 5 (0xb40000725ab36a00)
2022-02-01 17:32:42.376 D: DEBUG: relay_lookup_conn: found conn for stream 171.
2022-02-01 17:32:42.376 D: DEBUG: circuit_receive_relay_cell: Sending to origin.
2022-02-01 17:32:42.376 D: DEBUG: connection_edge_process_relay_cell: Now seen 2287 relay cells here (command 2, stream 171).
2022-02-01 17:32:42.376 D: DEBUG: sendme_circuit_data_received: Circuit deliver_window now 923.
2022-02-01 17:32:42.376 D: DEBUG: connection_or_process_cells_from_inbuf: 75: starting, inbuf_datalen 2056 (0 pending in tls object).
2022-02-01 17:32:42.376 D: DEBUG: channel_process_cell: Processing incoming cell_t 0x7101bb2658 for channel 0xb40000725ab36a00 (global ID 5)
2022-02-01 17:32:42.376 D: DEBUG: circuit_get_by_circid_channel_impl: circuit_get_by_circid_channel_impl() returning circuit 0xb40000725ab377e0 for circ_id 2917690044, channel ID 5 (0xb40000725ab36a00)
2022-02-01 17:32:42.376 D: DEBUG: relay_lookup_conn: found conn for stream 171.
2022-02-01 17:32:42.376 D: DEBUG: circuit_receive_relay_cell: Sending to origin.
2022-02-01 17:32:42.376 D: DEBUG: connection_edge_process_relay_cell: Now seen 2288 relay cells here (command 2, stream 171).
2022-02-01 17:32:42.376 D: DEBUG: sendme_circuit_data_received: Circuit deliver_window now 922.
2022-02-01 17:32:42.376 D: DEBUG: connection_or_process_cells_from_inbuf: 75: starting, inbuf_datalen 1542 (0 pending in tls object).
2022-02-01 17:32:42.376 D: DEBUG: channel_process_cell: Processing incoming cell_t 0x7101bb2658 for channel 0xb40000725ab36a00 (global ID 5)
2022-02-01 17:32:42.377 D: DEBUG: circuit_get_by_circid_channel_impl: circuit_get_by_circid_channel_impl() returning circuit 0xb40000725ab377e0 for circ_id 2917690044, channel ID 5 (0xb40000725ab36a00)
2022-02-01 17:32:42.377 D: DEBUG: relay_lookup_conn: found conn for stream 171.
2022-02-01 17:32:42.377 D: DEBUG: circuit_receive_relay_cell: Sending to origin.
2022-02-01 17:32:42.377 D: DEBUG: connection_edge_process_relay_cell: Now seen 2289 relay cells here (command 2, stream 171).
2022-02-01 17:32:42.377 D: DEBUG: sendme_circuit_data_received: Circuit deliver_window now 921.
2022-02-01 17:32:42.377 D: DEBUG: connection_or_process_cells_from_inbuf: 75: starting, inbuf_datalen 1028 (0 pending in tls object).
```
Letting Tor sit there does not produce any more log messages, and the process is still shown as running.
### Possible fixes
???