Tor issues: https://gitlab.torproject.org/tpo/core/tor/-/issues

https://gitlab.torproject.org/tpo/core/tor/-/issues/25461
main event-loop spins consuming 100% of a CPU core running scheduler_set_channel_state (Dhalgren, updated 2022-06-24)

Lately I have observed my exit hitting 100% CPU on the main event-loop thread, sometimes continuously, sometimes cyclically. I captured a full-debug log of a recent cyclical event where CPU started at 30% and rose to 100%, covering about one cycle. I chopped the 1 GB log into eight slices and took a simple function call-count histogram. What is notable is not an increase of calls during saturation, but a reduction in several that seem to relate to connection-close events (conn_close_if_marked, flush_chunk). The left column is from the first slice, where CPU was at 30%; the right column is from the fourth slice, where CPU was at 100%. Functions with fewer than 1000 calls are not included below, but the complete histograms are attached. I wrote about this on tor-relays:
https://lists.torproject.org/pipermail/tor-relays/2018-March/014730.html
This might be an attack of some kind, or perhaps a misbehavior related to the KIST scheduler.
```
append_cell_to_circuit_queue 6787 append_cell_to_circuit_queue 7280
channel_flush_from_first_active_circuit 6781 channel_flush_from_first_active_circuit 7190
channel_process_cell 11904 channel_process_cell 11813
channel_write_packed_cell 120301 channel_write_packed_cell 126330
channel_write_to_kernel 8588 channel_write_to_kernel 10048
circuit_consider_stop_edge_reading 146965 circuit_consider_stop_edge_reading 152665
circuit_get_by_circid_channel_impl 14128 circuit_get_by_circid_channel_impl 13468
circuit_receive_relay_cell 11483 circuit_receive_relay_cell 11341
circuit_resume_edge_reading 1203 circuit_resume_edge_reading 1231
conn_close_if_marked 39033 conn_close_if_marked 779
conn_read_callback 14743 conn_read_callback 15645
conn_write_callback 4531 conn_write_callback 4447
connection_add_impl 1023 connection_add_impl 739
connection_bucket_refill_helper 14787 connection_bucket_refill_helper 15842
connection_buf_read_from_socket 16196 connection_buf_read_from_socket 17152
connection_connect 1016 connection_connect 732
connection_connect_sockaddr 1016 connection_connect_sockaddr 732
connection_edge_package_raw_inbuf 237303 connection_edge_package_raw_inbuf 255347
connection_edge_process_relay_cell 22219 connection_edge_process_relay_cell 22332
connection_exit_begin_conn 3165 connection_exit_begin_conn 2315
connection_exit_connect 1050 connection_exit_connect 772
connection_handle_write_impl 9240 connection_handle_write_impl 10539
connection_or_process_cells_from_inbuf 20042 connection_or_process_cells_from_inbuf 20448
flush_chunk 38192 flush_chunk 12
flush_chunk_tls 22283 flush_chunk_tls 24061
free_outbuf_info_by_ent 8588 free_outbuf_info_by_ent 10047
outbuf_table_add 8588 outbuf_table_add 10014
read_to_chunk 6856 read_to_chunk 7254
relay_lookup_conn 8459 relay_lookup_conn 8525
relay_send_command_from_edge_ 119963 relay_send_command_from_edge_ 128738
rep_hist_note_exit_bytes 13913 rep_hist_note_exit_bytes 14534
scheduler_set_channel_state 126896 scheduler_set_channel_state 133353
update_socket_info 6719 update_socket_info 7160
update_socket_written 120297 update_socket_written 126327
```

https://gitlab.torproject.org/tpo/core/tor/-/issues/25416
[warn] Received http status code 404 ("Consensus is too old") from server '194.109.206.212:80' while fetching consensus directory (toralf, updated 2022-06-16)

The local time is OK:
```
date -u
Sat Mar 3 16:18:01 UTC 2018
cat </dev/tcp/time.nist.gov/13
ntpdate -q 2.de.pool.ntp.org
server 2a01:4f8:172:326b::2, stratum 2, offset -0.000140, delay 0.02594
server 2a01:4f8:221:3b02::101:1, stratum 2, offset -0.000601, delay 0.02594
server 2a01:238:439c:1900::3:1, stratum 2, offset 0.000477, delay 0.04520
server 2a02:16d0:0:4::4, stratum 2, offset -0.002245, delay 0.03976
server 90.187.7.5, stratum 2, offset -0.002271, delay 0.04788
server 129.70.132.36, stratum 2, offset 0.002152, delay 0.04262
server 145.239.3.131, stratum 2, offset 0.001487, delay 0.03177
server 85.236.36.4, stratum 2, offset -0.000501, delay 0.03650
3 Mar 17:18:09 ntpdate[18195]: adjust time server 2a01:4f8:221:3b02::101:1 offset -0.000601 sec
```
And I still get this once a day or so:
```
Mar 03 17:18:01.000 [warn] Received http status code 404 ("Consensus is too old") from server '194.109.206.212:80' while fetching consensus directory.
Mar 03 17:18:01.000 [notice] Tor 0.3.4.0-alpha-dev (git-efc105716283bbdf) opening log file.
Mar 03 17:18:02.000 [warn] Received http status code 404 ("Consensus is too old") from server '194.109.206.212:80' while fetching consensus directory.
```
tor version is 0.3.4-alpha-dev

https://gitlab.torproject.org/tpo/core/tor/-/issues/25372
relay: Allocation for compression goes very high (David Goulet, updated 2023-06-15)

My relay just OOMed some circuits with filled-up queues (legacy/trac#25226), but then a useful log line was printed showing that the total allocation for compression is huge.
```
Feb 27 20:02:55.718 [notice] We're low on memory (cell queues total alloc: 232279872 buffer total alloc: 1937408, tor compress total alloc: 878586075 rendezvous cache total alloc: 4684497). Killing circuits with over-long queues. (This behavior is controlled by MaxMemInQueues.)
```
That is `878586075 bytes = ~838MB`. My relay is hovering around 1.4GB of RAM right now, which means roughly 60% of the RAM in use is in the compression subsystem.
I'm not sure where it all comes from; the relay is serving directory data, but I doubt that, *compressed*, it comes to 800+ MB...
Datapoint:
```
$ du -sh diff-cache/
131M diff-cache/
```

https://gitlab.torproject.org/tpo/core/tor/-/issues/25173
No Control Socket when DisableNetwork and User options are set (iry, updated 2022-09-01)

To successfully reproduce this, we need to:
0. set DisableNetwork to 1
1. use the User option as part of the Tor configuration
2. run Tor via sudo from a different user in a different group
Here are the specific steps to reproduce it. I tested it on Debian Stretch, but it should be distribution-independent:
```
user at host:~$ cat /home/user/my.torrc
DataDirectory /tmp/tor
ControlSocket /tmp/tor/control.sock
ControlSocketsGroupWritable 1
CookieAuthentication 1
CookieAuthFileGroupReadable 1
CookieAuthFile /tmp/tor/control.authcookie
SocksPort unix:/tmp/tor/socks.sock
```
```
user at host:~$ sudo /usr/bin/install -Z \
-m 02755 -o debian-tor \
-g debian-tor -d /tmp/tor
```
```
user at host:~$ ls -ld /tmp/tor/; ls -l /tmp/tor/
drwxr-s--- 2 debian-tor debian-tor 40 Feb 3 18:19 /tmp/tor/
total 0
```
```
user at host:~$ sudo /usr/bin/tor \
-f /home/user/my.torrc \
--User debian-tor \
--DisableNetwork 1
```
control.sock should exist, but it does not:
```
user at host:~$ ls -ld /tmp/tor/; sudo ls -l /tmp/tor/
drwx--S--- 2 debian-tor debian-tor 100 Feb 3 20:00 /tmp/tor/
total 8
-rw-r----- 1 debian-tor debian-tor 32 Feb 3 20:00 control.authcookie
-rw------- 1 debian-tor debian-tor 0 Feb 3 20:00 lock
-rw------- 1 debian-tor debian-tor 215 Feb 3 20:00 state
```
To make Tor really open control.sock, we need to reload Tor (yes,
even though we just started it):
```
user at host:~$ ps -A | grep tor
863 ? 00:00:00 xenstore-watch
927 ? 00:00:04 tor-controlport
11851 pts/0 00:00:00 tor
```
```
user at host:~$ sudo /bin/kill -HUP 11851
```
```
user at host:~$ ls -ld /tmp/tor/; sudo ls -l /tmp/tor/
drwx--S--- 2 debian-tor debian-tor 120 Feb 3 20:01 /tmp/tor/
total 8
-rw-r----- 1 debian-tor debian-tor 32 Feb 3 20:01 control.authcookie
srw-rw---- 1 debian-tor debian-tor 0 Feb 3 20:01 control.sock
-rw------- 1 debian-tor debian-tor 0 Feb 3 20:01 lock
-rw------- 1 debian-tor debian-tor 215 Feb 3 20:01 state
```
I guess the reason Yawning was not able to reproduce it is that the User
option was not set:
```
user at host:~$ sudo -u debian-tor \
/usr/bin/tor -f /home/user/my.torrc \
--DisableNetwork 1
[notice] Opening Control listener on /tmp/tor/control.sock
```
I was thinking that Tor fixing /tmp/tor/ to 2700 may be the reason, but then
I cannot explain why this works with /tmp/tor/ set to 2700:
```
user at host:~$ sudo /usr/bin/tor \
-f /home/user/my.torrc \
--User debian-tor \
--DisableNetwork 0
[notice] Opening Control listener on /tmp/tor/control.sock
```

https://gitlab.torproject.org/tpo/core/tor/-/issues/24907
Stop ignoring should_refuse_unknown_exits() for unauthenticated channels (teor, updated 2022-06-24)

The brackets in this code are wrong in at least two ways:
* they unconditionally reject client channels
* there are two at the start
```
if ((client_chan ||
(!connection_or_digest_is_known_relay(
or_circ->p_chan->identity_digest) &&
should_refuse_unknown_exits(options)))) {
```
Bugfix on d52a1e2 in legacy/trac#22060; note that 66aff2d was merged into that code in 0.3.2. (Assigned to Nick Mathewson.)

https://gitlab.torproject.org/tpo/core/tor/-/issues/24857
Tor uses 100% CPU when accessing the cache directory on Windows (Trac, updated 2022-03-16)

Hi,
I have tor 0.3.1.9 running as a non-exit relay.
From time to time it starts consuming 100% of CPU, even though network traffic is minimal.
Is it normal? If not, how can I help you to investigate it?
My system is Win 7 x64 SP1, cpu is Core i5-3570.
**Trac**:
**Username**: Eugene646

(Assigned to Alexander Færøy.)

https://gitlab.torproject.org/tpo/core/tor/-/issues/24731
Stop checking routerinfos for addresses when we use microdescs for circuits (teor, updated 2022-09-01)

Directory mirrors, and clients that set FetchUselessDescriptors, check for IPv4 and IPv6 addresses in the following order:
* routerinfos (descriptors)
* routerstatus (consensus)
* microdescriptors
But they should check using the following order:
* bridge routerinfos (descriptors)
* routerstatus (consensus)
If using microdescriptors for circuits:
* microdescriptors
Otherwise:
* routerinfos (descriptors)
There is code that implements this algorithm in commits decb0636e2, 1d1c927b9a, and 4979ec3c17 of my bug23975_tree branch.
But this adds overhead to every address lookup when building circuits.
Maybe we can make it faster by:
* not parsing routerinfos or microdescs if we aren't using them for circuits, or
* putting a canonical address in node_t, updating it whenever ri, rs, or md change, and always using it

https://gitlab.torproject.org/tpo/core/tor/-/issues/24668
sched: scheduler_compare_channels() will never pick a channel with no active circuits (David Goulet, updated 2022-09-01)

In the schedulers, scheduler_compare_channels() is used to decide which channel is 'best' to write data from. It delegates to circuitmux_compare_muxes(), which delegates to ewma_cmp_cmux().
But ewma_cmp_cmux() will never prefer a cmux with no active circuits on it! So a channel without active circuits will never be picked by the scheduler to flush from a circuit, which is what triggers flushing from its destroy queue. So the channel will stay around forever, never flushing.
To fix this one, we probably have to fix ewma_cmp_cmux() to look at destroy cells too (somehow). And we still need to make sure that the scheduler's position in the heap changes when the data considered by scheduler_compare_channels() changes [*].
[*] I'm not convinced that we're even doing this right with the current scheduler_compare_channels() code. :(

https://gitlab.torproject.org/tpo/core/tor/-/issues/24449
sched: KIST scheduler should handle limited or failed connection write (David Goulet, updated 2022-09-01)

This is specific to KIST as far as I can tell.
KIST will flush cells one by one from the circuit queue to the outbuf as long as the socket TCP limit allows it. Now, I've seen a normal relay using KIST flush **164** cells at once onto the outbuf. That is fine; it is only 83968 bytes.
Then, at some point, it will write to the kernel with `connection_handle_write(conn, 0)`. The return value is ignored, which is not good, because that function limits the number of bytes written to a maximum of ~8KB (~16 cells):
```
max_to_write = force ? (ssize_t)conn->outbuf_flushlen
: connection_bucket_write_limit(conn, now);
```
We do not call the function with `force = 1`, which would make us flush them all. And we probably don't want to, because `force = 0` respects our bandwidth rate limit, if any.
So I think we might want KIST to be a bit wiser here and, on a per-channel basis, decide on a maximum number of cells it can flush that would respect our bucket size and priority.

https://gitlab.torproject.org/tpo/core/tor/-/issues/23712
sched: DESTROY cell on a circuit bypasses the scheduler (David Goulet, updated 2022-09-01)

If you look at `circuitmux_append_destroy_cell()`, it is the one appending a DESTROY cell to the cmux queue; it then calls `channel_flush_from_first_active_circuit()` if no writes are pending, that is, if the outbuf is empty (it also looks at the out queue, but that is always empty: legacy/trac#23709).
When the flush is triggered, the cell is immediately put in the outbuf and written to the kernel by libevent, which completely bypasses the scheduler. Maybe that is what we want, that is, to destroy a circuit as fast as we can? I don't know, but it has this effect on the scheduler: the channel is scheduled with a "wants_to_write" event from the connection subsystem, and ultimately the channel gets scheduled with nothing in the queue, because the cell is already in the outbuf. For KIST, this is not ideal, because KIST should control the flow of data to the kernel.
It seems there are two places where we queue cells into a cmux queue: `circuitmux_append_destroy_cell()` and `append_cell_to_circuit_queue()`. The latter triggers a "has waiting cells" notification for the scheduler, which is what we want, but the former just bypasses it.
I think it should simply notify the scheduler instead of flushing by itself.

https://gitlab.torproject.org/tpo/core/tor/-/issues/23711
sched: KIST writes to kernel and gets a "wants to write" notification right after (David Goulet, updated 2022-09-01)

The KIST scheduler, unlike the vanilla scheduler, writes to the kernel itself. This is done through `channel_write_to_kernel()`, which calls `connection_handle_write()`.
That last function will ultimately call `connection_or_flushed_some()`, which triggers `scheduler_channel_wants_writes()` because of this condition:
```
datalen = connection_get_outbuf_len(TO_CONN(conn));
if (datalen < OR_CONN_LOWWATER) {
scheduler_channel_wants_writes(TLS_CHAN_TO_BASE(conn->chan));
```
That is OK if `datalen > 0`, but useless if `datalen == 0`. For KIST, it makes the channel go back into pending state and get scheduled because it wants to write. But then, if the outbuf or the cmux queue is empty, we end up scheduling a channel that actually does NOT need to write at all.
The fix here could be as simple as:
```
if (datalen > 0 && datalen < OR_CONN_LOWWATER) {
```
I suspect that with KIST the datalen will always be 0, because KIST in theory controls exactly what goes into the outbuf and what can be written to the kernel, so when it triggers a connection write(), the entire outbuf should be drained (in theory). So the effect is that every write to the kernel from KIST triggers a useless "wants to write" event, rescheduling the channel. Note that this only happens if the channel is in `SCHED_CHAN_WAITING_TO_WRITE` state.

https://gitlab.torproject.org/tpo/core/tor/-/issues/23570
Tor sometimes loses the last few log lines on shutdown on macOS (teor, updated 2022-09-01)

There seems to be a race condition between writing log lines to log files and some other tor shutdown/free/process-termination step.
When I'm using chutney on macOS 10.12, I see these log lines in 000a:
```
Sep 19 10:53:00.280 [notice] Interrupt: we have stopped accepting new connections, and will shut down in 2 seconds. Interrupt again to exit now.
Sep 19 10:53:00.870 [notice] Time to fetch any signatures that we're missing.
Sep 19 10:53:01.871 [notice] Time to publish the consensus and discard old votes
Sep 19 10:53:01.874 [notice] Published ns consensus
Sep 19 10:53:01.876 [notice] Choosing expected valid-after time as 2017-09-19 00:53:10: consensus_set=1, interval=10
Sep 19 10:53:01.876 [notice] Published microdesc consensus
Sep 19 10:53:01.879 [notice] Choosing expected valid-after time as 2017-09-19 00:53:10: consensus_set=1, interval=10
Sep 19 10:53:01.879 [notice] Choosing expected valid-after time as 2017-09-19 00:53:10: consensus_set=1, interval=10
```
But these lines in 001a:
```
Sep 19 10:53:00.281 [notice] Interrupt: we have stopped accepting new connections, and will shut down in 2 seconds. Interrupt again to exit now.
Sep 19 10:53:01.037 [notice] Time to publish the consensus and discard old votes
Sep 19 10:53:01.039 [notice] Published ns consensus
Sep 19 10:53:01.040 [notice] Choosing expected valid-after time as 2017-09-19 00:53:10: consensus_set=1, interval=10
Sep 19 10:53:01.041 [notice] Published microdesc consensus
Sep 19 10:53:01.043 [notice] Choosing expected valid-after time as 2017-09-19 00:53:10: consensus_set=1, interval=10
Sep 19 10:53:01.043 [notice] Choosing expected valid-after time as 2017-09-19 00:53:10: consensus_set=1, interval=10
Sep 19 10:53:02.038 [notice] Clean shutdown finished. Exiting.
Sep 19 10:53:02.038 [notice] SIGINT received a second time; exiting now.
```
But as far as I can tell, both are executing exactly the same code.

https://gitlab.torproject.org/tpo/core/tor/-/issues/23168
Guard sample calls relay descriptors a "consensus" (teor, updated 2022-02-07)

[info] router_load_routers_from_string: 96 elements to add
[info] sampled_guards_update_from_consensus: Updating sampled guard status based on received consensus.
The message should either say "received directory document(s)" or actually describe the directory document it just received.

https://gitlab.torproject.org/tpo/core/tor/-/issues/22495
Partial write in key-pinning-journal results in corrupted line (George Kadianakis, updated 2022-09-01)

Tor sees a corrupted line in its `key-pinning-journal` because of a truncated line:
```
@opened-at 2016-08-07 20:49:24
wyD2E2ZG/fDQFbiQbz63VcvSKFo TNh6rQcairXqej0dOoRWOF93Zra+o+x+9b0VbiAG8zI
DahGjy7upvyovkp1sJ1C+/wKmT4 TNh6rQcairXqej0dOoRWOF93Zra+o+x+9b0VbiAG8zI
...
wyD2E2ZG/fDQFbiQbz63VcvSKFo TNh6rQcairXqej0dOoRWOF93Zra+o+x+9b0VbiAG8zI
DahGjy7upvyovkp1sJ1C+/wKmT4 TNh6rQcairXq
@opened-at 2016-10-05 20:02:15
DahGjy7upvyovkp1sJ1C+/wKmT4 TNh6rQcairXqej0dOoRWOF93Zra+o+x+9b0VbiAG8zI
```
Nick says Tor uses `fwrite()` when it should be using `write()` to write to that file.

https://gitlab.torproject.org/tpo/core/tor/-/issues/21967
tor fails to kill its pluggable transports when it's done using them (George Kadianakis, updated 2022-02-28)

It seems Tor does not successfully kill obfs4proxy (or any other transport) when it's no longer used. It's unclear why. Maybe obfs4proxy never receives the SIGTERM, or never notices that its stdin is closed?
Experiment:
- Startup Tor Browser using default obfs4 bridges.
- Check that obfs4proxy process has spawned (`ps -fax | grep obfs`)
- Reconfigure Tor Browser to not use bridges at all.
- Check that the obfs4proxy process is still there...

(Assigned to Alexander Færøy.)

https://gitlab.torproject.org/tpo/core/tor/-/issues/21525
Bootstrapping authorities sometimes expect a vote valid-after time of 0 (teor, updated 2022-09-01)

`Rejecting vote from 127.0.0.1 with valid-after time of 2017-02-22 05:41:00; we were expecting 1970-01-01 00:00:001`
This doesn't stop them from bootstrapping, but I wonder if it could ever become a problem?

https://gitlab.torproject.org/tpo/core/tor/-/issues/21508
POSIX and Windows may interpret directory document whitespace differently (teor, updated 2022-09-01)

On Windows, _atoi64 defines whitespace as:
A whitespace consists of space or tab characters
https://msdn.microsoft.com/en-us/library/czcad93k.aspx
But on POSIX-derived platforms, the strtoul* functions define whitespace as isspace(), which is:
\t \n \v \f \r " "
This affects at least tor_parse_uint64, and perhaps other functions.
It could mean that some numbers are interpreted differently by Windows and POSIX platforms, but since dir-spec.txt defines whitespace as space or tab, this is unlikely.

https://gitlab.torproject.org/tpo/core/tor/-/issues/20531
rewrite_node_address_for_bridge and networkstatus_set_current_consensus can conflict (teor, updated 2022-09-01)

When a relay is configured as a bridge, both rewrite_node_address_for_bridge and networkstatus_set_current_consensus update that relay's node, as do any descriptor downloads.
This can be problematic if the details they use differ.
One user reports this can lead to direct connections being made to the relay/bridge, even when a pluggable transport is configured. We have been unable to confirm this.
https://lists.torproject.org/pipermail/tor-dev/2016-November/011618.html

https://gitlab.torproject.org/tpo/core/tor/-/issues/19984
Use a better set of comparison/evaluation functions for deciding which connections to kill when OOS (Nick Mathewson, updated 2023-06-15)

Our existing OOS code kills low-priority OR connections. But really, we need to look at all connections that an adversary might be able to create (especially dir and exit connections), or else an adversary will be able to open a bunch of those and force us to kill as many OR connections as they want.
This problem is the reason that DisableOOSCheck is now on by default.

https://gitlab.torproject.org/tpo/core/tor/-/issues/19853
ServerDNSAllowNonRFC953Hostnames affects clients, and AllowNonRFC953Hostnames affects servers (teor, updated 2022-02-07)

It looks like the code and man page entry for ServerDNSAllowNonRFC953Hostnames were copied straight from AllowNonRFC953Hostnames, which is the equivalent client option.
I think this is ok as-is, because even though both options affect both client and server, tor instances typically only run as clients or servers, not both.
However, the manual page entries could be updated to clarify that the options are synonyms, and affect both clients and exits.