Trac issues
https://gitlab.torproject.org/legacy/trac/-/issues

Issue: prop224: Implement stealth client authorization
https://gitlab.torproject.org/legacy/trac/-/issues/20742
Updated: 2020-06-13T15:03:39Z
Author: George Kadianakis

prop224 currently does not specify stealth client authorization.

This is a feature from `rend-spec.txt` which makes the HS create a unique onion address for each authorized client. This way revoked clients cannot get presence information about the hidden service, since they don't know the onion addresses of other clients.

This is useful for cases where authorized clients have a chance of turning adversarial and there is a need for total revocation.
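For intuition, here is a minimal sketch of what "a unique onion address per authorized client" means in practice. It is purely illustrative, not the (unspecified) prop224 stealth-auth design: it uses the v3 address encoding from rend-spec-v3, with random bytes standing in for real per-client ed25519 public keys.

```python
import base64
import hashlib
import os

def onion_v3_address(pubkey: bytes) -> str:
    # v3 address encoding from rend-spec-v3:
    #   base32(PUBKEY | CHECKSUM | VERSION) + ".onion"
    #   CHECKSUM = SHA3-256(".onion checksum" | PUBKEY | VERSION)[:2]
    version = b"\x03"
    checksum = hashlib.sha3_256(b".onion checksum" + pubkey + version).digest()[:2]
    return base64.b32encode(pubkey + checksum + version).decode().lower() + ".onion"

# Stand-ins for per-client keys: each authorized client would get its own
# keypair from the service, and hence its own onion address.
alice_addr = onion_v3_address(os.urandom(32))
bob_addr = onion_v3_address(os.urandom(32))
assert alice_addr != bob_addr  # revoking bob reveals nothing about alice's address
```

Revoking a client then just means retiring that client's address; the addresses handed to the remaining clients stay unknown to the revoked one.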
tl;dr: We need to specify stealth auth in prop224, and implement it.

Milestone: Tor: unspecified
Assignee: George Kadianakis

Issue: Clients with NoIPv4Traffic should only choose IPv6-supporting Exits
https://gitlab.torproject.org/legacy/trac/-/issues/21346
Updated: 2020-06-13T15:05:48Z
Author: teor

Tor logs a warning when this happens in connection_ap_get_begincell_flags(), but it should actually fail.
Earlier in the process, it should choose an IPv6-supporting exit when NoIPv4Traffic is set (and choose an IPv4-supporting exit when NoIPv6Traffic is set).

Milestone: Tor: 0.3.5.x-final
Assignee: Neel Chauhan <neel@neelc.org>

Issue: circuit_can_use_tap() should only allow TAP for v2 onion services
https://gitlab.torproject.org/legacy/trac/-/issues/24509
Updated: 2020-06-13T15:18:23Z
Author: teor

circuit_can_use_tap() checks the circuit purpose to make sure that it's an onion service circuit. But it should also check that the circuit is for a v2 onion service before allowing TAP.
There should be a field in the circuit or extend_info that we can use for this.
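A Python model of the intended change (the real code is C inside Tor; the `hs_version` field and the purpose names here are illustrative stand-ins for whatever circuit/extend_info field ends up carrying the onion-service version):

```python
# Hypothetical model: the current check looks only at circuit purpose,
# so a v3 onion-service circuit could still be allowed to use TAP.
ONION_SERVICE_PURPOSES = {"C_INTRODUCING", "S_CONNECT_REND"}

def can_use_tap_current(circ: dict) -> bool:
    return circ["purpose"] in ONION_SERVICE_PURPOSES

def can_use_tap_fixed(circ: dict) -> bool:
    # Additionally require that the circuit serves a *v2* onion service.
    return can_use_tap_current(circ) and circ.get("hs_version") == 2

v3_circ = {"purpose": "C_INTRODUCING", "hs_version": 3}
assert can_use_tap_current(v3_circ)    # too permissive today
assert not can_use_tap_fixed(v3_circ)  # what this ticket asks for
```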
This is security-low, because it's a defence-in-depth mechanism that doesn't provide as much defence as we thought.

Milestone: Tor: unspecified

Issue: main event-loop spins consuming 100% of a CPU core running scheduler_set_channel_state
https://gitlab.torproject.org/legacy/trac/-/issues/25461
Updated: 2020-06-13T15:22:53Z
Author: Dhalgren

Lately I have observed my exit hitting 100% CPU on the main event-loop thread, sometimes continuously, sometimes cyclically. I captured a full-debug log of a recent cyclical event where CPU started at 30% and rose to 100%, covering about one cycle, chopped the 1 GB log into eight slices, and took a simple function call-count histogram. What's notable is not an increase of calls during saturation, but a reduction in several that seem to relate to connection-close events (conn_close_if_marked, flush_chunk). The left column is for the first slice, where CPU was 30%; the right column is for the fourth slice, where CPU was 100%. Functions with fewer than 1000 calls are not included below, but complete histograms are attached. I wrote about this on tor-relays:
https://lists.torproject.org/pipermail/tor-relays/2018-March/014730.html
This might be an attack of some kind, or perhaps a misbehavior related to the KIST scheduler.
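The histogram step can be reproduced with a short script along these lines (a sketch, assuming Tor's usual `[info] function_name(): message` log-line shape):

```python
import re
from collections import Counter

# Matches the function name in lines like:
#   Apr 11 14:47:21.000 [info] flush_chunk(): flushed a chunk
FUNC_RE = re.compile(r"\[(?:debug|info|notice)\] (\w+)\(\)")

def call_histogram(lines, min_calls=1000):
    """Count log lines per function name, dropping rare functions."""
    counts = Counter()
    for line in lines:
        m = FUNC_RE.search(line)
        if m:
            counts[m.group(1)] += 1
    return {fn: n for fn, n in counts.items() if n >= min_calls}

# Usage: one histogram per log slice, then compare the counts side by side.
# with open("slice1.log") as f:
#     print(call_histogram(f))
```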
```
function                                    slice 1 (30% CPU)   slice 4 (100% CPU)
append_cell_to_circuit_queue                             6787                 7280
channel_flush_from_first_active_circuit                  6781                 7190
channel_process_cell                                    11904                11813
channel_write_packed_cell                              120301               126330
channel_write_to_kernel                                  8588                10048
circuit_consider_stop_edge_reading                     146965               152665
circuit_get_by_circid_channel_impl                      14128                13468
circuit_receive_relay_cell                              11483                11341
circuit_resume_edge_reading                              1203                 1231
conn_close_if_marked                                    39033                  779
conn_read_callback                                      14743                15645
conn_write_callback                                      4531                 4447
connection_add_impl                                      1023                  739
connection_bucket_refill_helper                         14787                15842
connection_buf_read_from_socket                         16196                17152
connection_connect                                       1016                  732
connection_connect_sockaddr                              1016                  732
connection_edge_package_raw_inbuf                      237303               255347
connection_edge_process_relay_cell                      22219                22332
connection_exit_begin_conn                               3165                 2315
connection_exit_connect                                  1050                  772
connection_handle_write_impl                             9240                10539
connection_or_process_cells_from_inbuf                  20042                20448
flush_chunk                                             38192                   12
flush_chunk_tls                                         22283                24061
free_outbuf_info_by_ent                                  8588                10047
outbuf_table_add                                         8588                10014
read_to_chunk                                            6856                 7254
relay_lookup_conn                                        8459                 8525
relay_send_command_from_edge_                          119963               128738
rep_hist_note_exit_bytes                                13913                14534
scheduler_set_channel_state                            126896               133353
update_socket_info                                       6719                 7160
update_socket_written                                  120297               126327
```

Milestone: Tor: unspecified

Issue: improve continuous integration support
https://gitlab.torproject.org/legacy/trac/-/issues/25550
Updated: 2020-06-13T15:23:20Z
Author: Taylor Yu
Milestone: Tor: unspecified

Issue: Write trac templates for bug reports / other tickets, and link them from somewhere useful
https://gitlab.torproject.org/legacy/trac/-/issues/26333
Updated: 2020-06-13T15:26:31Z
Author: Nick Mathewson

It would be awesome if we could have templates for opening better trac tickets, in a way that actually helps people include all the necessary info and not have a hard time figuring out what they're supposed to say.
This might be an "internal services" ticket, but before we can get it there, we need to figure out what we actually want.

Milestone: Tor: unspecified

Issue: Investigate how much our CI performance would improve (if at all) with paid builders
https://gitlab.torproject.org/legacy/trac/-/issues/26334
Updated: 2020-06-13T15:26:32Z
Author: Nick Mathewson
Milestone: Tor: unspecified

Issue: Consider end-to-end introduction ACKs
https://gitlab.torproject.org/legacy/trac/-/issues/27842
Updated: 2020-06-13T15:32:07Z
Author: George Kadianakis

Right now introduction points send an ACK back to the client without making sure that the service heard the introduction request or can act on it.
Perhaps in the future we can do end-to-end ACKs from the service to the client as a means of minimizing protocol bugs and issues.

Milestone: Tor: unspecified

Issue: Circuit creation loop when primary guards are unreachable
https://gitlab.torproject.org/legacy/trac/-/issues/25783
Updated: 2020-06-13T15:32:14Z
Author: George Kadianakis

I was offline for a few hours today while Tor was running. At some point I went back online, but I noticed that Tor was stuck in a circuit-creation loop which it did not exit until it marked one of its primary guards as retriable (which can take lots of time). While in the loop, Tor made one circuit per second.
I spent a good part of today debugging this. I think the issue is that our guard algorithm changes the circuit state of circuits that don't use primary guards to `CIRCUIT_STATE_GUARD_WAIT` in `circuit_build_no_more_hops()`. Then in `circuit_expire_building()` we consider those waiting circuits as not `CIRCUIT_STATE_OPEN` and expire them quickly with the 2s build timeout. Then we make more, and then expire them, ad infinitum, until a primary guard becomes retriable and breaks the cycle.
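A toy model of the suspected cycle (illustrative only, not Tor's actual code): every replacement circuit lands in the guard-wait state, which expiry treats as still-building and kills once it exceeds the build timeout, so predictive building immediately launches another.

```python
BUILD_TIMEOUT = 2  # seconds, matching the 2s build timeout in the logs

def simulate(seconds: int) -> int:
    """Return how many circuits the toy loop creates in `seconds`."""
    circuits = []  # (created_at, state) pairs
    created = 0
    for now in range(seconds):
        # GUARD_WAIT is not OPEN, so expiry treats it as a slow build
        # and discards it once it is older than the build timeout.
        circuits = [(t, s) for (t, s) in circuits
                    if not (s == "GUARD_WAIT" and now - t >= BUILD_TIMEOUT)]
        # Predictive building notices the shortage and launches a new one.
        if not circuits:
            circuits.append((now, "GUARD_WAIT"))
            created += 1
    return created

# A fresh circuit every BUILD_TIMEOUT seconds, forever, until a primary
# guard becomes retriable and circuits can leave GUARD_WAIT.
```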
Here is the loop:
----
Tor thinks it needs a pre-emptive circuit:
```
Apr 11 14:47:21.000 [info] circuit_build_times_set_timeout(): Set circuit build timeout to 2s (1500.000000ms, 60000.000000ms, Xm: 525, a: 2.177536, r: 0.121588) based on 403 circuit times
Apr 11 14:47:21.000 [info] circuit_predict_and_launch_new(): Have 4 clean circs (3 internal), need another exit circ.
Apr 11 14:47:21.000 [info] origin_circuit_new(): Circuit 139 chose an idle timeout of 2967 based on 2875 seconds of predictive building remaining.
```
Tor picks a guard, picks timeouts, and connects to it:
```
Apr 11 14:47:21.000 [warn] No primary guards available. Selected confirmed guard ENiGMA ($42B4F52C5B11E4D39855F654955425B0D5A0598B) for circuit. Will try other guards before using this circuit.
Apr 11 14:47:22.000 [warn] Recorded success for confirmed guard ENiGMA ($42B4F52C5B11E4D39855F654955425B0D5A0598B)
Apr 11 14:47:22.000 [info] circuit_build_no_more_hops(): circuit built!
```
Tor marks the circuit as timed out by calling `circuit_build_times_mark_circ_as_measurement_only()` in `circuit_expire_building()` and starts making a new predictive circuit (loop!):
```
Apr 11 14:47:23.000 [info] circuit_expire_building(): Deciding to count the timeout for circuit 139
Apr 11 14:47:23.000 [info] circuit_predict_and_launch_new(): Have 4 clean circs (3 internal), need another exit circ.
```
After a minute, Tor finally ditches the circuit, which has been repurposed as `CIRCUIT_PURPOSE_C_MEASURE_TIMEOUT`:
```
Apr 11 14:48:22.000 [info] circuit_expire_building(): Deciding to count the timeout for circuit 139
Apr 11 14:48:22.000 [info] circuit_expire_building(): Abandoning circ 139 5.9.121.207:443:2179853168 (state 0,3:waiting to see how other guards perform, purpose 14, len 3)
Apr 11 14:48:22.000 [info] pathbias_check_close(): Circuit 139 remote-closed without successful use for reason -3. Circuit purpose 14 currently 0,waiting to see how other guards perform. Len 3.
```

Milestone: Tor: unspecified

Issue: brainstorm ways to let Tor clients use yesterday's consensus more safely
https://gitlab.torproject.org/legacy/trac/-/issues/2681
Updated: 2022-03-22T13:28:40Z
Author: Roger Dingledine

Right now Tor clients won't use a consensus that's 25 hours old. But if the directory authorities don't agree on a consensus for a day, things can go bad. We need to investigate other tradeoffs in this space than the one we've currently picked.
For instance: if you got your directory consensus info when it was valid, but you haven't been able to get any new consensus, perhaps you should be more forgiving about the timestamp on the consensus you have. That's a slightly different scenario than believing a new consensus that's 48 hours old.
Another option is just to change 24 to 48, which probably doesn't expose clients to much greater harm, but gives us a lot more breathing room for mistakes.
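Both options can be expressed as a small acceptance predicate (a hypothetical sketch, not Tor's actual code): option one keeps the current limit but is more forgiving about a consensus we fetched while it was still valid; option two simply raises the base limit from 24 to 48 hours.

```python
def consensus_usable(age_hours: float, fetched_while_valid: bool,
                     base_limit: float = 24, valid_fetch_bonus: float = 24) -> bool:
    """Would we still use a consensus this old?

    Option 1: keep the 24h base limit, but allow extra slack (here +24h)
    for a consensus we obtained while it was valid.
    Option 2 is the degenerate case: base_limit=48, valid_fetch_bonus=0.
    """
    limit = base_limit + (valid_fetch_bonus if fetched_while_valid else 0)
    return age_hours <= limit

# A 30-hour-old consensus we fetched while it was valid is still usable...
assert consensus_usable(30, fetched_while_valid=True)
# ...but we would not newly believe one that old under the current rule.
assert not consensus_usable(30, fetched_while_valid=False)
```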
The implementation side of this will be tricky, because we'll need to make sure that clients can handle descriptors that are 36 hours out of date too. We started implementing that feature several times, but I think we've never finished it.

Milestone: Tor: unspecified